CodeGoat24 committed
Commit 383851e · verified · 1 Parent(s): eb86d9a

Update README.md

Files changed (1)
  1. README.md +36 -3
README.md CHANGED
@@ -1,3 +1,36 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ base_model:
+ - CodeGoat24/UnifiedReward-2.0-qwen-7b
+ ---
+
+ # UnifiedReward-Edit-qwen-7B
+ [2025/10/23] 🔥🔥🔥 We release **UnifiedReward-Edit**-7b, a unified reward model for **both Text-to-Image and Image-to-Image generation**!!
+ For the image editing reward task, our models support:
+
+ >1. Pairwise Rank: directly judge which of two edited images is better.
+ >
+ >2. Pairwise Score: assign a separate score to each image in a pair.
+ >
+ >3. Pointwise Score: rate a single image on two axes, instruction following and overall image quality.
+
+ 🚀 The image editing reward inference code is available in the [`UnifiedReward-Edit/`](https://github.com/CodeGoat24/UnifiedReward/tree/main/UnifiedReward-Edit) directory, while the T2I inference code is unchanged from our previous models. The editing training data is preprocessed from [EditScore](https://huggingface.co/datasets/EditScore/EditScore-Reward-Data) and [EditReward](https://huggingface.co/datasets/TIGER-Lab/EditReward-Data) and will be released soon. We sincerely appreciate all contributors!!
+
+ For further details, please refer to the following resources:
+ - 📰 Paper: https://arxiv.org/pdf/2503.05236
+ - 🪐 Project Page: https://codegoat24.github.io/UnifiedReward/
+ - 🤗 Model Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-models-67c3008148c3a380d15ac63a
+ - 🤗 Dataset Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-training-data-67c300d4fd5eff00fa7f1ede
+ - 👋 Point of Contact: [Yibin Wang](https://codegoat24.github.io)
+
+
+ ## Citation
+
+ ```bibtex
+ @article{unifiedreward,
+   title={Unified reward model for multimodal understanding and generation},
+   author={Wang, Yibin and Zang, Yuhang and Li, Hao and Jin, Cheng and Wang, Jiaqi},
+   journal={arXiv preprint arXiv:2503.05236},
+   year={2025}
+ }
+ ```
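
The three reward modes listed in the updated README map onto ordinary chat-style prompting of the underlying vision-language model. Below is a minimal pairwise-rank sketch using Hugging Face transformers, assuming the checkpoint follows the Qwen2.5-VL chat interface of its base model; the ranking prompt wording, the local image paths, and the decoding settings are illustrative placeholders, and the released scripts in `UnifiedReward-Edit/` should be treated as the reference implementation.

```python
# Hedged sketch: pairwise ranking of two edited images with UnifiedReward-Edit-qwen-7B.
# Assumes the checkpoint exposes the standard Qwen2.5-VL chat interface in transformers;
# the prompt text below is illustrative, not the official template from UnifiedReward-Edit/.
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

model_id = "CodeGoat24/UnifiedReward-Edit-qwen-7B"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Source image, edit instruction, and two candidate edits (hypothetical local files).
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "source.png"},
        {"type": "image", "image": "edit_A.png"},
        {"type": "image", "image": "edit_B.png"},
        {"type": "text", "text": (
            "The first image is the source image and the edit instruction is: "
            "'make the sky a sunset orange'. The next two images are candidate edits. "
            "Which edit follows the instruction better while preserving overall image "
            "quality? Answer with 'Image 1' or 'Image 2' and briefly explain."
        )},
    ],
}]

# Build the chat prompt and pack text + images into model inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, _ = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, padding=True, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens (the model's verdict).
answer = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```

Pairwise scoring and pointwise scoring would differ only in the text prompt and the number of attached images (e.g. the source plus a single edit for pointwise), not in the loading or generation code above.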