---
license: mit
base_model:
- Qwen/Qwen3-VL-32B-Instruct
---

## Model Summary

`UnifiedReward-2.0-qwen3vl-32b` is the first unified reward model built on [Qwen/Qwen3-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-32B-Instruct) for assessing both multimodal understanding and generation. It supports pairwise ranking as well as pointwise scoring, and can be used for vision-model preference alignment.

For further details, please refer to the following resources:
- 📰 Paper: https://arxiv.org/pdf/2503.05236
- 🪐 Project Page: https://codegoat24.github.io/UnifiedReward/
- 🤗 Model Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-models-67c3008148c3a380d15ac63a
- 🤗 Dataset Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-training-data-67c300d4fd5eff00fa7f1ede
- 👋 Point of Contact: [Yibin Wang](https://codegoat24.github.io)
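To illustrate how the two assessment modes differ, here is a minimal sketch of prompt construction for pairwise ranking versus pointwise scoring. The template strings and the `build_prompt` helper below are hypothetical, written only to show the shape of each mode; they are not the model's official chat format (see the project page for the exact prompts used in training and evaluation).

```python
# Hypothetical prompt templates sketching the two assessment modes.
# These are illustrative only; the exact prompts used by UnifiedReward
# are documented on the project page.

PAIRWISE_TEMPLATE = (
    "You are given a caption and two generated images, Image 1 and Image 2.\n"
    "Caption: {caption}\n"
    "Which image better matches the caption? Answer 'Image 1' or 'Image 2'."
)

POINTWISE_TEMPLATE = (
    "You are given a caption and one generated image.\n"
    "Caption: {caption}\n"
    "Rate how well the image matches the caption on a scale of 1 to 10. "
    "Answer with a single number."
)

def build_prompt(caption: str, mode: str = "point") -> str:
    """Return an assessment prompt for the requested mode ('pair' or 'point')."""
    if mode == "pair":
        return PAIRWISE_TEMPLATE.format(caption=caption)
    if mode == "point":
        return POINTWISE_TEMPLATE.format(caption=caption)
    raise ValueError(f"unknown mode: {mode!r}")

print(build_prompt("a red bicycle leaning against a wall", mode="pair"))
```

In practice the chosen prompt would be sent, together with the image(s), through the model's chat template; the pairwise output is parsed as a preference label, while the pointwise output is parsed as a scalar score.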

## 🏁 Compared with Current Reward Models

| Reward Model | Method | Image Generation | Image Understanding | Video Generation | Video Understanding |
| :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |
| [PickScore](https://github.com/yuvalkirstain/PickScore) | Point | √ | | | |
| [HPS](https://github.com/tgxs002/HPSv2) | Point | √ | | | |
| [ImageReward](https://github.com/THUDM/ImageReward) | Point | √ | | | |
| [LLaVA-Critic](https://huggingface.co/lmms-lab/llava-critic-7b) | Pair/Point | | √ | | |
| [IXC-2.5-Reward](https://github.com/InternLM/InternLM-XComposer) | Pair/Point | | √ | | √ |
| [VideoScore](https://github.com/TIGER-AI-Lab/VideoScore) | Point | | | √ | |
| [LiFT](https://github.com/CodeGoat24/LiFT) | Point | | | √ | |
| [VisionReward](https://github.com/THUDM/VisionReward) | Point | √ | | √ | |
| [VideoReward](https://github.com/KwaiVGI/VideoAlign) | Point | | | √ | |
| UnifiedReward (Ours) | Pair/Point | √ | √ | √ | √ |

## Citation

```bibtex
@article{unifiedreward,
  title={Unified reward model for multimodal understanding and generation},
  author={Wang, Yibin and Zang, Yuhang and Li, Hao and Jin, Cheng and Wang, Jiaqi},
  journal={arXiv preprint arXiv:2503.05236},
  year={2025}
}
```