---
license: mit
base_model:
- Qwen/Qwen3-VL-32B-Instruct
---
## Model Summary
UnifiedReward-2.0-qwen3vl-32b is the first unified reward model built on Qwen/Qwen3-VL-32B-Instruct for assessing both multimodal understanding and generation. It supports pairwise ranking and pointwise scoring, and can be used for preference alignment of vision models.
For further details, please refer to the following resources:
- Paper: https://arxiv.org/pdf/2503.05236
- Project Page: https://codegoat24.github.io/UnifiedReward/
- Model Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-models-67c3008148c3a380d15ac63a
- Dataset Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-training-data-67c300d4fd5eff00fa7f1ede
- Point of Contact: Yibin Wang
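
## Quick Start

The snippet below is a minimal pointwise-scoring sketch using Hugging Face Transformers. It assumes a recent Transformers release with Qwen3-VL support, that the checkpoint is published as `CodeGoat24/UnifiedReward-2.0-qwen3vl-32b`, and that the prompt wording is illustrative rather than the exact evaluation template used in the paper; adjust these to the released assets.

```python
# Minimal sketch: pointwise scoring of a generated image.
# The repo id, image path, and prompt wording are assumptions for illustration.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "CodeGoat24/UnifiedReward-2.0-qwen3vl-32b"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

image = Image.open("generated_sample.png")
question = (
    "You are given a text prompt and an image generated from it.\n"
    "Prompt: a red bicycle leaning against a brick wall\n"
    "Rate how well the image matches the prompt on a scale of 1 to 10 "
    "and briefly justify the score."
)
messages = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": question}]}
]

# Build the chat-formatted text, then pack text + image into model inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens and decode only the newly generated judgment.
reply = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(reply)
```

For pairwise ranking, the same pattern applies: pass the two candidate images and ask the model which one better satisfies the prompt.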
## Compared with Current Reward Models
| Reward Model | Method | Image Generation | Image Understanding | Video Generation | Video Understanding |
|---|---|---|---|---|---|
| PickScore | Point | √ | | | |
| HPS | Point | √ | | | |
| ImageReward | Point | √ | | | |
| LLaVA-Critic | Pair/Point | | √ | | |
| IXC-2.5-Reward | Pair/Point | | √ | | √ |
| VideoScore | Point | | | √ | |
| LiFT | Point | | | √ | |
| VisionReward | Point | √ | | √ | |
| VideoReward | Point | | | √ | |
| UnifiedReward (Ours) | Pair/Point | √ | √ | √ | √ |
## Citation
    @article{unifiedreward,
      title={Unified reward model for multimodal understanding and generation},
      author={Wang, Yibin and Zang, Yuhang and Li, Hao and Jin, Cheng and Wang, Jiaqi},
      journal={arXiv preprint arXiv:2503.05236},
      year={2025}
    }