---
library_name: transformers
license: apache-2.0
datasets:
- TIGER-Lab/M-BEIR
base_model:
- openai/clip-vit-large-patch14
pipeline_tag: visual-document-retrieval
---

# Model Card: ReT-2

Official implementation of ReT-2: Recurrence Meets Transformers for Universal Multimodal Retrieval. This model features visual and textual backbones based on [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14). The backbones have been fine-tuned on the M-BEIR dataset.
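### Usage

The snippet below is a minimal, hypothetical usage sketch, not the official inference code: the Hub checkpoint id (`aimagelab/ReT-2`) and the `get_query_embeddings`/`get_passage_embeddings` method names are assumptions for illustration, while loading via `AutoModel.from_pretrained(..., trust_remote_code=True)` follows the standard pattern for custom Transformers models. See the official repository for the actual inference API.

```python
# Hypothetical usage sketch. The checkpoint id and the embedding-method
# names are illustrative assumptions; see https://github.com/aimagelab/ReT-2
# for the actual inference API.
import torch
from PIL import Image
from transformers import AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed Hub id; replace with the actual ReT-2 checkpoint name.
model = AutoModel.from_pretrained("aimagelab/ReT-2", trust_remote_code=True)
model = model.to(device).eval()

with torch.no_grad():
    # Encode a multimodal (image + text) query.
    query_emb = model.get_query_embeddings(
        text="What breed is this dog?",
        image=Image.open("dog.jpg"),
    )
    # Encode a candidate multimodal document.
    doc_emb = model.get_passage_embeddings(
        text="The Labrador Retriever is a British breed of gun dog.",
        image=Image.open("labrador.jpg"),
    )

# Rank candidate documents by cosine similarity with the query.
score = torch.nn.functional.cosine_similarity(query_emb, doc_emb, dim=-1)
print(score)
```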
### Model Sources

- **Repository:** https://github.com/aimagelab/ReT-2
- **Paper:** [Recurrence Meets Transformers for Universal Multimodal Retrieval](https://arxiv.org/abs/2509.08897)

### Training Data

[TIGER-Lab/M-BEIR](https://huggingface.co/datasets/TIGER-Lab/M-BEIR)

## Citation

```
@article{caffagni2025recurrencemeetstransformers,
  title={{Recurrence Meets Transformers for Universal Multimodal Retrieval}},
  author={Davide Caffagni and Sara Sarto and Marcella Cornia and Lorenzo Baraldi and Rita Cucchiara},
  journal={arXiv preprint arXiv:2509.08897},
  year={2025}
}
```