Revisiting Multimodal Positional Encoding in Vision-Language Models
Abstract
Multimodal position encoding is essential for vision-language models, yet it has received little systematic investigation. We conduct a comprehensive analysis of multimodal Rotary Positional Embedding (RoPE) by examining its two core components: position design and frequency allocation. Through extensive experiments, we identify three key guidelines: positional coherence, full frequency utilization, and preservation of textual priors, which together ensure unambiguous layout, rich representation, and faithful transfer from the pre-trained LLM. Based on these insights, we propose Multi-Head RoPE (MHRoPE) and MRoPE-Interleave (MRoPE-I), two simple, plug-and-play variants that require no architectural changes. Our methods consistently outperform existing approaches across diverse benchmarks, with significant improvements in both general and fine-grained multimodal understanding. Code will be available at https://github.com/JJJYmmm/Multimodal-RoPEs.
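To make the frequency-allocation idea concrete, here is a minimal PyTorch sketch of what an interleaved 3D RoPE could look like. It assumes MRoPE-I cycles the temporal/height/width position axes across rotary channels in round-robin order, rather than assigning each axis a contiguous block of channels as in standard MRoPE; the function name `mrope_interleave_angles`, the shapes, and the text-token convention (identical t/h/w indices) are illustrative assumptions, not taken from the paper.

```python
import torch

def mrope_interleave_angles(pos_thw, head_dim=128, theta=10000.0):
    """Rotation angles for an interleaved 3D RoPE (hypothetical sketch).

    pos_thw: LongTensor of shape (3, seq_len) holding the (t, h, w)
    position index of each token; text tokens repeat the same index on
    all three axes. Returns angles of shape (seq_len, head_dim // 2).
    """
    half = head_dim // 2
    # Standard RoPE inverse frequencies, one per rotary channel pair.
    inv_freq = 1.0 / (
        theta ** (torch.arange(half, dtype=torch.float32) * 2.0 / head_dim)
    )
    # Chunked MRoPE assigns contiguous channel blocks to t, h, w;
    # interleaving instead cycles t, h, w, t, h, w, ... across channels,
    # so every axis is encoded at both low and high frequencies.
    axis_of_channel = torch.arange(half) % 3        # (half,) values in {0,1,2}
    pos_per_channel = pos_thw[axis_of_channel]      # (half, seq_len)
    return pos_per_channel.t().float() * inv_freq   # (seq_len, half)

# Example: two text tokens followed by a 2x2 grid of image patches.
t = torch.tensor([0, 1, 2, 2, 2, 2])
h = torch.tensor([0, 1, 0, 0, 1, 1])
w = torch.tensor([0, 1, 0, 1, 0, 1])
angles = mrope_interleave_angles(torch.stack([t, h, w]))  # shape (6, 64)
```

Under this reading, chunked allocation confines each spatial axis to one band of frequencies, while interleaving lets every axis span the full spectrum, which is one plausible interpretation of the "full frequency utilization" guideline above.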
Community
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- Positional Preservation Embedding for Multimodal Large Language Models (2025)
- Head-wise Adaptive Rotary Positional Encoding for Fine-Grained Image Generation (2025)
- Improving GUI Grounding with Explicit Position-to-Coordinate Mapping (2025)
- AttAnchor: Guiding Cross-Modal Token Alignment in VLMs with Attention Anchors (2025)
- From Pixels to Words -- Towards Native Vision-Language Primitives at Scale (2025)
- HoPE: Hyperbolic Rotary Positional Encoding for Stable Long-Range Dependency Modeling in Large Language Models (2025)
- Visual Representation Alignment for Multimodal Large Language Models (2025)
