arXiv:2509.18095

MetaEmbed: Scaling Multimodal Retrieval at Test-Time with Flexible Late Interaction

Published on Sep 22 · Submitted by Zilin Xiao on Sep 23
Abstract

AI-generated summary: MetaEmbed is a new framework for multimodal retrieval that uses learnable Meta Tokens to provide compact yet expressive multi-vector embeddings, letting users trade retrieval quality against efficiency at test time.

Universal multimodal embedding models have achieved great success in capturing semantic relevance between queries and candidates. However, current methods either condense queries and candidates into a single vector, potentially limiting the expressiveness for fine-grained information, or produce too many vectors that are prohibitively expensive for multi-vector retrieval. In this work, we introduce MetaEmbed, a new framework for multimodal retrieval that rethinks how multimodal embeddings are constructed and interacted with at scale. During training, a fixed number of learnable Meta Tokens are appended to the input sequence. At test time, their last-layer contextualized representations serve as compact yet expressive multi-vector embeddings. Through the proposed Matryoshka Multi-Vector Retrieval training, MetaEmbed learns to organize information by granularity across multiple vectors. As a result, we enable test-time scaling in multimodal retrieval, where users can balance retrieval quality against efficiency demands by selecting the number of tokens used for indexing and retrieval interactions. Extensive evaluations on the Massive Multimodal Embedding Benchmark (MMEB) and the Visual Document Retrieval Benchmark (ViDoRe) confirm that MetaEmbed achieves state-of-the-art retrieval performance while scaling robustly to models with 32B parameters.
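As a concrete illustration of the mechanism the abstract describes, here is a minimal PyTorch sketch, not the authors' code: learnable Meta Tokens are appended to the input, only their last-layer states are kept as the multi-vector embedding, and scoring uses ColBERT-style late interaction (MaxSim). The names `MetaEncoder`, `backbone`, and `num_meta_tokens`, and the backbone's call signature, are all illustrative assumptions.

```python
# Minimal sketch of the MetaEmbed idea as described in the abstract;
# all module names and signatures are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaEncoder(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_dim: int, num_meta_tokens: int = 16):
        super().__init__()
        # Assumed: backbone maps (batch, seq, hidden) embeddings to last hidden states.
        self.backbone = backbone
        self.meta_tokens = nn.Parameter(torch.randn(num_meta_tokens, hidden_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden_dim), already-embedded inputs.
        batch = token_embeds.size(0)
        meta = self.meta_tokens.unsqueeze(0).expand(batch, -1, -1)
        hidden = self.backbone(torch.cat([token_embeds, meta], dim=1))
        # Keep only the Meta Token positions as the multi-vector embedding.
        vecs = hidden[:, -self.meta_tokens.size(0):, :]
        return F.normalize(vecs, dim=-1)

def late_interaction_score(q: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    # q: (Bq, Nq, D) query vectors; c: (Bc, Nc, D) candidate vectors.
    # MaxSim: for each query vector, take its best match among candidate
    # vectors, then sum over query vectors.
    sim = torch.einsum("qnd,cmd->qcnm", q, c)   # (Bq, Bc, Nq, Nc)
    return sim.max(dim=-1).values.sum(dim=-1)   # (Bq, Bc)
```

Because only the Meta Token positions are kept, the index footprint is fixed by `num_meta_tokens` rather than by input length, which is what keeps multi-vector retrieval affordable here.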

Community

Paper author and submitter (Zilin Xiao):

Meta Superintelligence Labs presents MetaEmbed: Scalable multimodal retrieval

• Flexible late interaction via Meta Tokens
• Test-time scaling: trade off retrieval accuracy vs efficiency
• SOTA on MMEB + ViDoRe, robust up to 32B models
• Matryoshka training → coarse-to-fine multi-vector embeddings (see the sketch below)
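Here is a hypothetical sketch of how such a Matryoshka-style objective could look, reusing `late_interaction_score` from the sketch above: an in-batch contrastive loss is averaged over nested Meta Token prefixes, so shorter prefixes remain usable on their own. The prefix lengths and temperature are assumptions, not values from the paper.

```python
# Hypothetical Matryoshka multi-vector objective; hyperparameters are assumed.
import torch
import torch.nn.functional as F

def matryoshka_contrastive_loss(q_vecs, c_vecs, prefix_lengths=(1, 2, 4, 8, 16), tau=0.05):
    # q_vecs, c_vecs: (B, N, D) normalized Meta Token embeddings for B paired
    # query/candidate examples; other rows in the batch act as negatives.
    targets = torch.arange(q_vecs.size(0), device=q_vecs.device)
    losses = []
    for k in prefix_lengths:
        # Score with only the first k Meta Token vectors on both sides.
        scores = late_interaction_score(q_vecs[:, :k], c_vecs[:, :k]) / tau
        losses.append(F.cross_entropy(scores, targets))
    return torch.stack(losses).mean()
```

Averaging the loss over nested prefixes is what would let a user later index with, say, the first 4 of 16 vectors and still get a meaningful ranking, which is the test-time accuracy/efficiency trade-off the bullets describe.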

Summary from @arankomatsuzaki, many thanks!



Models citing this paper 0

No model linking this paper


Datasets citing this paper 0

No dataset linking this paper


Spaces citing this paper 0

No Space linking this paper


Collections including this paper 2