arxiv:2511.15065

Reasoning via Video: The First Evaluation of Video Models' Reasoning Abilities through Maze-Solving Tasks

Published on Nov 19 · Submitted by taesiri on Nov 20
#2 Paper of the day
Authors:
Abstract

AI-generated summary

VR-Bench evaluates video models' spatial reasoning capabilities through maze-solving tasks, demonstrating that these models excel in spatial perception and reasoning, outperforming VLMs and benefiting from diverse sampling during inference.

Video Models have achieved remarkable success in high-fidelity video generation with coherent motion dynamics. Analogous to the development from text generation to text-based reasoning in language modeling, the development of video models motivates us to ask: Can video models reason via video generation? Compared with the discrete text corpus, video grounds reasoning in explicit spatial layouts and temporal continuity, which serves as an ideal substrate for spatial reasoning. In this work, we explore the reasoning-via-video paradigm and introduce VR-Bench, a comprehensive benchmark designed to systematically evaluate video models' reasoning capabilities. Grounded in maze-solving tasks that inherently require spatial planning and multi-step reasoning, VR-Bench contains 7,920 procedurally generated videos across five maze types and diverse visual styles. Our empirical analysis demonstrates that SFT can efficiently elicit the reasoning ability of video models. Video models exhibit stronger spatial perception during reasoning, outperforming leading VLMs and generalizing well across diverse scenarios, tasks, and levels of complexity. We further discover a test-time scaling effect, where diverse sampling during inference improves reasoning reliability by 10–20%. These findings highlight the unique potential and scalability of reasoning via video for spatial reasoning tasks.

Community

Paper submitter

Video Models have achieved remarkable success in high-fidelity video generation with coherent motion dynamics. Analogous to the development from text generation to text-based reasoning in language modeling, the development of video models motivates us to ask: Can video models reason via video generation? Compared with the discrete text corpus, video grounds reasoning in explicit spatial layouts and temporal continuity, which serves as an ideal substrate for spatial reasoning. In this work, we explore the reasoning-via-video paradigm and introduce VR-Bench, a comprehensive benchmark designed to systematically evaluate video models' reasoning capabilities. Grounded in maze-solving tasks that inherently require spatial planning and multi-step reasoning, VR-Bench contains 7,920 procedurally generated videos across five maze types and diverse visual styles. Our empirical analysis demonstrates that SFT can efficiently elicit the reasoning ability of video models. Video models exhibit stronger spatial perception during reasoning, outperforming leading VLMs and generalizing well across diverse scenarios, tasks, and levels of complexity. We further discover a test-time scaling effect, where diverse sampling during inference improves reasoning reliability by 10–20%. These findings highlight the unique potential and scalability of reasoning via video for spatial reasoning tasks.

We have constructed a comprehensive evaluation system that facilitates objective and quantifiable assessment by tracking target trajectories in generated videos and comparing them against the ground truth. A distinctive feature of our benchmark is its ability to provide explicit reward signals for future paradigms, such as Video RL.
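For concreteness, here is a minimal sketch of how trajectory-based verification of this kind could work. The helper names (`extract_agent_positions` would feed `predicted_cells`), the grid-cell representation, and the exact-match-plus-prefix scoring are illustrative assumptions, not the benchmark's actual implementation.

```python
# Minimal sketch of trajectory-based verification for a generated maze-solving video.
# Assumptions (not from the paper): the agent's position per frame has already been
# detected and snapped to grid cells; a solution counts as correct when the
# deduplicated cell sequence equals the ground-truth path.

from typing import List, Tuple

Cell = Tuple[int, int]  # (row, col) in the maze grid


def dedupe_consecutive(cells: List[Cell]) -> List[Cell]:
    """Collapse consecutive repeats so a traced path becomes a sequence of distinct moves."""
    out: List[Cell] = []
    for c in cells:
        if not out or out[-1] != c:
            out.append(c)
    return out


def verify_trajectory(predicted_cells: List[Cell], ground_truth: List[Cell]) -> float:
    """Return 1.0 if the traced path equals the ground-truth path, otherwise a partial
    overlap score in [0, 1) -- usable both as an accuracy metric and as an RL reward."""
    pred = dedupe_consecutive(predicted_cells)
    if pred == ground_truth:
        return 1.0
    # Partial credit: longest common prefix relative to the ground-truth length.
    prefix = 0
    for p, g in zip(pred, ground_truth):
        if p != g:
            break
        prefix += 1
    return prefix / max(len(ground_truth), 1)


if __name__ == "__main__":
    gt = [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
    traced = [(0, 0), (0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
    print(verify_trajectory(traced, gt))  # 1.0
```

Because the score is computed automatically from the video itself, the same function can serve as a verifier during evaluation and as a reward signal in a Video RL loop.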

As posited by Jason Wei's 'Verifier's Law,' the ease of training AI to perform a task is proportional to how verifiable that task is. Because VR-Bench offers objective, rapid, scalable, and low-noise verification, it serves as a highly effective benchmark within this framework.
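The reported 10–20% gain from diverse sampling suggests a simple best-of-N loop at inference time: generate several candidate solution videos, verify each traced path, and accept as soon as one verifies. The sketch below illustrates that idea under assumed interfaces (`generate_video`, `extract_path`); it reuses the `verify_trajectory` helper sketched above and is not the authors' pipeline.

```python
# Illustrative best-of-N test-time scaling loop built on top of a verifier.
# `generate_video` and `extract_path` are assumed interfaces, not the paper's API.

import random
from typing import Callable, List, Tuple

Cell = Tuple[int, int]


def best_of_n(
    generate_video: Callable[[int], object],        # maps a random seed to a generated video
    extract_path: Callable[[object], List[Cell]],   # traces the agent through the frames
    ground_truth: List[Cell],
    verify: Callable[[List[Cell], List[Cell]], float],
    n_samples: int = 8,
) -> float:
    """Sample n candidate solution videos with different seeds and return the best
    verification score; reliability improves because only one sample needs to succeed."""
    best = 0.0
    for _ in range(n_samples):
        seed = random.randrange(2**31)
        video = generate_video(seed)
        score = verify(extract_path(video), ground_truth)
        best = max(best, score)
        if best == 1.0:  # early exit once a verified solution is found
            break
    return best
```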


Models citing this paper 1

Datasets citing this paper 1

Spaces citing this paper 0


Collections including this paper 1