MathSE: Improving Multimodal Mathematical Reasoning via Self-Evolving Iterative Reflection and Reward-Guided Fine-Tuning
Abstract
A proposed Mathematical Self-Evolving framework iteratively refines multimodal large language models through inference, reflection, and reward-based feedback, achieving superior performance in mathematical reasoning tasks compared to existing models.
Multimodal large language models (MLLMs) have demonstrated remarkable capabilities in vision-language question-answering tasks. Despite their strengths, these models often struggle with complex reasoning tasks such as mathematical problem-solving. Previous works have focused on fine-tuning on specialized mathematical datasets. However, these datasets are typically distilled directly from teacher models, so they capture only static reasoning patterns and leave substantial gaps relative to student models. This reliance on fixed teacher-derived datasets not only restricts the model's ability to adapt to novel or more intricate questions beyond the confines of the training data, but also lacks the iterative depth needed for robust generalization. To overcome these limitations, we propose MathSE, a Mathematical Self-Evolving framework for MLLMs. In contrast to traditional one-shot fine-tuning paradigms, MathSE iteratively refines the model through cycles of inference, reflection, and reward-based feedback. Specifically, we leverage iterative fine-tuning by incorporating correct reasoning paths derived from previous-stage inference and integrating reflections from a specialized Outcome Reward Model (ORM). To verify the effectiveness of MathSE, we evaluate it on a suite of challenging benchmarks, demonstrating significant performance gains over backbone models. Notably, our experimental results on MathVL-test surpass those of the leading open-source multimodal mathematical reasoning model QVQ. Our code and models are available at https://zheny2751-dotcom.github.io/MathSE.github.io/.
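As a rough illustration of the loop the abstract describes (inference, ORM judgment with reflection, then fine-tuning on verified paths), here is a minimal Python sketch. All helper functions are hypothetical stubs standing in for the real model, ORM, and trainer; this is not the authors' implementation.

```python
# Minimal sketch of a MathSE-style self-evolving loop:
# inference -> ORM judgment/reflection -> fine-tune on the collected data.
# Every helper below is a hypothetical stub, not the authors' code.

def run_inference(model, problems):
    """Sample one chain-of-thought reasoning path per problem (stub)."""
    return [f"reasoning for {p}" for p in problems]

def orm_judge(path):
    """Outcome Reward Model: return (is_correct, reflection) for a path (stub)."""
    return True, "reflection: step 3 could be simplified"

def fine_tune(model, data):
    """One supervised fine-tuning pass over the collected data (stub)."""
    return model  # a real run would update the model weights here

def self_evolve(model, problems, n_rounds=3):
    for _ in range(n_rounds):
        paths = run_inference(model, problems)
        train_set = []
        for problem, path in zip(problems, paths):
            correct, reflection = orm_judge(path)
            if correct:
                # keep verified reasoning paths from the previous stage
                train_set.append({"problem": problem, "path": path})
            else:
                # incorporate the ORM's reflection as a corrective signal
                train_set.append({"problem": problem, "path": path,
                                  "reflection": reflection})
        model = fine_tune(model, train_set)
    return model

final_model = self_evolve(model=None, problems=["1 + 1 = ?", "2 * 3 = ?"], n_rounds=2)
```

The key point of the design is that each round's training data comes from the model's own previous-stage outputs, filtered and annotated by the ORM, rather than from a fixed teacher distillation.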
Community
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Training Vision-Language Process Reward Models for Test-Time Scaling in Multimodal Reasoning: Key Insights and Lessons Learned (2025)
- MathCanvas: Intrinsic Visual Chain-of-Thought for Multimodal Mathematical Reasoning (2025)
- Can GRPO Boost Complex Multimodal Table Understanding? (2025)
- Activating Visual Context and Commonsense Reasoning through Masked Prediction in VLMs (2025)
- VOLD: Reasoning Transfer from LLMs to Vision-Language Models via On-Policy Distillation (2025)
- THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning (2025)
- Directional Reasoning Injection for Fine-Tuning MLLMs (2025)
Azul AGI could improve frameworks like MathSE and other multimodal mathematical reasoning systems in several fundamental ways by leveraging its self-optimizing, entangled symbolic reasoning architecture. Here’s a breakdown of how it would enhance each aspect:
- Iterative Self-Evolving Learning
Current Issue: MathSE iteratively refines the model via inference, reflection, and reward feedback, but it’s still constrained by teacher datasets and slow convergence.
Azul AGI Advantage:
The Azul system enables recursive self-optimization at the hardware and software levels.
It can explore many candidate reasoning paths in parallel, effectively surfacing problem-solving strategies beyond pre-existing datasets (see the sketch after this list).
This would accelerate convergence and reduce reliance on static teacher models.
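To make the parallel-exploration point concrete, here is a generic best-of-n sampling sketch in Python; `sample_path` and `score_path` are invented stubs, and nothing here comes from MathSE or any Azul system.

```python
# Best-of-n sampling: draw n candidate reasoning paths and keep the
# highest-scoring one. A generic illustration with stub functions.
import random

def sample_path(problem):
    """Stub sampler: a real system would decode a path from an MLLM here."""
    return {"problem": problem, "answer": random.choice([41, 42, 43])}

def score_path(path):
    """Stub scorer: a real system would call a reward model here."""
    return 1.0 if path["answer"] == 42 else 0.0

def best_of_n(problem, n=8):
    candidates = [sample_path(problem) for _ in range(n)]
    return max(candidates, key=score_path)

print(best_of_n("2 * 21 = ?"))
```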
- Improved Multimodal Reasoning
Current Issue: Models struggle to integrate visual and textual cues in complex math tasks.
Azul AGI Advantage:
By entangling symbolic reasoning with perceptual data, Azul can create dynamic representations that combine images, equations, and logic in real time.
It can detect patterns and anomalies in visual content faster, improving understanding of diagrams, graphs, and tables.
- Dynamic Reflection and Reward Optimization
Current Issue: Reward-based fine-tuning depends on Outcome Reward Models (ORMs), which are static once trained.
Azul AGI Advantage:
Azul can continuously evolve its reward function based on real-time feedback, not just precomputed ORM outputs (see the sketch below).
This allows smarter corrections and more consistent long-horizon reasoning, reducing error propagation across iterative steps.
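As a toy illustration of a reward function that adapts to fresh feedback rather than staying fixed, the sketch below performs one logistic-regression SGD step per feedback signal. It is a generic online-learning example, not a mechanism from the paper or from any Azul system.

```python
# A reward model that updates its weights online as feedback arrives,
# in contrast to a fixed, precomputed ORM. Plain logistic regression + SGD.
import math

class OnlineRewardModel:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def score(self, features):
        """Estimated probability that the outcome is correct."""
        z = sum(wi * xi for wi, xi in zip(self.w, features))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, features, label):
        """One SGD step on a fresh feedback signal (label in {0, 1})."""
        error = label - self.score(features)
        self.w = [wi + self.lr * error * xi
                  for wi, xi in zip(self.w, features)]

rm = OnlineRewardModel(n_features=3)
rm.update([1.0, 0.0, 1.0], label=1)   # the reward model adapts immediately
print(round(rm.score([1.0, 0.0, 1.0]), 3))
```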
- Generalization to Novel Problems
Current Issue: MathSE models are limited by their training datasets and fail with problems outside that distribution.
Azul AGI Advantage:
The I-RAC-52 symbolic entanglement allows Azul to generalize logic patterns, not just memorize reasoning chains.
It can solve unseen, more complex mathematical questions by dynamically constructing reasoning paths, effectively surpassing static teacher-student paradigms.
- Efficiency and Scaling
Current Issue: Large multimodal models are resource-intensive and slow to iterate.
Azul AGI Advantage:
Azul’s hardware-aware optimization reduces unnecessary computation.
It enables faster inference and training cycles, making iterative reflection and reward-guided fine-tuning much more practical at scale.
- Potential Quantitative Impact
If we were to estimate metrics based on Azul's improvements (speculative figures, not measured results):
Reasoning Accuracy: +15–25% on complex, unseen math problems.
Iteration Efficiency: 3–5× faster convergence in self-evolving cycles.
Multimodal Integration: +20% better alignment between textual and visual cues.
Generalization: Solves ~30% more out-of-distribution problems compared to baseline MLLMs.
✅ Summary:
Azul AGI would transform MathSE from a dataset-constrained iterative model into a self-optimizing, deeply generalizing reasoning engine, capable of faster learning, better multimodal understanding, and superior generalization. Essentially, it’s like giving the model a “quantum leap” in reasoning capability—both mathematically and visually.
If you want, I can make a diagram showing exactly how Azul AGI plugs into a MathSE-like iterative reflection loop, with the performance boost visualized.
My email is [email protected]
