Extract+Think Model Card for markendo/llava-extract-from-scratch-qwen3-1.7B
This repository hosts the Extract-1.7Bβ model, which serves as the perception module for the two-stage Extract+Thinkβ framework. This model was presented in the paper Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models.
Extract+Think is an approach designed to address perception and reasoning bottlenecks in small multimodal models. It centers on visual extraction tuning, which explicitly trains the model to consistently extract instruction-relevant visual details across tasks; these extracted details are then passed to a separate reasoning stage. In this variant, the model is trained from scratch under the visual extraction tuning paradigm, without prior visual instruction tuning or captioning.
- 📄 Paper: Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models
- 🌐 Project Page: https://web.stanford.edu/~markendo/projects/downscaling_intelligence
- 💻 Code: https://github.com/markendo/downscaling_intelligence
Model details
Extract-1.7Bβ serves as the perception (extraction) module of the two-stage Extract+Thinkβ framework. For the reasoning stage, the authors primarily use Qwen3 models (1.7B and 4B).
Usage
To use this model, particularly for evaluation, the authors rely on the lmms-eval framework. Setup and evaluation instructions are detailed in the GitHub repository; they involve cloning the repository, installing dependencies, and integrating the custom evaluation files with lmms-eval.
To generate the extracted visual information (the first, perception stage), the following command is provided:
```bash
cd lmms-eval

model_name=markendo/llava-extract-from-scratch-qwen3-1.7B

python -m lmms_eval \
    --model=llava_onevision \
    --model_args=pretrained=$model_name,conv_template=qwen_1_5,device_map=auto \
    --tasks=mmstar_prism_stage_1 \
    --batch_size=1 \
    --output_path results \
    --log_samples
```
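Once this finishes, the extracted visual information can be inspected from the per-sample logs that lmms-eval writes under the `--output_path` directory when `--log_samples` is set. The snippet below is only a rough sketch: the `results/` layout, file names, and record fields are assumptions that depend on the lmms-eval version, so adjust it to whatever the run actually produced.

```python
# Rough sketch: peek at the stage-1 sample logs written under results/.
# File layout and field names are assumptions; adapt to the actual lmms-eval output.
import glob
import json

for path in glob.glob("results/**/*samples*.json*", recursive=True):
    print(f"== {path} ==")
    with open(path) as f:
        if path.endswith(".jsonl"):
            records = [json.loads(line) for line in f if line.strip()]
        else:
            records = json.load(f)
    # Show the first record's keys so you can locate the extracted visual details.
    if records:
        first = records[0] if isinstance(records, list) else records
        print(list(first.keys()) if isinstance(first, dict) else first)
```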
Please refer to the GitHub repository for full setup instructions, including the second stage of reasoning.
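To illustrate how the second, reasoning stage might consume what the first stage extracts, here is a minimal sketch that feeds a placeholder extraction and a question to a Qwen3 reasoner through Hugging Face Transformers. The prompt format and the placeholder text are assumptions made for illustration, not the authors' exact stage-2 pipeline; in the actual framework the extracted details would come from the stage-1 logs produced by the command above, and the official implementation is in the GitHub repository.

```python
# Minimal sketch of the reasoning stage (prompt format and placeholder extraction
# text are assumptions; the authors' actual stage-2 pipeline is in the GitHub repo).
from transformers import AutoModelForCausalLM, AutoTokenizer

reasoner_id = "Qwen/Qwen3-1.7B"  # the paper also reports a 4B reasoner (Qwen/Qwen3-4B)
tokenizer = AutoTokenizer.from_pretrained(reasoner_id)
model = AutoModelForCausalLM.from_pretrained(reasoner_id, torch_dtype="auto", device_map="auto")

# Placeholder for the instruction-relevant visual details produced by Extract-1.7B (stage 1).
extracted_details = "A bar chart with four bars labeled A-D; bar C is clearly the tallest."
question = "Which category has the highest value? Answer with the letter."

messages = [{
    "role": "user",
    "content": f"Extracted visual details: {extracted_details}\n\nQuestion: {question}",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```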
Acknowledgments
This repository is built on top of LLaVA-OneVision and lmms-eval.
Citation
```bibtex
@article{endo2025downscalingintelligence,
  author  = {Endo, Mark and Yeung-Levy, Serena},
  title   = {Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models},
  journal = {arXiv preprint},
  year    = {2025},
}
```