Extract+Think Model Card for markendo/llava-extract-from-scratch-qwen3-1.7B

This repository hosts the Extract-1.7B† model, which serves as the perception module for the two-stage Extract+Think† framework. This model was presented in the paper Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models.

Extract+Think is an approach designed to address perception and reasoning bottlenecks in small multimodal models. It centers on visual extraction tuning, which explicitly trains the model to consistently extract instruction-relevant visual details across tasks; these details then feed into a separate reasoning stage. In this variant, we train from scratch under the visual extraction tuning paradigm, without prior visual instruction tuning or captioning.

Model details

Extract-1.7B† is used as the perception module for the two-stage Extract+Think† framework. For the reasoning stage, the authors primarily use Qwen3 models (1.7B and 4B). The perception model is built on the Qwen/Qwen3-1.7B language backbone; the full checkpoint totals about 2B parameters, stored as BF16 safetensors.
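To make the two-stage split concrete, here is a minimal illustrative sketch of the data flow. All helper names and return values below are hypothetical placeholders, not the repository's API; the real entry points live in the authors' GitHub repository and lmms-eval.

# Illustrative sketch of the Extract+Think flow; names and canned
# strings are hypothetical placeholders, not the actual implementation.

def extract_visual_details(image_path: str, question: str) -> str:
    # Stage 1 (perception): Extract-1.7B reads the image and emits
    # instruction-relevant visual details as text. Stubbed with a canned
    # string here; in practice this is the lmms-eval command under Usage.
    return "A bar chart with bars A, B, and C; bar B is clearly the tallest."

def reason_over_details(details: str, question: str) -> str:
    # Stage 2 (reasoning): a text-only Qwen3 model (1.7B or 4B) answers
    # from the extracted details alone, never seeing the image. Stubbed;
    # a concrete transformers example appears later in this card.
    return "B"

question = "Which bar is the tallest?"
details = extract_visual_details("chart.png", question)
print(reason_over_details(details, question))  # -> B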

Usage

Evaluation is run through the lmms-eval framework. Setup and evaluation instructions are detailed in the GitHub repository; in short, you clone the repository, install its dependencies, and integrate the authors' custom evaluation files with lmms-eval.

To generate the extracted visual information (stage 1), run:

cd lmms-eval
model_name=markendo/llava-extract-from-scratch-qwen3-1.7B
# Stage 1 (perception): run Extract-1.7B on the MMStar stage-1 extraction task.
# --log_samples saves per-sample outputs (the extracted visual details)
# under results/ so they can be passed to the reasoning stage.
python -m lmms_eval \
    --model=llava_onevision \
    --model_args=pretrained=$model_name,conv_template=qwen_1_5,device_map=auto \
    --tasks=mmstar_prism_stage_1 \
    --batch_size=1 \
    --output_path=results \
    --log_samples

Please refer to the GitHub repository for full setup instructions, including how to run the second (reasoning) stage.
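As a rough illustration of what the reasoning stage can look like outside lmms-eval, the sketch below feeds stage-1 output to a stock Qwen3 checkpoint through Hugging Face transformers. The prompt format is an assumption for illustration only; the authors' actual stage-2 prompting and wiring are defined in their GitHub repository.

# Minimal stage-2 sketch with a stock Qwen3 reasoner (assumed prompt format).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-1.7B"  # the authors also use Qwen3-4B
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Stage-1 output would normally come from the lmms-eval run above;
# a canned string stands in for it here.
details = "A bar chart with bars A, B, and C; bar B is clearly the tallest."
question = "Which bar is the tallest?"

messages = [{
    "role": "user",
    "content": f"Visual details: {details}\n\nQuestion: {question}",
}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)

# Qwen3 may emit its chain of thought before the final answer.
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:],
                       skip_special_tokens=True))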

Acknowledgments

This repository is built on top of LLaVA-OneVision and lmms-eval.

Citation

@article{endo2025downscalingintelligence,
  author    = {Endo, Mark and Yeung-Levy, Serena},
  title     = {Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models},
  journal   = {arXiv preprint},
  year      = {2025},
}