SenseNova-SI: Scaling Spatial Intelligence with Multimodal Foundation Models
🔥Please check out our newly released SenseNova-SI-1.1-InternVL3-2B and SenseNova-SI-1.1-InternVL3-8B.
⏳The current model will be deprecated in due course.
Overview
Despite remarkable progress, leading multimodal models still exhibit notable deficiencies in spatial intelligence: the ability to make metric estimations, understand spatial relationships, handle viewpoint changes, and integrate information across complex scenes. We take a scaling perspective: we construct and curate a large-scale, comprehensive collection of spatial intelligence data and, through continued training on powerful multimodal foundations, cultivate multi-faceted spatial understanding in the SenseNova-SI family of models. In the future, SenseNova-SI will be integrated with larger-scale in-house models.
Release Information
Currently, we build SenseNova-SI upon popular open-source foundation models to maximize compatibility with existing research pipelines. In this release, we present SenseNova-SI-InternVL3-2B and SenseNova-SI-InternVL3-8B, which achieve state-of-the-art performance among open-source models of comparable size across five recent spatial intelligence benchmarks: VSI, MMSI, MindCube, ViewSpatial, and SITE.
| Model | VSI | MMSI | MindCube-Tiny | ViewSpatial | SITE |
|---|---|---|---|---|---|
| Open-source Models (~2B) | |||||
| InternVL3-2B | 32.9 | 26.5 | 37.5 | 32.5 | 30.0 |
| Qwen3-VL-2B-Instruct | 50.3 | 28.9 | 34.5 | 36.9 | 35.6 |
| MindCube-3B-RawQA-SFT | 17.2 | 1.7 | 51.7 | 24.1 | 6.3 |
| SpatialLadder-3B | 44.8 | 27.4 | 43.4 | 39.8 | 27.9 |
| SpatialMLLM-4B | 46.3 | 26.1 | 33.4 | 34.6 | 18.0 |
| VST-3B-SFT | 57.9 | 30.2 | 35.9 | 52.8 | 35.8 |
| Cambrian-S-3B | 57.3 | 25.2 | 32.5 | 39.0 | 28.3 |
| SenseNova-SI-1.1-InternVL3-2B | 63.7 | 34.2 | 41.8 | 52.6 | 36.7 |
| Open-source Models (~8B) | |||||
| InternVL3-8B | 42.1 | 28.0 | 41.5 | 38.6 | 41.1 |
| Qwen3-VL-8B-Instruct | 57.9 | 31.1 | 29.4 | 42.2 | 45.8 |
| BAGEL-7B-MoT | 31.4 | 31.0 | 34.7 | 41.3 | 37.0 |
| SpaceR-7B | 41.5 | 27.4 | 37.9 | 35.8 | 34.2 |
| ViLaSR-7B | 44.6 | 30.2 | 35.1 | 35.7 | 38.7 |
| VST-7B-SFT | 60.6 | 32.0 | 39.7 | 50.5 | 39.6 |
| Cambrian-S-7B | 67.5 | 25.8 | 39.6 | 40.9 | 33.0 |
| SenseNova-SI-1.1-InternVL3-8B | 68.7 | 43.3 | 85.6 | 54.6 | 47.7 |
| Proprietary Models | |||||
| Gemini-2.5-pro-2025-06 | 53.5 | 38.0 | 57.6 | 46.0 | 57.0 |
| Grok-4-2025-07-09 | 47.9 | 37.8 | 63.5 | 43.2 | 47.0 |
| GPT-5-2025-08-07 | 55.0 | 41.8 | 56.3 | 45.5 | 61.8 |
🛠️ QuickStart
Installation
We recommend using uv to manage the environment.
uv installation guide: https://docs.astral.sh/uv/getting-started/installation/#installing-uv
```bash
git clone [email protected]:OpenSenseNova/SenseNova-SI.git
cd SenseNova-SI/
uv sync --extra cu124  # or one of [cu118|cu121|cu124|cu126|cu128|cu129], depending on your CUDA version
uv sync
source .venv/bin/activate
```
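As an optional sanity check (a minimal sketch, not part of the official setup), you can confirm that PyTorch is installed in the synced environment and can see a CUDA device:

```python
# Optional sanity check: run inside the activated .venv.
# Prints the PyTorch version and whether a CUDA device is visible.
import torch

print(torch.__version__, torch.cuda.is_available())
```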
Sample Usage
This example demonstrates how to use SenseNova-SI-InternVL3-2B with the Hugging Face Transformers library.
```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

# Note: clone the SenseNova-SI repository and set up the environment as described
# in the "QuickStart" -> "Installation" section above. This snippet assumes you are
# running from the cloned SenseNova-SI/ directory, so that the example images are
# available locally (e.g., examples/Q1_1.png and examples/Q1_2.png).

# Load the model and processor
model_id = "sensenova/SenseNova-SI-InternVL3-2B"  # this model card is for the 2B model
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).eval().cuda()

# Prepare inputs: two images and a multi-image question
image_paths = ["./examples/Q1_1.png", "./examples/Q1_2.png"]
images = [Image.open(img_path).convert("RGB") for img_path in image_paths]
question = (
    "<image><image>\n"
    "You are standing in front of the dice pattern and observing it. "
    "Where is the desk lamp approximately located relative to you?\n"
    "Options: A: 90 degrees counterclockwise, B: 90 degrees clockwise, "
    "C: 135 degrees counterclockwise, D: 135 degrees clockwise"
)

# Process inputs
inputs = processor(
    text=question,
    images=images,
    return_tensors="pt",
).to("cuda", torch.bfloat16)

# Generate response
with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Decode and print
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```
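Depending on the tokenizer, `batch_decode` on the raw `generate` output may echo the prompt as well as the answer. A minimal sketch for printing only the newly generated tokens, assuming the processor returned an `input_ids` tensor:

```python
# A sketch: slice off the prompt tokens before decoding.
# Assumes `inputs` contains "input_ids" and that `outputs` begins with the prompt.
prompt_len = inputs["input_ids"].shape[1]
answer = processor.batch_decode(outputs[:, prompt_len:], skip_special_tokens=True)[0]
print(answer)
```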
Examples
Hello World
A simple image-free test to verify environment setup and download the model.
```bash
python example.py \
  --question "Hello" \
  --model_path sensenova/SenseNova-SI-1.1-InternVL3-8B
```
Example 1
This example is from the Pos-Obj-Obj subset of MMSI-Bench:
```bash
python example.py \
  --image_paths examples/Q1_1.png examples/Q1_2.png \
  --question "<image><image>
You are standing in front of the dice pattern and observing it. Where is the desk lamp approximately located relative to you?
Options: A: 90 degrees counterclockwise, B: 90 degrees clockwise, C: 135 degrees counterclockwise, D: 135 degrees clockwise" \
  --model_path sensenova/SenseNova-SI-1.1-InternVL3-8B
  # --model_path OpenGVLab/InternVL3-8B
```
Details of Example 1
Q:
You are standing in front of the dice pattern and observing it. Where is the desk lamp approximately located relative to you?
Options: A: 90 degrees counterclockwise, B: 90 degrees clockwise, C: 135 degrees counterclockwise, D: 135 degrees clockwise
(Images: examples/Q1_1.png and examples/Q1_2.png)
GT: C
Example 2
This example is from the Rotation subset of MindCube:
```bash
python example.py \
  --image_paths examples/Q2_1.png examples/Q2_2.png \
  --question "<image><image>
Based on these two views showing the same scene: in which direction did I move from the first view to the second view?
A. Directly left B. Directly right C. Diagonally forward and right D. Diagonally forward and left" \
  --model_path sensenova/SenseNova-SI-1.1-InternVL3-8B
  # --model_path OpenGVLab/InternVL3-8B
```
Details of Example 2
Q: Based on these two views showing the same scene: in which direction did I move from the first view to the second view?
A. Directly left B. Directly right C. Diagonally forward and right D. Diagonally forward and left
(Images: examples/Q2_1.png and examples/Q2_2.png)
GT: D
Test Multiple Questions in a Single Run
Prepare a file similar to examples/examples.jsonl, where each line represents a single question.
The model is loaded once and processes questions sequentially. The questions remain independent of each other.
For more details on the jsonl format, refer to the documentation for Single-Image Data and Multi-Image Data.
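For illustration only, here is a minimal sketch of preparing such a file in Python; the field names ("image_paths", "question") mirror the example.py CLI flags above and are an assumption, so check examples/examples.jsonl and the linked documentation for the authoritative schema:

```python
# Illustrative sketch: write a two-question jsonl file, one JSON object per line.
# Field names ("image_paths", "question") mirror the example.py CLI flags and are
# an assumption; consult examples/examples.jsonl for the actual schema.
import json

rows = [
    {
        "image_paths": ["examples/Q1_1.png", "examples/Q1_2.png"],
        "question": "<image><image>\nWhere is the desk lamp approximately located relative to you?",
    },
    {
        "image_paths": ["examples/Q2_1.png", "examples/Q2_2.png"],
        "question": "<image><image>\nIn which direction did I move from the first view to the second view?",
    },
]

with open("my_questions.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```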
```bash
python example.py \
  --jsonl_path examples/examples.jsonl \
  --model_path sensenova/SenseNova-SI-1.1-InternVL3-8B
  # --model_path OpenGVLab/InternVL3-8B
```
Evaluation
To reproduce the benchmark results above, please refer to EASI to evaluate SenseNova-SI on mainstream spatial intelligence benchmarks.
🖊️ Citation
```bibtex
@article{sensenova-si,
  title   = {Scaling Spatial Intelligence with Multimodal Foundation Models},
  author  = {Cai, Zhongang and Wang, Ruisi and Gu, Chenyang and Pu, Fanyi and Xu, Junxiang and Wang, Yubo and Yin, Wanqi and Yang, Zhitao and Wei, Chen and Sun, Qingping and Zhou, Tongxi and Li, Jiaqi and Pang, Hui En and Qian, Oscar and Wei, Yukun and Lin, Zhiqian and Shi, Xuanke and Deng, Kewang and Han, Xiaoyang and Chen, Zukai and Fan, Xiangyu and Deng, Hanming and Lu, Lewei and Pan, Liang and Li, Bo and Liu, Ziwei and Wang, Quan and Lin, Dahua and Yang, Lei},
  journal = {arXiv preprint arXiv:2511.13719},
  year    = {2025}
}
```
Base model: OpenGVLab/InternVL3-2B-Pretrained