---
license: apache-2.0
task_categories:
- visual-question-answering
tags:
- video
- spatial-intelligence
- recall
- benchmark
language:
- en
---
# VSI-SUPER-Recall
**[Website](https://vision-x-nyu.github.io/cambrian-s.github.io/)** | **[Paper](https://arxiv.org/abs/2511.04670)** | **[GitHub](https://github.com/cambrian-mllm/cambrian-s)** | **[Models](https://huggingface.co/collections/nyu-visionx/cambrian-s-models)**
**Authors**: [Shusheng Yang*](https://github.com/vealocia), [Jihan Yang*](https://jihanyang.github.io/), [Pinzhi Huang†](https://pinzhihuang.github.io/), [Ellis Brown†](https://ellisbrown.github.io/), et al.
VSI-SUPER-Recall is a benchmark for testing long-horizon spatial observation and recall in arbitrarily long videos. It evaluates whether models can remember and recall the order in which unusual objects appeared across extended video sequences.
## Overview
VSI-SUPER-Recall challenges models to:
- Track object appearances across long videos (10-240 minutes)
- Recall the temporal order of inserted objects
- Maintain spatial memory over extended periods
This benchmark is part of [VSI-SUPER](https://huggingface.co/collections/nyu-visionx/vsi-super), which also includes [VSI-SUPER-Count](https://huggingface.co/datasets/nyu-visionx/VSI-SUPER-Count).
## Quick Start
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("nyu-visionx/VSI-SUPER-Recall", split="test")

# Access a sample
sample = dataset[0]
print(sample)
```
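Once loaded, each sample can be turned into a multiple-choice prompt and scored by exact letter match. The sketch below is not an official evaluation script, just a minimal illustration assuming the field names shown under "Dataset Structure"; `format_prompt` and `accuracy` are hypothetical helper names.

```python
def format_prompt(sample):
    """Join the question text with its multiple-choice options."""
    return sample["question"] + "\n" + "\n".join(sample["options"])

def accuracy(samples, predicted_letters):
    """Fraction of samples whose predicted letter ("A"-"D") matches `answer`."""
    correct = sum(p == s["answer"] for p, s in zip(predicted_letters, samples))
    return correct / len(samples)

# Toy sample using the field names from "Dataset Structure"
toy = [
    {
        "question": "Which order is correct?",
        "options": ["A. Bed, Chair", "B. Chair, Bed"],
        "answer": "A",
    },
]
print(format_prompt(toy[0]))
print(accuracy(toy, ["A"]))  # → 1.0
```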
## Dataset Structure
Each sample contains:
```python
{
    "video_path": "10mins/00000000.mp4",
    "question": "These are frames of a video.\nWhich of the following correctly represents the order in which the Pikachu appeared in the video?",
    "options": [
        "A. Trash can, Bed, Chair, Basket",
        "B. Trash can, Bed, Basket, Chair",
        "C. Bed, Chair, Basket, Trash can",
        "D. Bed, Chair, Trash can, Basket"
    ],
    "answer": "A",   # correct option letter
    "type": "10mins" # video duration bucket
}
```
**Key points:**
- 300 samples total (60 per video duration)
- Video durations: 10, 30, 60, 120, 240 minutes
- Videos downsampled to 1 frame per second
- Multiple choice format with 4 options
- Questions ask about the order of appearance of inserted objects
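Because each duration bucket has the same number of samples, it is natural to report accuracy per `type` as well as overall, since recall difficulty grows with video length. A hedged sketch (the helper name `accuracy_by_duration` is an assumption, not part of the dataset):

```python
from collections import defaultdict

def accuracy_by_duration(samples, predicted_letters):
    """Per-duration accuracy, keyed by the `type` field (e.g. "10mins")."""
    buckets = defaultdict(list)
    for s, p in zip(samples, predicted_letters):
        buckets[s["type"]].append(p == s["answer"])
    return {t: sum(hits) / len(hits) for t, hits in buckets.items()}

# Toy data: two 10-minute samples answered correctly, one 240-minute missed
toy = [
    {"type": "10mins", "answer": "A"},
    {"type": "10mins", "answer": "B"},
    {"type": "240mins", "answer": "C"},
]
print(accuracy_by_duration(toy, ["A", "B", "D"]))
# → {'10mins': 1.0, '240mins': 0.0}
```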
## Dataset Details
- **Total samples**: 300
- **Video durations**: 10mins (60), 30mins (60), 60mins (60), 120mins (60), 240mins (60)
- **Question format**: Multiple choice about object appearance order
- **Frame rate**: 1 FPS (downsampled)
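At 1 FPS the number of frames a model must process roughly equals the video duration in seconds, which is useful for budgeting context length. A quick back-of-the-envelope check (the helper is hypothetical):

```python
def expected_frame_count(minutes):
    """Approximate frame count for a video downsampled to 1 frame per second."""
    return minutes * 60

for m in (10, 30, 60, 120, 240):
    print(f"{m}mins → ~{expected_frame_count(m)} frames")
# The 240-minute bucket yields roughly 14,400 frames per video.
```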
## Citation
```bibtex
@article{yang2025cambrian,
title={Cambrian-S: Towards Spatial Supersensing in Video},
author={Yang, Shusheng and Yang, Jihan and Huang, Pinzhi and Brown, Ellis and Yang, Zihao and Yu, Yue and Tong, Shengbang and Zheng, Zihan and Xu, Yifan and Wang, Muhan and Lu, Danhao and Fergus, Rob and LeCun, Yann and Fei-Fei, Li and Xie, Saining},
journal={arXiv preprint arXiv:2511.04670},
year={2025}
}
```