---
language:
- en
pretty_name: NYUv2
tags:
- robotics
license: mit
task_categories:
- depth-estimation
- image-segmentation
- image-feature-extraction
dataset_info:
  features:
  - name: id
    dtype: string
  - name: rgb
    dtype: image
  - name: depth
    dtype: image
  - name: semantic
    dtype: image
  - name: instance
    dtype: image
  splits:
  - name: train
    num_bytes: 2237596040.926
    num_examples: 1014
  - name: val
    num_bytes: 639427936
    num_examples: 290
  - name: test
    num_bytes: 320361261
    num_examples: 145
  download_size: 1284403057
  dataset_size: 3197385237.926
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
---
# NYUv2
NYUv2 is a benchmark RGB-D dataset widely used for scene understanding tasks such as:

- Indoor semantic segmentation
- Depth estimation
- Instance segmentation
**Homepage:** [NYU Depth Dataset V2](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)

**Paper:** Indoor Segmentation and Support Inference from RGBD Images (Silberman et al., ECCV 2012)
This version has been preprocessed to include aligned:

- RGB images (`.png`)
- Depth maps in millimeters (`.tiff`, `uint16`)
- Semantic masks (`.tiff`, scaled `uint16`)
- Instance masks (`.tiff`, scaled `uint16`)
Each sample carries a consistent `id` and is assigned to one of the train/val/test splits.
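
For reference, a minimal loading sketch with the 🤗 `datasets` library is shown below. The repository id is a placeholder, and the depth conversion simply assumes the millimeter `uint16` encoding described above.

```python
import numpy as np
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub path of this dataset.
ds = load_dataset("username/nyuv2", split="train")

sample = ds[0]
rgb = np.asarray(sample["rgb"])              # (H, W, 3) uint8
depth_mm = np.asarray(sample["depth"])       # (H, W) uint16, millimeters
depth_m = depth_mm.astype(np.float32) / 1e3  # convert to meters

semantic = np.asarray(sample["semantic"])    # (H, W) uint16, scaled class IDs
instance = np.asarray(sample["instance"])    # (H, W) uint16, scaled instance IDs
print(sample["id"], rgb.shape, float(depth_m.max()))
```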
## 🧾 Dataset Metadata
Additional files included:

- `camera_params.json`: camera intrinsics and distortion
- `class_names.json`: mapping from class IDs to human-readable names
- `scaling_factors.json`: used for label/mask de-scaling during training
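
A sketch of how the sidecar files might be consumed follows. The JSON layouts and the divide-to-de-scale step are assumptions (this card does not document the exact schema), so verify against the shipped files before relying on these keys.

```python
import json

import numpy as np
from PIL import Image

# Assumed layouts (hypothetical -- verify against the shipped files):
#   scaling_factors.json : {"semantic": <int>, "instance": <int>}
#   class_names.json     : {"0": "wall", "1": "floor", ...}
with open("scaling_factors.json") as f:
    scales = json.load(f)
with open("class_names.json") as f:
    class_names = json.load(f)

# Hypothetical mask path; assumes the mask was multiplied by the factor above.
mask = np.asarray(Image.open("semantic_0001.tiff"))
class_ids = mask // scales["semantic"]
present = {int(i): class_names.get(str(i), "?") for i in np.unique(class_ids)}
print(present)
```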
## 📄 Citation
If you use this dataset, please cite the original authors:
```bibtex
@inproceedings{Silberman:ECCV12,
  author    = {Nathan Silberman and Derek Hoiem and Pushmeet Kohli and Rob Fergus},
  title     = {Indoor Segmentation and Support Inference from RGBD Images},
  booktitle = {ECCV},
  year      = {2012}
}
```