---
language:
- en
pretty_name: NYUv2
tags:
- robotics
license: mit
task_categories:
- depth-estimation
- image-segmentation
- image-feature-extraction
dataset_info:
  features:
  - name: id
    dtype: string
  - name: rgb
    dtype: image
  - name: depth
    dtype: image
  - name: semantic
    dtype: image
  - name: instance
    dtype: image
  splits:
  - name: train
    num_bytes: 2237596040.926
    num_examples: 1014
  - name: val
    num_bytes: 639427936.0
    num_examples: 290
  - name: test
    num_bytes: 320361261.0
    num_examples: 145
  download_size: 1284403057
  dataset_size: 3197385237.926
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
---

# NYUv2

NYUv2 is a benchmark RGB-D dataset widely used for indoor scene understanding tasks such as:

- Indoor semantic segmentation
- Depth estimation
- Instance segmentation

- **Homepage:** [NYU Depth Dataset V2](https://cs.nyu.edu/~fergus/datasets/nyu_depth_v2.html)
- **Paper:** [Indoor Segmentation and Support Inference from RGBD Images](https://cs.nyu.edu/~fergus/datasets/indoor_seg_support.pdf)

This version has been preprocessed so that each sample contains aligned:

- RGB images (`.png`)
- Depth maps in millimeters (`.tiff`, `uint16`)
- Semantic masks (`.tiff`, scaled `uint16`)
- Instance masks (`.tiff`, scaled `uint16`)

Each sample carries a consistent `id` and is assigned to one of the train/val/test splits. A minimal loading sketch is shown in the usage section below.

## 🧾 Dataset Metadata

Additional files included:

- `camera_params.json`: camera intrinsics and distortion coefficients
- `class_names.json`: mapping from class IDs to human-readable names
- `scaling_factors.json`: factors for de-scaling the semantic/instance masks during training (see the usage section below)

## 📄 Citation

If you use this dataset, please cite the original authors:

```bibtex
@inproceedings{Silberman:ECCV12,
  author    = {Nathan Silberman and Derek Hoiem and Pushmeet Kohli and Rob Fergus},
  title     = {Indoor Segmentation and Support Inference from RGBD Images},
  booktitle = {ECCV},
  year      = {2012}
}
```
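
## 🛠️ Example Usage

A minimal loading sketch, assuming the Hugging Face `datasets` library; `<repo-id>` is a placeholder for this dataset's Hub identifier:

```python
# Load one split and decode the aligned modalities.
import numpy as np
from datasets import load_dataset

ds = load_dataset("<repo-id>", split="train")  # <repo-id>: placeholder
sample = ds[0]

rgb = np.asarray(sample["rgb"])                 # H x W x 3, uint8
depth_mm = np.asarray(sample["depth"])          # H x W, uint16, millimeters
depth_m = depth_mm.astype(np.float32) / 1000.0  # convert to meters
```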
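
The semantic and instance masks are stored as scaled `uint16`; the sketch below, continuing the snippet above, de-scales them using `scaling_factors.json`. It assumes that file maps feature names to integer divisors; the keys `"semantic"` and `"instance"` are illustrative assumptions, not confirmed by this card.

```python
# Recover raw label/instance IDs from the scaled masks.
import json

from huggingface_hub import hf_hub_download

# Fetch the metadata file shipped alongside the parquet shards.
path = hf_hub_download("<repo-id>", "scaling_factors.json", repo_type="dataset")
with open(path) as f:
    factors = json.load(f)  # assumed: {"semantic": <int>, "instance": <int>}

semantic_ids = np.asarray(sample["semantic"]) // factors["semantic"]
instance_ids = np.asarray(sample["instance"]) // factors["instance"]
```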