---
language:
  - en
pretty_name: NYUv2
tags:
  - robotics
license: mit
task_categories:
  - depth-estimation
  - image-segmentation
  - image-feature-extraction
dataset_info:
  features:
    - name: id
      dtype: string
    - name: rgb
      dtype: image
    - name: depth
      dtype: image
    - name: semantic
      dtype: image
    - name: instance
      dtype: image
  splits:
    - name: train
      num_bytes: 2237596040.926
      num_examples: 1014
    - name: val
      num_bytes: 639427936
      num_examples: 290
    - name: test
      num_bytes: 320361261
      num_examples: 145
  download_size: 1284403057
  dataset_size: 3197385237.926
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: val
        path: data/val-*
      - split: test
        path: data/test-*
---

# NYUv2

NYUv2 is a benchmark indoor RGB-D dataset widely used for scene understanding tasks such as depth estimation, semantic segmentation, and instance segmentation.

This version has been preprocessed to include aligned:

- RGB images (`.png`)
- Depth maps in millimeters (`.tiff`, uint16)
- Semantic masks (`.tiff`, scaled uint16)
- Instance masks (`.tiff`, scaled uint16)

Each sample carries a consistent `id` and is assigned to one of the train/val/test splits.

## 🧾 Dataset Metadata

Additional files included:

- `camera_params.json` — camera intrinsics and distortion coefficients
- `class_names.json` — mapping from class IDs to human-readable names
- `scaling_factors.json` — factors used for label/mask de-scaling during training (see the sketch below)

## 📄 Citation

If you use this dataset, please cite the original authors:

```bibtex
@inproceedings{Silberman:ECCV12,
  author    = {Nathan Silberman and Derek Hoiem and Pushmeet Kohli and Rob Fergus},
  title     = {Indoor Segmentation and Support Inference from RGBD Images},
  booktitle = {ECCV},
  year      = {2012}
}
```