Grounding YouTube Dataset

What, when, and where? -- Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions (arXiv)

This dataset is packed in WebDataset format.

The dataset is provided in three styles:

  • Untrimmed videos + annotations spanning the entire video
  • Action clips extracted from the videos + annotations for each clip
  • Action frames extracted from the videos + an annotation for each frame

Example usage for clips:

The example below also decodes the raw binary video data and the JSON annotations.

import webdataset as wds
from huggingface_hub import HfFileSystem, get_token, hf_hub_url
import json
import io
import torch
import av
import numpy as np
from torch.utils.data import DataLoader

# Resolve every .tar shard of the clips split on the Hub and build authenticated streaming URLs.
fs = HfFileSystem()
files = [fs.resolve_path(path) for path in fs.glob("hf://datasets/CVML-TueAI/grounding-YT-dataset/clips/*.tar")]
urls = [hf_hub_url(file.repo_id, file.path_in_repo, repo_type="dataset") for file in files]
urls = f"pipe: curl -s -L -H 'Authorization:Bearer {get_token()}' {'::'.join(urls)}"

def load_video(video_bytes):
    """Decode raw MP4 bytes into a uint8 tensor of shape [T, H, W, C]."""
    container = av.open(io.BytesIO(video_bytes))
    frames = []
    for frame in container.decode(video=0):
        img = frame.to_ndarray(format="rgb24")
        frames.append(img)
    video_tensor = torch.from_numpy(np.stack(frames))
    return video_tensor  # [T, H, W, C]

def load_json(json_bytes):
    """Decode the JSON annotations of a clip."""
    return json.loads(json_bytes.decode("utf-8"))

# Stream the shards, shuffle within a small buffer, and decode each (video, annotations) pair.
dataset = (
    wds.WebDataset(urls)
    .shuffle(100)
    .to_tuple("mp4", "json")
    .map_tuple(load_video, load_json)
)
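
Since the resulting pipeline is a standard PyTorch IterableDataset, it can be wrapped in a DataLoader. Below is a minimal sketch: batch_size=None yields one clip at a time (clips vary in length, so no collation is attempted), and the annotation fields follow the clip metadata ("box", "frame", "step_name", "video").

loader = DataLoader(dataset, batch_size=None, num_workers=2)

for video, annotations in loader:
    # video: uint8 tensor of shape [T, H, W, C]
    # annotations: list of dicts with "box", "frame", "step_name", and "video" fields
    print(video.shape, annotations[0]["step_name"])
    break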

Evaluation - Pointwise accuracy:

For pointwise accuracy, a prediction is considered correct if the predicted point lies inside the annotated ground-truth bounding box. To evaluate your predictions, see the evaluation scripts provided with the dataset.
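
As an illustration of the metric, here is a minimal sketch (not the provided evaluation code; it assumes boxes are given as [x1, y1, x2, y2], predictions are (x, y) points or None, and a None prediction counts as incorrect):

import numpy as np

def point_in_box(point, box):
    """Return True if the (x, y) point lies inside the [x1, y1, x2, y2] box."""
    x, y = point
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def pointwise_accuracy(predictions, gt_boxes):
    """Fraction of predicted points that fall inside their ground-truth box."""
    hits = [
        pred is not None and point_in_box(pred, box)  # None (no action predicted) counts as a miss here
        for pred, box in zip(predictions, gt_boxes)
    ]
    return float(np.mean(hits)) if hits else 0.0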

Visualization:

Visualization contains scripts to generate frames overlaid with the ground-truth box and the predicted point. Predictions should follow the JSON format used in the random_preds.json files.

In the generated visualizations, the red dot marks the predicted point; the prediction is None when no action is predicted.
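
The bundled scripts are the reference; as a rough sketch of the overlay they produce (the function name and the box/point conventions here are assumptions), a single frame could be rendered with Pillow like this:

from PIL import Image, ImageDraw

def draw_prediction(frame_path, box, point, out_path):
    """Overlay the ground-truth box and the predicted point (red dot) on one frame."""
    img = Image.open(frame_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.rectangle(box, outline=(0, 255, 0), width=3)  # box assumed to be [x1, y1, x2, y2]
    if point is not None:                              # prediction is None when no action is predicted
        x, y = point
        r = 5
        draw.ellipse([x - r, y - r, x + r, y + r], fill=(255, 0, 0))
    img.save(out_path)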

Citation Information

If you're using GroundingYouTube in your research or applications, please cite using this BibTeX:

@InProceedings{Chen_2024_CVPR,
    author    = {Chen, Brian and Shvetsova, Nina and Rouditchenko, Andrew and Kondermann, Daniel and Thomas, Samuel and Chang, Shih-Fu and Feris, Rogerio and Glass, James and Kuehne, Hilde},
    title     = {What When and Where? Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {18419-18429}
}