
PhysicalAI-Robotics-GR00T-Teleop-G1-hand1

Unitree G1 Fruits Pick and Place Dataset - 1D Hand Control Adaptation

This dataset is an adapted version of the original NVIDIA PhysicalAI-Robotics-GR00T-Teleop-G1 dataset, modified to use simplified 1-dimensional hand control in place of the original 7-dimensional per-hand finger joint control.


🔄 Key Modifications

Original Dataset (43 DOF)

  • Hand Control: 7 joints per hand (14 total)
    • Left hand: left_hand_index_0_joint, left_hand_index_1_joint, left_hand_middle_0_joint, left_hand_middle_1_joint, left_hand_thumb_0_joint, left_hand_thumb_1_joint, left_hand_thumb_2_joint
    • Right hand: right_hand_index_0_joint, right_hand_index_1_joint, right_hand_middle_0_joint, right_hand_middle_1_joint, right_hand_thumb_0_joint, right_hand_thumb_1_joint, right_hand_thumb_2_joint
  • Total Action Dimensions: 43

Adapted Dataset (31 DOF)

  • Hand Control: 1 gripper state per hand (2 total)
    • Left hand: left_hand_state (0.0 = open, 1.0 = closed)
    • Right hand: right_hand_state (0.0 = open, 1.0 = closed)
  • Total Action Dimensions: 31 (43 - 14 + 2)

🧠 Methodology

The conversion from 7D to 1D hand control was accomplished through machine learning regression:

Data Annotation Process

  1. Interactive Video Annotation Tool: A custom tool was developed for manual annotation of hand states
  2. Human Expert Labeling: 50-100 hand states per dataset were labeled as open (0) or closed (1)
  3. Statistical Validation: Annotations were validated against joint value distributions

Regression Model Training

  • Input Features: 7-dimensional joint values for each hand
  • Output Target: Single continuous value [0.0, 1.0] representing gripper state
  • Models Tested: Linear Regression, Ridge Regression, Random Forest
  • Best Model Selection: Automatic selection based on R² score and MSE (see the sketch below)
  • Training Data: Real annotated samples from original teleoperation trajectories
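
The trained regressor itself is not distributed with this card, but a minimal sketch of the training and model-selection step with scikit-learn is shown below. The annotation file name (annotations.csv) and its column names are hypothetical, introduced here only for illustration:

# Minimal sketch of regression training with automatic model selection.
# "annotations.csv" and its column names are assumptions, not shipped files.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("annotations.csv")
X = df[[f"joint_{i}" for i in range(7)]].values  # 7D joint values for one hand
y = df["label"].values                           # 0.0 = open, 1.0 = closed

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

candidates = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
}

best_model, best_r2 = None, float("-inf")
for name, model in candidates.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    r2, mse = r2_score(y_test, pred), mean_squared_error(y_test, pred)
    print(f"{name}: R2={r2:.3f}, MSE={mse:.4f}")
    if r2 > best_r2:  # automatic selection on R2 score
        best_model, best_r2 = model, r2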

Dataset Conversion Pipeline

  1. Episode-by-Episode Processing: Each trajectory converted individually
  2. Regression Prediction: Trained model predicts 1D hand state from 7D joint values
  3. Fallback Algorithm: Simple mean-based conversion when no trained model is available (see the sketch below)
  4. Data Validation: Extensive testing to ensure conversion quality
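
A simplified per-frame sketch of this pipeline, including the fallback path, follows. The index layout matches the original 43D format (left-hand joints at indices 22-28, right-hand joints at 36-42); the clipping to [0.0, 1.0] and the exact form of the mean-based fallback are assumptions:

import numpy as np

LEFT_HAND = slice(22, 29)    # left-hand joints in the original 43D layout
RIGHT_HAND = slice(36, 43)   # right-hand joints in the original 43D layout

def convert_action(action_43d, model=None):
    """Sketch: map one 43D action vector to the adapted 31D format."""
    a = np.asarray(action_43d, dtype=np.float64)
    body = np.concatenate([a[:22], a[29:36]])  # legs, waist, arms (29 values)
    if model is not None:
        # Regression prediction of the 1D state from 7D joint values.
        left = float(model.predict(a[LEFT_HAND].reshape(1, -1))[0])
        right = float(model.predict(a[RIGHT_HAND].reshape(1, -1))[0])
    else:
        # Fallback: simple mean-based conversion (scaling assumed).
        left, right = a[LEFT_HAND].mean(), a[RIGHT_HAND].mean()
    return np.concatenate([body, np.clip([left, right], 0.0, 1.0)])  # 31D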

📊 Dataset Format

Modality Structure

{
  "action": {
    "left_leg": {"start": 0, "end": 6},      // 6 DOF
    "right_leg": {"start": 6, "end": 12},    // 6 DOF  
    "waist": {"start": 12, "end": 15},       // 3 DOF
    "left_arm": {"start": 15, "end": 22},    // 7 DOF
    "right_arm": {"start": 22, "end": 29},   // 7 DOF
    "left_hand": {"start": 29, "end": 30},   // 1 DOF (gripper state)
    "right_hand": {"start": 30, "end": 31}   // 1 DOF (gripper state)
  }
}

Action Vector Composition

  • Indices 0-28: Same as original (legs, waist, arms)
  • Index 29: left_hand_state - Continuous value [0.0, 1.0]
  • Index 30: right_hand_state - Continuous value [0.0, 1.0]
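
Given this layout, named segments can be sliced directly out of an action vector using the modality map; the file path below follows the directory structure shown later in this card:

import json
import numpy as np

with open("g1-pick-apple/meta/modality.json") as f:
    modality = json.load(f)

action = np.zeros(31)  # placeholder for one 31D action vector
segments = {name: action[spec["start"]:spec["end"]]
            for name, spec in modality["action"].items()}
print(segments["left_hand"], segments["right_hand"])  # each has length 1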

Hand State Interpretation

  • 0.0: Fully open hand/gripper
  • 1.0: Fully closed hand/gripper
  • 0.0-1.0: Intermediate gripper positions
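
For hardware that only accepts binary open/close commands, the continuous state can simply be thresholded. The 0.5 cutoff below is an assumption, not part of the dataset specification:

def to_binary_command(state: float, threshold: float = 0.5) -> bool:
    """Map a continuous hand state in [0.0, 1.0] to a binary close command.
    The 0.5 threshold is an assumed default, not a dataset convention."""
    return state >= threshold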

🎯 Advantages of 1D Hand Control

Simplified Control

  • Easier Learning: Reduced action-space complexity for RL algorithms
  • Faster Training: Lower-dimensional action space
  • Better Generalization: A single gripper parameter transfers more easily

Practical Benefits

  • Hardware Compatibility: Works with simple gripper mechanisms
  • Real-time Control: Lower computational requirements
  • Intuitive Interface: Direct mapping to gripper open/close commands

πŸ“ Dataset Structure

PhysicalAI-Robotics-GR00T-Teleop-G1-hand1/
├── g1-pick-apple/
│   ├── data/chunk-000/          # Converted action data (31D)
│   ├── meta/                    # Updated metadata files
│   └── videos/                  # Original videos (symlinked)
├── g1-pick-grapes/
├── g1-pick-pear/
├── g1-pick-starfruit/
└── README.md                    # This file

Files Modified

  • info.json: Updated action dimensions and joint names
  • modality.json: Updated hand indices (22-28, 36-42 → 29, 30)
  • stats.json: Recomputed statistics for 31D action space
  • episode_*.parquet: All trajectory data converted to 31D format

Files Preserved

  • tasks.jsonl: Task descriptions unchanged
  • episodes.jsonl: Episode metadata unchanged
  • Videos: Original MP4 files preserved via symbolic links

🛠 Technical Implementation

Conversion Tools

A complete toolkit was developed for the conversion process:

  1. hand_annotator.py: Interactive video annotation tool
  2. hand_regressor.py: ML model training and evaluation
  3. dataset_converter.py: Main conversion pipeline
  4. meta_adapter.py: Adaptation of metadata files
  5. video_player.py: Visualization tool for before/after comparison
  6. view_parquet.py: Data analysis and validation tool

Quality Assurance

  • Conversion Validation: Automated testing of action vector dimensions
  • Statistical Analysis: Comparison of original vs converted data distributions
  • Visual Inspection: Frame-by-frame video comparison tool
  • Resume Capability: Conversion process can be resumed from interruption
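
A minimal version of the dimension and range checks described above could look like this (episode path assumed):

import numpy as np
import pandas as pd

episode = pd.read_parquet("g1-pick-apple/data/chunk-000/episode_000000.parquet")
for action in episode["action"]:
    a = np.asarray(action)
    assert a.shape == (31,), f"unexpected action shape: {a.shape}"
    assert 0.0 <= a[29] <= 1.0 and 0.0 <= a[30] <= 1.0, "gripper state out of range"
print(f"validated {len(episode)} frames")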

📚 Original Dataset Information

Source: NVIDIA PhysicalAI-Robotics-GR00T-Teleop-G1

Original Specifications:

  • 1000 teleoperation trajectories of Unitree G1 robot
  • 4 fruit types: Apple, Pear, Starfruit, Grape
  • RGB video: 640x480 resolution, 20 fps
  • Full body control: 43 DOF including detailed finger articulation
  • Language instructions: Pick and place tasks with natural language prompts

License: Creative Commons Attribution 4.0 International License (CC BY 4.0)


πŸ›οΈ Institution

Developed at: Skoltech (Skolkovo Institute of Science and Technology)

Research Focus: Physical AI and Robotics Control Simplification

Contact: For questions about this adaptation, please contact the Skoltech robotics research team.


🚀 Usage

This adapted dataset maintains full compatibility with the original LeRobot framework and training pipelines. Simply update your configuration to use 31-dimensional action vectors instead of 43-dimensional ones.

Quick Start

# Example: Loading the adapted dataset
import numpy as np
import pandas as pd

# Load converted episode data
episode_data = pd.read_parquet("g1-pick-apple/data/chunk-000/episode_000000.parquet")

# Access 31D action vectors (one per frame in the 'action' column)
action = np.asarray(episode_data.iloc[0]['action'])  # Shape: (31,)
left_gripper = action[29]   # Left hand state [0.0, 1.0]
right_gripper = action[30]  # Right hand state [0.0, 1.0]

📖 Citation

If you use this adapted dataset in your research, please cite both the original NVIDIA dataset and this adaptation:

@dataset{nvidia_physicalai_g1_original,
  title={PhysicalAI-Robotics-GR00T-Teleop-G1},
  author={{NVIDIA GEAR}},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-Teleop-G1}
}

@dataset{physicalai_g1_hand1_adaptation,
  title={PhysicalAI-Robotics-GR00T-Teleop-G1-hand1: 1D Hand Control Adaptation},
  author={Lykov, Artem and Kozlovsky, Vlad},
  year={2025},
  institution={Skolkovo Institute of Science and Technology},
  note={Adaptation of NVIDIA PhysicalAI-Robotics-GR00T-Teleop-G1 with simplified 1D hand control}
}

βš–οΈ License

This adapted dataset inherits the Creative Commons Attribution 4.0 International License (CC BY 4.0) from the original NVIDIA dataset. The adaptation tools and methodology are also released under the same open license.


πŸ” Version Information

  • Original Dataset Version: NVIDIA PhysicalAI-Robotics-GR00T-Teleop-G1 (June 2025)
  • Adaptation Version: 1.0
  • Conversion Date: January 2025
  • Tools Version: LeRobot-compatible conversion pipeline v1.0

This adaptation was created to facilitate research in simplified robotic manipulation and to provide an accessible entry point for researchers working with gripper-based robotic systems.
