# PhysicalAI-Robotics-GR00T-Teleop-G1-hand1

**Unitree G1 Fruits Pick and Place Dataset - 1D Hand Control Adaptation**
This dataset is an adapted version of the original NVIDIA PhysicalAI-Robotics-GR00T-Teleop-G1 dataset, modified for simplified 1-dimensional hand control instead of the original 7-dimensional finger joint control.
## Key Modifications
### Original Dataset (43 DOF)

- Hand Control: 7 joints per hand (14 total)
  - Left hand: `left_hand_index_0_joint`, `left_hand_index_1_joint`, `left_hand_middle_0_joint`, `left_hand_middle_1_joint`, `left_hand_thumb_0_joint`, `left_hand_thumb_1_joint`, `left_hand_thumb_2_joint`
  - Right hand: `right_hand_index_0_joint`, `right_hand_index_1_joint`, `right_hand_middle_0_joint`, `right_hand_middle_1_joint`, `right_hand_thumb_0_joint`, `right_hand_thumb_1_joint`, `right_hand_thumb_2_joint`
- Total Action Dimensions: 43
### Adapted Dataset (31 DOF)

- Hand Control: 1 gripper state per hand (2 total)
  - Left hand: `left_hand_state` (0.0 = open, 1.0 = closed)
  - Right hand: `right_hand_state` (0.0 = open, 1.0 = closed)
- Total Action Dimensions: 31 (43 - 14 + 2)
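For concreteness, the 43D-to-31D reduction can be sketched in NumPy. The original index layout used below (body 0-21, left-hand joints 22-28, right arm 29-35, right-hand joints 36-42) is inferred from the hand-index note in the metadata changes (22-28, 36-42 → 29, 30); treat it as an assumption if your copy of `modality.json` differs.

```python
import numpy as np

# Assumed original 43D layout (inferred from the modality.json update note):
#   0-21  legs, waist, left arm
#   22-28 left-hand finger joints (dropped)
#   29-35 right arm
#   36-42 right-hand finger joints (dropped)
def to_31d(action_43, left_state, right_state):
    """Drop the 14 finger-joint values and append two gripper states."""
    body = action_43[:22]         # legs, waist, left arm
    right_arm = action_43[29:36]  # right arm
    return np.concatenate([body, right_arm, [left_state, right_state]])

a31 = to_31d(np.zeros(43), 0.0, 1.0)
# a31 has shape (31,), with a31[29] = 0.0 and a31[30] = 1.0
```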
## Methodology
The conversion from 7D to 1D hand control was accomplished through machine learning regression:
### Data Annotation Process

- Interactive Video Annotation Tool: A custom tool was developed to manually annotate hand states
- Human Expert Labeling: 50-100 hand states labeled per dataset as open (0) or closed (1)
- Statistical Validation: Annotations validated against joint value distributions
### Regression Model Training
- Input Features: 7-dimensional joint values for each hand
- Output Target: Single continuous value [0.0, 1.0] representing gripper state
- Models Tested: Linear Regression, Ridge Regression, Random Forest
- Best Model Selection: Automatic selection based on R² score and MSE
- Training Data: Real annotated samples from original teleoperation trajectories
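The training-and-selection step above can be illustrated with a minimal sketch. The actual pipeline tested Linear Regression, Ridge, and Random Forest; this stand-in fits closed-form linear and ridge models in plain NumPy on synthetic data and keeps the candidate with the best R², so all data and names here are illustrative, not the project's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the annotated samples: 7 joint values per hand
# mapped to one [0, 1] gripper label (real labels come from the
# annotation tool; this data is illustrative only).
X = rng.uniform(-1.0, 1.0, size=(80, 7))
y = np.clip(X @ rng.uniform(0.0, 0.1, size=7) + 0.5, 0.0, 1.0)

def fit(X, y, alpha=0.0):
    """Closed-form (ridge) least squares with a bias column."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.linalg.solve(Xb.T @ Xb + alpha * np.eye(Xb.shape[1]), Xb.T @ y)
    return lambda Xq: np.hstack([Xq, np.ones((len(Xq), 1))]) @ w

def r2(y_true, y_pred):
    res = np.sum((y_true - y_pred) ** 2)
    tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - res / tot

# "Best model selection": keep the candidate with the highest R² score.
models = {"linear": fit(X, y), "ridge": fit(X, y, alpha=1.0)}
best_name, best_model = max(models.items(), key=lambda kv: r2(y, kv[1](X)))
best_r2 = r2(y, best_model(X))
```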
### Dataset Conversion Pipeline
- Episode-by-Episode Processing: Each trajectory converted individually
- Regression Prediction: Trained model predicts 1D hand state from 7D joint values
- Fallback Algorithm: Simple mean-based conversion when model unavailable
- Data Validation: Extensive testing to ensure conversion quality
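The source describes the fallback only as a "simple mean-based conversion". One plausible minimal form is shown below; the reference means (`open_mean`, `closed_mean`) are illustrative assumptions, not dataset constants.

```python
import numpy as np

def fallback_hand_state(joints, open_mean=0.0, closed_mean=1.2):
    """Mean-based fallback: normalize the mean of the 7 joint values
    between assumed open/closed reference means, clipped to [0, 1].
    The reference means here are illustrative, not dataset values."""
    frac = (np.mean(joints) - open_mean) / (closed_mean - open_mean)
    return float(np.clip(frac, 0.0, 1.0))

fallback_hand_state(np.zeros(7))      # fully open  -> 0.0
fallback_hand_state(np.full(7, 1.2))  # fully closed -> 1.0
```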
## Dataset Format

### Modality Structure
```json
{
    "action": {
        "left_leg":   {"start": 0,  "end": 6},   // 6 DOF
        "right_leg":  {"start": 6,  "end": 12},  // 6 DOF
        "waist":      {"start": 12, "end": 15},  // 3 DOF
        "left_arm":   {"start": 15, "end": 22},  // 7 DOF
        "right_arm":  {"start": 22, "end": 29},  // 7 DOF
        "left_hand":  {"start": 29, "end": 30},  // 1 DOF (gripper state)
        "right_hand": {"start": 30, "end": 31}   // 1 DOF (gripper state)
    }
}
```
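A small helper can slice a 31D action vector using this mapping; the dict literal below simply restates the modality JSON as `{name: (start, end)}` pairs.

```python
import numpy as np

# Restates the modality mapping as {name: (start, end)}.
MODALITY = {
    "left_leg": (0, 6), "right_leg": (6, 12), "waist": (12, 15),
    "left_arm": (15, 22), "right_arm": (22, 29),
    "left_hand": (29, 30), "right_hand": (30, 31),
}

def split_action(action):
    """Slice a 31D action vector into its named body parts."""
    return {name: action[start:end] for name, (start, end) in MODALITY.items()}

parts = split_action(np.arange(31))
# parts["waist"] -> array([12, 13, 14]); parts["left_hand"] -> array([29])
```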
### Action Vector Composition

- Indices 0-28: Same as original (legs, waist, arms)
- Index 29: `left_hand_state` - Continuous value [0.0, 1.0]
- Index 30: `right_hand_state` - Continuous value [0.0, 1.0]
### Hand State Interpretation

- 0.0: Fully open hand/gripper
- 1.0: Fully closed hand/gripper
- Values in between: Intermediate gripper positions
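For hardware that only accepts discrete open/close commands, the continuous state can be thresholded; the 0.5 threshold below is a tunable assumption, not part of the dataset.

```python
def to_binary_command(state, threshold=0.5):
    """Map the continuous [0, 1] hand state to a discrete command.
    The threshold is a tunable assumption, not a dataset constant."""
    return "close" if state >= threshold else "open"

to_binary_command(0.1)  # -> "open"
to_binary_command(0.9)  # -> "close"
```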
## Advantages of 1D Hand Control

### Simplified Control

- Easier Learning: Reduced complexity for RL algorithms
- Faster Training: Lower-dimensional action space
- Better Generalization: Single gripper parameter easier to transfer
### Practical Benefits
- Hardware Compatibility: Works with simple gripper mechanisms
- Real-time Control: Lower computational requirements
- Intuitive Interface: Direct mapping to gripper open/close commands
## Dataset Structure
```
PhysicalAI-Robotics-GR00T-Teleop-G1-hand1/
├── g1-pick-apple/
│   ├── data/chunk-000/      # Converted action data (31D)
│   ├── meta/                # Updated metadata files
│   └── videos/              # Original videos (symlinked)
├── g1-pick-grapes/
├── g1-pick-pear/
├── g1-pick-starfruit/
└── README.md                # This file
```
### Files Modified

- `info.json`: Updated action dimensions and joint names
- `modality.json`: Updated hand indices (22-28, 36-42 → 29, 30)
- `stats.json`: Recomputed statistics for 31D action space
- `episode_*.parquet`: All trajectory data converted to 31D format
### Files Preserved

- `tasks.jsonl`: Task descriptions unchanged
- `episodes.jsonl`: Episode metadata unchanged
- Videos: Original MP4 files preserved via symbolic links
## Technical Implementation

### Conversion Tools
A complete toolkit was developed for the conversion process:
- `hand_annotator.py`: Interactive video annotation tool
- `hand_regressor.py`: ML model training and evaluation
- `dataset_converter.py`: Main conversion pipeline
- `meta_adapter.py`: Metadata file adaptation
- `video_player.py`: Visualization tool for before/after comparison
- `view_parquet.py`: Data analysis and validation tool
### Quality Assurance
- Conversion Validation: Automated testing of action vector dimensions
- Statistical Analysis: Comparison of original vs converted data distributions
- Visual Inspection: Frame-by-frame video comparison tool
- Resume Capability: Conversion process can be resumed from interruption
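In the spirit of the automated dimension check above, a minimal validation pass might look like this; the function name and the exact checks are illustrative, not the project's code.

```python
import numpy as np

def validate_episode(actions):
    """Check that converted action vectors are 31D with gripper states
    (indices 29 and 30) inside [0, 1]. Illustrative QA sketch."""
    actions = np.asarray(actions)
    ok_shape = actions.ndim == 2 and actions.shape[1] == 31
    ok_range = ok_shape and bool(
        np.all((actions[:, 29:31] >= 0.0) & (actions[:, 29:31] <= 1.0))
    )
    return ok_shape and ok_range

validate_episode(np.zeros((50, 31)))  # -> True
validate_episode(np.zeros((50, 43)))  # -> False (unconverted 43D data)
```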
## Original Dataset Information
Source: NVIDIA PhysicalAI-Robotics-GR00T-Teleop-G1
Original Specifications:
- 1000 teleoperation trajectories of Unitree G1 robot
- 4 fruit types: Apple, Pear, Starfruit, Grape
- RGB video: 640x480 resolution, 20fps
- Full body control: 43 DOF including detailed finger articulation
- Language instructions: Pick and place tasks with natural language prompts
License: Creative Commons Attribution 4.0 International License (CC BY 4.0)
## Institution
Developed at: Skoltech (Skolkovo Institute of Science and Technology)
Research Focus: Physical AI and Robotics Control Simplification
Contact: For questions about this adaptation, please contact the Skoltech robotics research team.
## Usage
This adapted dataset maintains full compatibility with the original LeRobot framework and training pipelines. Simply update your configuration to use 31-dimensional action vectors instead of 43-dimensional ones.
### Quick Start

```python
# Example: Loading the adapted dataset
import pandas as pd

# Load converted episode data
episode_data = pd.read_parquet("g1-pick-apple/data/chunk-000/episode_000000.parquet")

# Access 31D action vectors
action = episode_data.iloc[0]['action']  # Shape: (31,)
left_gripper = action[29]   # Left hand state [0.0, 1.0]
right_gripper = action[30]  # Right hand state [0.0, 1.0]
```
## Citation
If you use this adapted dataset in your research, please cite both the original NVIDIA dataset and this adaptation:
```bibtex
@dataset{nvidia_physicalai_g1_original,
  title     = {PhysicalAI-Robotics-GR00T-Teleop-G1},
  author    = {NVIDIA GEAR},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-Teleop-G1}
}

@dataset{physicalai_g1_hand1_adaptation,
  title       = {PhysicalAI-Robotics-GR00T-Teleop-G1-hand1: 1D Hand Control Adaptation},
  author      = {Artem Lykov and Vlad Kozlovsky},
  year        = {2025},
  institution = {Skolkovo Institute of Science and Technology},
  note        = {Adaptation of NVIDIA PhysicalAI-Robotics-GR00T-Teleop-G1 with simplified 1D hand control}
}
```
## License
This adapted dataset inherits the Creative Commons Attribution 4.0 International License (CC BY 4.0) from the original NVIDIA dataset. The adaptation tools and methodology are also released under the same open license.
## Version Information
- Original Dataset Version: NVIDIA PhysicalAI-Robotics-GR00T-Teleop-G1 (June 2025)
- Adaptation Version: 1.0
- Conversion Date: January 2025
- Tools Version: LeRobot-compatible conversion pipeline v1.0
This adaptation was created to facilitate research in simplified robotic manipulation and to provide an accessible entry point for researchers working with gripper-based robotic systems.