# RoboChallenge Dataset

## Tasks and Embodiments
The dataset includes 30 diverse manipulation tasks (Table30) across 4 embodiments.

### Available Tasks
- `arrange_flowers`
- `arrange_fruits_in_basket`
- `arrange_paper_cups`
- ...
- `turn_on_light_switch`
- `water_potted_plant`
- `wipe_the_table`

### Embodiments
- **ARX5** - Single arm with a triple-camera setup (wrist + global + right-side views)
- **UR5** - Single arm with a dual-camera setup (wrist + global views)
- **Franka** - Single arm with a triple-camera setup (wrist + main + side views)
- **ALOHA** - Dual arm with a triple-camera setup (left wrist + right wrist + global views)

## Dataset Structure
### Hierarchy
The dataset is organized by tasks, with each task containing multiple demonstration episodes:
```
...
β”‚ β”œβ”€β”€ episode_000000/
β”‚ β”‚ β”œβ”€β”€ meta/
β”‚ β”‚ β”‚ └── episode_meta.json         # Episode metadata
β”‚ β”‚ β”œβ”€β”€ states/
β”‚ β”‚ β”‚ # For single-arm robots (ARX5, UR5, Franka):
β”‚ β”‚ β”‚ β”œβ”€β”€ states.jsonl              # Single-arm robot states
β”‚ β”‚ β”‚ # For the dual-arm robot (ALOHA):
β”‚ β”‚ β”‚ β”œβ”€β”€ left_states.jsonl         # Left arm states
β”‚ β”‚ β”‚ └── right_states.jsonl        # Right arm states
β”‚ β”‚ └── videos/
β”‚ β”‚     # Video files vary by robot model:
β”‚ β”‚     # ARX5:
β”‚ β”‚ β”œβ”€β”€ arm_realsense_rgb.mp4       # Wrist view
β”‚ β”‚ β”œβ”€β”€ global_realsense_rgb.mp4    # Global view
β”‚ β”‚ └── right_realsense_rgb.mp4     # Side view
β”‚ β”‚     # UR5:
β”‚ β”‚ β”œβ”€β”€ global_realsense_rgb.mp4    # Global view
β”‚ β”‚ └── handeye_realsense_rgb.mp4   # Wrist view
β”‚ β”‚     # Franka:
β”‚ β”‚ β”œβ”€β”€ handeye_realsense_rgb.mp4   # Wrist view
β”‚ β”‚ β”œβ”€β”€ main_realsense_rgb.mp4      # Global view
β”‚ β”‚ └── side_realsense_rgb.mp4      # Side view
β”‚ β”‚     # ALOHA:
β”‚ β”‚ β”œβ”€β”€ cam_high_rgb.mp4            # Global view
β”‚ β”‚ β”œβ”€β”€ cam_wrist_left_rgb.mp4      # Left wrist view
β”‚ β”‚ └── cam_wrist_right_rgb.mp4     # Right wrist view
β”‚ β”œβ”€β”€ episode_000001/
β”‚ └── ...
β”œβ”€β”€ convert_to_lerobot.py           # Conversion script
└── README.md
```
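
For quick inspection, the per-embodiment camera files can be enumerated with `opencv-python` (already required for conversion below). A minimal sketch; the task path is a placeholder, and the `CAMERA_FILES` mapping simply restates the tree above:

```python
from pathlib import Path

import cv2  # opencv-python

# Camera files per embodiment, copied from the directory layout above.
CAMERA_FILES = {
    "ARX5": ["arm_realsense_rgb.mp4", "global_realsense_rgb.mp4", "right_realsense_rgb.mp4"],
    "UR5": ["global_realsense_rgb.mp4", "handeye_realsense_rgb.mp4"],
    "Franka": ["handeye_realsense_rgb.mp4", "main_realsense_rgb.mp4", "side_realsense_rgb.mp4"],
    "ALOHA": ["cam_high_rgb.mp4", "cam_wrist_left_rgb.mp4", "cam_wrist_right_rgb.mp4"],
}

task_dir = Path("/path/to/<task_name>")  # placeholder: any downloaded task directory

# rglob, since episode directories may sit one level below the task root.
for episode_dir in sorted(p for p in task_dir.rglob("episode_*") if p.is_dir()):
    for video in CAMERA_FILES["ARX5"]:  # pick the embodiment that recorded this task
        cap = cv2.VideoCapture(str(episode_dir / "videos" / video))
        print(episode_dir.name, video, int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), "frames")
        cap.release()
```
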
### Metadata Schema
`task_info.json`
```json
{
  ...
}
```
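
Whatever fields the file carries (the body is abbreviated above), it parses with the Python standard library. A minimal sketch with a placeholder path:

```python
import json
from pathlib import Path

# Placeholder path: point at a downloaded task directory.
task_info = json.loads(Path("/path/to/<task_name>/task_info.json").read_text())
print(sorted(task_info))  # inspect the available metadata keys
```
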
### Robot States Schema
Each episode stores its state data in JSONL format. The file layout depends on the embodiment:
- **Single-arm robots (ARX5, UR5, Franka)** β†’ `states.jsonl`
- **Dual-arm robot (ALOHA)** β†’ `left_states.jsonl` and `right_states.jsonl`

Each file records the robot's proprioceptive signals per frame, including joint angles, end-effector poses, gripper states, and timestamps. The exact field definitions and coordinate conventions vary by platform, as summarized below (a parsing sketch follows the tables).

#### ARX5
| Data Name | Data Key | Shape | Semantics |
|:---------:|:--------:|:-----:|:----------|
| Joint control | `joint_positions` | (6,) | Joint angles (in radians) from the base to the end effector. |
| Pose control | `ee_positions` | (6,) | End-effector pose (tx, ty, tz, roll, pitch, yaw), where (roll, pitch, yaw) are Euler angles relative to the arm base frame. X: back to front; Y: right to left; Z: down to up. |
| Gripper control | `gripper` | (1,) | Actual gripper width measurement in meters. |
| Timestamp | `timestamp` | (1,) | Floating-point timestamp (in milliseconds) of each frame. |

#### UR5
| Data Name | Data Key | Shape | Semantics |
|:---------:|:--------:|:-----:|:----------|
| Joint control | `joint_positions` | (6,) | Joint angles (in radians) from the base to the end effector. |
| Pose control | `ee_positions` | (7,) | End-effector pose (tx, ty, tz, rx, ry, rz, rw), where (tx, ty, tz) is the position relative to the arm base frame and (rx, ry, rz, rw) is a quaternion rotation. X: front to back; Y: left to right; Z: down to up. |
| Gripper control | `gripper` | (1,) | Gripper closing angle: 0 for fully open, 255 for fully closed. |
| Timestamp | `timestamp` | (1,) | Floating-point timestamp (in milliseconds) of each frame. |

#### Franka
| Data Name | Data Key | Shape | Semantics |
|:---------:|:--------:|:-----:|:----------|
| Joint control | `joint_positions` | (7,) | Joint angles (in radians) from the base to the end effector. |
| Pose control | `ee_positions` | (7,) | End-effector pose (tx, ty, tz, rx, ry, rz, rw), where (tx, ty, tz) is the position relative to the arm base frame and (rx, ry, rz, rw) is a quaternion rotation. X: back to front; Y: right to left; Z: down to up. |
| Gripper control | `gripper` | (2,) | Gripper trigger signals, in (close_button, open_button) order. |
| Gripper width | `gripper_width` | (1,) | Actual gripper width measurement. |
| Timestamp | `timestamp` | (1,) | Floating-point timestamp (in milliseconds) of each frame. |

#### ALOHA
| Data Name | Data Key | Shape | Semantics |
|:---------:|:--------:|:-----:|:----------|
| Master joint control | `joint_positions` | (6,) | Master joint angles (in radians) from the base to the end effector. |
| Joint velocity | `joint_vel` | (7,) | Velocities of the 6 joints and the gripper. |
| Puppet joint control | `qpos` | (6,) | Puppet joint angles (in radians) from the base to the end effector. |
| Puppet pose control | `ee_pose_quaternion` | (7,) | End-effector pose (tx, ty, tz, rx, ry, rz, rw), where (tx, ty, tz) is the position relative to the arm base frame and (rx, ry, rz, rw) is a quaternion rotation. X: back to front; Y: right to left; Z: down to up. |
| Puppet pose control | `ee_pose_rpy` | (6,) | End-effector pose (tx, ty, tz, rr, rp, ry), where (tx, ty, tz) is the position relative to the arm base frame and (rr, rp, ry) are Euler angles (in radians). X: back to front; Y: right to left; Z: down to up. |
| Gripper control | `gripper` | (1,) | Actual gripper width measurement in meters. |
| Timestamp | `timestamp` | (1,) | Floating-point timestamp (in milliseconds) of each frame. |
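
The JSONL files parse with the standard library and stack naturally into `numpy` arrays. A minimal single-arm sketch with a placeholder episode path; for ALOHA, read `left_states.jsonl` and `right_states.jsonl` the same way:

```python
import json
from pathlib import Path

import numpy as np

# Placeholder path: adjust to a real episode directory.
states_path = Path("/path/to/episode_000000/states/states.jsonl")

# One JSON object per line, one line per frame.
frames = [json.loads(line) for line in states_path.read_text().splitlines() if line.strip()]

# Keys follow the tables above (here: the single-arm fields).
joint_positions = np.array([f["joint_positions"] for f in frames])  # (T, 6) or (T, 7)
ee_positions = np.array([f["ee_positions"] for f in frames])        # (T, 6) or (T, 7)
timestamps = np.array([f["timestamp"] for f in frames])             # per-frame, ms

print(joint_positions.shape, ee_positions.shape, timestamps.shape)
```
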
## Convert to LeRobot

While you can implement a custom Dataset class to read RoboChallenge data directly, **we strongly recommend converting to LeRobot format** to take advantage of [LeRobot](https://github.com/huggingface/lerobot)'s comprehensive data processing and loading utilities.

The provided script **`convert_to_lerobot.py`** converts **ARX5** data to a LeRobot dataset as an example. For the other embodiments (UR5, Franka, ALOHA), you can adapt the script accordingly.

### Prerequisites
- Python 3.9+ with the following packages:
  - `lerobot==0.1.0`
  - `opencv-python`
  - `numpy`
- Configure `$LEROBOT_HOME` (defaults to `~/.lerobot` if unset).

```bash
pip install lerobot==0.1.0 opencv-python numpy
export LEROBOT_HOME="/path/to/lerobot_home"
```
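
### Usage
Conversion is a single call to the script. A sketch of the invocation; the flag names below are placeholders, so check the argument parser in `convert_to_lerobot.py` for the actual options:

```bash
# Placeholder flags; see convert_to_lerobot.py for the real CLI.
python convert_to_lerobot.py \
    --task-dir /path/to/<task_name> \
    --repo-name <repo-name>
```
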
### Output
- Frames and metadata are saved to `$LEROBOT_HOME/<repo-name>`.
- At the end, the script calls `dataset.consolidate(run_compute_stats=False)`. If you require aggregated statistics, run it with `run_compute_stats=True` or execute a separate stats job.
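
Once converted, the episodes load through LeRobot's dataset class. A minimal sketch, assuming the import path used by `lerobot==0.1.0` and the `<repo-name>` from the conversion step:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Placeholder repo id: use the <repo-name> passed to the conversion script.
# Data is read from $LEROBOT_HOME, as configured in the Prerequisites.
dataset = LeRobotDataset("<repo-name>")

frame = dataset[0]  # per-frame dict: camera frames, states/actions, timestamps
print(len(dataset), "frames; keys:", list(frame.keys()))
```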