Meta-World is a well-designed, open-source simulation benchmark for multi-task and meta reinforcement learning in continuous-control robotic manipulation. It gives researchers a shared, realistic playground to test whether algorithms can learn many different tasks and generalize quickly to new ones: two central challenges for real-world robotics.

In LeRobot, you can evaluate any policy or vision-language-action (VLA) model on Meta-World tasks and get a clear success-rate measure. The integration is designed to be straightforward:
- We provide a LeRobot-ready dataset for Meta-World (MT50) on the HF Hub: https://huggingface.co/datasets/lerobot/metaworld_mt50.
- Task descriptions and the exact keys required for evaluation are available in the repo/dataset; use these to ensure your policy outputs the right success signals.
Example command to train a SmolVLA policy on a subset of tasks:
```bash
lerobot-train \
  --policy.type=smolvla \
  --policy.repo_id=${HF_USER}/metaworld-test \
  --policy.load_vlm_weights=true \
  --dataset.repo_id=lerobot/metaworld_mt50 \
  --env.type=metaworld \
  --env.task=assembly-v3,dial-turn-v3,handle-press-side-v3 \
  --output_dir=./outputs/ \
  --steps=100000 \
  --batch_size=4 \
  --eval.batch_size=1 \
  --eval.n_episodes=1 \
  --eval_freq=1000
```

Notes:
- `--env.task` accepts explicit task lists (comma separated) or difficulty groups (e.g., `env.task="hard"`).
- Adjust `batch_size`, `steps`, and `eval_freq` to match your compute budget.
- If you see `AssertionError: ['human', 'rgb_array', 'depth_array']` when running Meta-World environments, it comes from a mismatch between Meta-World and your Gymnasium version. We recommend `pip install "gymnasium==1.1.0"` to ensure proper compatibility.
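If you want to catch the version mismatch before launching a long training run, a small runtime check can help. This is an optional sketch: the `importlib.metadata` lookup is standard-library Python, and the `1.1.0` target comes from the recommended pin above; the helper names are our own.

```python
from importlib import metadata


def parse_version(version: str) -> tuple[int, ...]:
    """Turn a dotted version string like '1.1.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split(".")[:3])


def check_gymnasium(expected: str = "1.1.0") -> bool:
    """Return True if the installed gymnasium matches the recommended pin."""
    try:
        installed = metadata.version("gymnasium")
    except metadata.PackageNotFoundError:
        print('gymnasium is not installed; run: pip install "gymnasium==1.1.0"')
        return False
    if parse_version(installed) != parse_version(expected):
        print(f"gymnasium {installed} found; {expected} is recommended for Meta-World")
        return False
    return True
```

Calling `check_gymnasium()` at the top of your launch script makes the failure mode an explicit message instead of an opaque render-mode assertion.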
To evaluate a trained policy on the Meta-World medium difficulty split:
```bash
lerobot-eval \
  --policy.path="your-policy-id" \
  --env.type=metaworld \
  --env.task=medium \
  --eval.batch_size=1 \
  --eval.n_episodes=2
```

This will run episodes and return per-task success rates using the standard Meta-World evaluation keys.
Rely on the `info["is_success"]` keys when writing post-processing or logging so your success metrics line up with the benchmark.
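As a sketch of that post-processing, the snippet below aggregates per-task success rates from the `is_success` flag in each episode's final `info` dict. The `(task_name, final_info)` record format is a hypothetical example for illustration, not LeRobot's internal log format.

```python
from collections import defaultdict


def success_rates(episodes):
    """Compute per-task success rates from episode records.

    Each record is a (task_name, final_info) pair, where final_info is the
    last `info` dict of the episode and carries the Meta-World success flag.
    """
    successes = defaultdict(int)
    counts = defaultdict(int)
    for task, info in episodes:
        counts[task] += 1
        successes[task] += int(bool(info.get("is_success", False)))
    return {task: successes[task] / counts[task] for task in counts}


# Hypothetical logged episodes for two tasks:
episodes = [
    ("assembly-v3", {"is_success": True}),
    ("assembly-v3", {"is_success": False}),
    ("dial-turn-v3", {"is_success": True}),
]
rates = success_rates(episodes)
# rates == {"assembly-v3": 0.5, "dial-turn-v3": 1.0}
```

Reading the flag via `info.get("is_success", False)` keeps the aggregation robust if an episode terminates without reporting the key.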