Meta-World

Meta-World is a well-designed, open-source simulation benchmark for multi-task and meta reinforcement learning in continuous-control robotic manipulation. It gives researchers a shared, realistic playground to test whether algorithms can learn many different tasks and generalize quickly to new ones, two central challenges for real-world robotics.

MetaWorld MT10 demo

Why Meta-World matters

Meta-World standardizes task definitions and success criteria across dozens of manipulation tasks, so multi-task and meta-learning methods can be compared on equal footing and tested for generalization to held-out tasks.

What it enables in LeRobot

In LeRobot, you can evaluate any policy or vision-language-action (VLA) model on Meta-World tasks and get a per-task success-rate measure. The integration is designed to be straightforward.
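To illustrate the success-rate measure, here is a minimal sketch of aggregating per-episode success flags into per-task rates. This is not LeRobot's actual implementation; the `success_rates` helper and the episode structure are hypothetical, and the task names are taken from the training command below.

```python
from collections import defaultdict

def success_rates(episodes):
    """Aggregate (task_name, succeeded) pairs into per-task success rates."""
    totals = defaultdict(int)
    wins = defaultdict(int)
    for task, ok in episodes:
        totals[task] += 1
        wins[task] += int(ok)
    return {task: wins[task] / totals[task] for task in totals}

rates = success_rates([
    ("assembly-v3", True),
    ("assembly-v3", False),
    ("dial-turn-v3", True),
])
print(rates)  # {'assembly-v3': 0.5, 'dial-turn-v3': 1.0}
```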

Quick start: train a SmolVLA policy on Meta-World

Example command to train a SmolVLA policy on a subset of tasks:

lerobot-train \
  --policy.type=smolvla \
  --policy.repo_id=${HF_USER}/metaworld-test \
  --policy.load_vlm_weights=true \
  --dataset.repo_id=lerobot/metaworld_mt50 \
  --env.type=metaworld \
  --env.task=assembly-v3,dial-turn-v3,handle-press-side-v3 \
  --output_dir=./outputs/ \
  --steps=100000 \
  --batch_size=4 \
  --eval.batch_size=1 \
  --eval.n_episodes=1 \
  --eval_freq=1000
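The --env.task flag above takes a comma-separated list of task names. A small sketch of parsing and sanity-checking such a list; KNOWN_TASKS below lists only the three tasks from the command (the real Meta-World task set is much larger), and this helper is illustrative, not part of LeRobot:

```python
# Only the three tasks used in the command above; the full set is larger.
KNOWN_TASKS = {"assembly-v3", "dial-turn-v3", "handle-press-side-v3"}

def parse_tasks(spec):
    """Split a comma-separated task spec and reject unknown task names."""
    tasks = [t.strip() for t in spec.split(",") if t.strip()]
    unknown = [t for t in tasks if t not in KNOWN_TASKS]
    if unknown:
        raise ValueError(f"unknown tasks: {unknown}")
    return tasks

print(parse_tasks("assembly-v3,dial-turn-v3,handle-press-side-v3"))
# ['assembly-v3', 'dial-turn-v3', 'handle-press-side-v3']
```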

Notes:

Meta-World expects a specific Gymnasium version. Run

  pip install "gymnasium==1.1.0"

to ensure proper compatibility.
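To verify the pin took effect, here is a quick standard-library check (a sketch; it returns False when Gymnasium is not installed at all):

```python
from importlib import metadata

def gymnasium_matches(required="1.1.0"):
    """Return True only if the installed gymnasium version equals `required`."""
    try:
        return metadata.version("gymnasium") == required
    except metadata.PackageNotFoundError:
        return False

print(gymnasium_matches())
```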

Quick start: evaluate a trained policy

To evaluate a trained policy on the Meta-World medium difficulty split:

lerobot-eval \
  --policy.path="your-policy-id" \
  --env.type=metaworld \
  --env.task=medium \
  --eval.batch_size=1 \
  --eval.n_episodes=2

This runs the configured number of episodes and reports per-task success rates using the standard Meta-World evaluation keys.

Practical tips

Keep evaluation during training cheap (the training command above uses eval.batch_size=1 and eval.n_episodes=1 with eval_freq=1000), then run a fuller lerobot-eval pass on the trained checkpoint to get reliable success rates.
