This guide shows you how to train policies on multiple GPUs using Hugging Face Accelerate.
First, ensure you have accelerate installed:

```bash
pip install accelerate
```
You can launch training in two ways:
You can specify all parameters directly in the command without running accelerate config:
```bash
accelerate launch \
  --multi_gpu \
  --num_processes=2 \
  $(which lerobot-train) \
  --dataset.repo_id=${HF_USER}/my_dataset \
  --policy.type=act \
  --policy.repo_id=${HF_USER}/my_trained_policy \
  --output_dir=outputs/train/act_multi_gpu \
  --job_name=act_multi_gpu \
  --wandb.enable=true
```

Key accelerate parameters:

- `--multi_gpu`: enable multi-GPU training
- `--num_processes=2`: number of GPUs to use
- `--mixed_precision=fp16`: use fp16 mixed precision (or bf16 if supported)

If you prefer to save your configuration, you can optionally configure accelerate for your hardware setup by running:
```bash
accelerate config
```
This interactive setup asks you questions about your training environment (number of GPUs, mixed precision settings, etc.) and saves the configuration for future use. For a simple multi-GPU setup on a single machine, answer the prompts accordingly (multi-GPU, your number of GPUs, your preferred mixed precision).
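For reference, a saved configuration for a single-machine, 2-GPU, fp16 setup might look roughly like the following. This is an illustrative sketch, not the exact file accelerate will write; the keys present vary by accelerate version and the answers you give:

```yaml
# Illustrative accelerate config sketch (single machine, 2 GPUs, fp16)
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
num_machines: 1
machine_rank: 0
num_processes: 2
mixed_precision: fp16
```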
Then launch training with:
```bash
accelerate launch $(which lerobot-train) \
  --dataset.repo_id=${HF_USER}/my_dataset \
  --policy.type=act \
  --policy.repo_id=${HF_USER}/my_trained_policy \
  --output_dir=outputs/train/act_multi_gpu \
  --job_name=act_multi_gpu \
  --wandb.enable=true
```

Note the following behavior when you launch training with accelerate:
Important: LeRobot does NOT automatically scale learning rates or training steps based on the number of GPUs. This gives you full control over your training hyperparameters.
Many distributed training frameworks automatically scale the learning rate by the number of GPUs (e.g., lr = base_lr × num_gpus).
However, LeRobot keeps the learning rate exactly as you specify it.
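For contrast, here is a sketch of the linear-scaling rule that such frameworks apply. The helper function is hypothetical and not part of LeRobot; it only illustrates the arithmetic:

```python
# Hypothetical helper illustrating the linear scaling rules that some
# distributed frameworks apply automatically. LeRobot does NOT do this
# for you; it uses the values you pass on the command line as-is.

def scale_hyperparams(base_lr, batch_size, steps, num_gpus):
    """Return (scaled_lr, effective_batch_size, scaled_steps)."""
    scaled_lr = base_lr * num_gpus        # linear LR scaling
    effective_bs = batch_size * num_gpus  # per-GPU batch x number of GPUs
    scaled_steps = steps // num_gpus      # fewer steps to see the same number of samples
    return scaled_lr, effective_bs, scaled_steps

# Base LR 1e-4 on 2 GPUs -> 2e-4; batch_size=8 -> effective 16;
# steps 100000 -> 50000.
print(scale_hyperparams(1e-4, 8, 100_000, 2))
```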
If you want to scale your hyperparameters when using multiple GPUs, you should do it manually:
Learning Rate Scaling:
```bash
# Example: 2 GPUs with linear LR scaling
# Base LR: 1e-4, with 2 GPUs -> 2e-4
accelerate launch --num_processes=2 $(which lerobot-train) \
  --optimizer.lr=2e-4 \
  --dataset.repo_id=lerobot/pusht \
  --policy=act
```

Training Steps Scaling:
Since the effective batch size increases with multiple GPUs (batch_size × num_gpus), you may want to reduce the number of training steps proportionally:
```bash
# Example: 2 GPUs with effective batch size 2x larger
# Original: batch_size=8, steps=100000
# With 2 GPUs: batch_size=8 (16 in total), steps=50000
accelerate launch --num_processes=2 $(which lerobot-train) \
  --batch_size=8 \
  --steps=50000 \
  --dataset.repo_id=lerobot/pusht \
  --policy=act
```

A few things to keep in mind:

- The `--policy.use_amp` flag in `lerobot-train` is only used when not running with accelerate. When using accelerate, mixed precision is controlled by accelerate's configuration.
- The effective batch size is `batch_size × num_gpus`. If you use 4 GPUs with `--batch_size=8`, your effective batch size is 32.
- LeRobot sets `step_scheduler_with_optimizer=False` to prevent accelerate from adjusting scheduler steps based on the number of processes.

For more advanced configurations and troubleshooting, see the Accelerate documentation. If you want to learn more about how to train on a large number of GPUs, check out this awesome guide: the Ultrascale Playbook.