# Getting Started with ROBOTIS Lab

## Overview
ROBOTIS Lab is a research-oriented repository built on Isaac Lab, designed to enable reinforcement learning and imitation learning experiments with ROBOTIS robots in simulation. The project provides simulation environments, configuration tools, and task definitions tailored to ROBOTIS hardware, leveraging NVIDIA Isaac Sim's GPU-accelerated physics engine and Isaac Lab's modular RL pipeline.
> **INFO:** This repository currently depends on Isaac Lab v2.0.0 or higher.
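Before installing, it can be handy to confirm that the Isaac Lab version in your environment meets the v2.0.0 minimum. The sketch below is an illustration, not part of the repository; in particular, the distribution name `isaaclab` passed to `importlib.metadata` is an assumption and may differ in your installation.

```python
import re
from importlib.metadata import version, PackageNotFoundError

MIN_VERSION = (2, 0, 0)

def parse_version(text: str) -> tuple:
    """Extract leading numeric components, e.g. "2.0.0rc1" -> (2, 0, 0)."""
    parts = []
    for piece in text.split("."):
        match = re.match(r"\d+", piece)
        if not match:
            break
        parts.append(int(match.group()))
    return tuple(parts)

def isaac_lab_is_new_enough(dist_name: str = "isaaclab") -> bool:
    # NOTE: "isaaclab" as the distribution name is an assumption;
    # adjust it to whatever name your Isaac Lab install registers.
    try:
        installed = parse_version(version(dist_name))
    except PackageNotFoundError:
        return False
    return installed >= MIN_VERSION
```

Tuple comparison handles the check without pulling in an extra dependency such as `packaging`.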
## Installation
Follow the Isaac Lab installation guide to set up the environment. Instead of the recommended local installation, you can run Isaac Lab in a Docker container to simplify dependency management and ensure consistency across systems.

- Clone the Isaac Lab repository:

```bash
git clone https://github.com/isaac-sim/IsaacLab.git
```
- Start and enter the Docker container:

```bash
# start
./IsaacLab/docker/container.py start base

# enter
./IsaacLab/docker/container.py enter base
```
- Clone the robotis_lab repository (outside the IsaacLab directory):

```bash
cd /workspace && git clone https://github.com/ROBOTIS-GIT/robotis_lab.git
```
- Install the robotis_lab package:

```bash
cd robotis_lab
python -m pip install -e source/robotis_lab
```
- Verify that the extension is correctly installed by listing all available environments:

```bash
python scripts/tools/list_envs.py
```
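The listing script prints the Gym-style task IDs registered by the installed extensions. If you want to pick out only the robotis_lab tasks from such a list programmatically, a minimal sketch looks like this; the `RobotisLab-` prefix is taken from the example task name below, and the sample registry contents are hypothetical.

```python
def filter_robotis_tasks(env_ids, prefix="RobotisLab-"):
    """Return only the task IDs that belong to robotis_lab, sorted."""
    return sorted(eid for eid in env_ids if eid.startswith(prefix))

# Hypothetical registry contents for illustration:
registry = [
    "RobotisLab-Reach-FFW-BG2-v0",
    "Isaac-Cartpole-v0",
]
print(filter_robotis_tasks(registry))  # -> ['RobotisLab-Reach-FFW-BG2-v0']
```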
Once the installation is complete, the available training tasks are printed to the console.
## Running Examples

### Reinforcement Learning
You can train and run the FFW-BG2 Reach task using the following commands:

```bash
# Train
python scripts/reinforcement_learning/skrl/train.py --task RobotisLab-Reach-FFW-BG2-v0 --num_envs=512 --headless

# Play
python scripts/reinforcement_learning/skrl/play.py --task RobotisLab-Reach-FFW-BG2-v0 --num_envs=16
```
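When sweeping over tasks or environment counts, it can help to assemble these command lines programmatically rather than editing them by hand. The sketch below mirrors the train/play invocations above; the helper name and its parameters are illustrative, and the resulting list is meant to be passed to `subprocess.run()` from the robotis_lab checkout directory.

```python
import shlex

def build_skrl_command(script: str, task: str, num_envs: int,
                       headless: bool = False) -> list:
    """Assemble the argument vector for an skrl train or play run.

    `script` is "train" or "play"; the paths and flags mirror the
    commands shown above.
    """
    cmd = [
        "python", "scripts/reinforcement_learning/skrl/{}.py".format(script),
        "--task", task,
        "--num_envs={}".format(num_envs),
    ]
    if headless:
        cmd.append("--headless")  # run without the viewer, as when training
    return cmd

# Reproduce the training command above as a single shell-quoted string.
print(shlex.join(build_skrl_command(
    "train", "RobotisLab-Reach-FFW-BG2-v0", 512, headless=True)))
```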