
Imitation Learning

Overview

This document provides an overview of the complete ROS 2-based imitation learning pipeline built on the OMX and the Hugging Face Hub. The pipeline consists of four stages:

1. Data Collection

Human operators use a leader device to demonstrate motions, collecting image and joint position data. The collected data can be uploaded to and downloaded from the Hugging Face Hub.
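For example, a recorded dataset directory can be pushed to the Hub with the huggingface_hub client. This is a minimal sketch rather than the exact OMX tooling; the repository ID and local path are hypothetical placeholders:

```python
from huggingface_hub import HfApi

# Upload a locally recorded dataset directory to the Hugging Face Hub.
# "your-username/omx_pick_place" and "data/omx_pick_place" are placeholders;
# substitute your own repo ID and recording path.
api = HfApi()
api.create_repo("your-username/omx_pick_place", repo_type="dataset", exist_ok=True)
api.upload_folder(
    folder_path="data/omx_pick_place",
    repo_id="your-username/omx_pick_place",
    repo_type="dataset",
)
```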

2. Data Visualization

Collected data is visualized to inspect motion trajectories and images, helping to identify potential errors prior to training.
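A quick way to inspect an episode before training is to plot its recorded joint trajectories. The sketch below assumes the dataset is in the 🤗 datasets format described under Dataset Schema; the repo ID is again a placeholder:

```python
import matplotlib.pyplot as plt
from datasets import load_dataset

# Hypothetical repo ID; replace with your own dataset on the Hub.
ds = load_dataset("your-username/omx_pick_place", split="train")

# Keep only the frames belonging to the first episode.
episode = ds.filter(lambda row: row["episode_index"] == 0)

# Plot each joint of the follower state vector over time.
times = episode["timestamp"]
series_per_joint = list(zip(*episode["observation.state"]))
for joint_id, series in enumerate(series_per_joint):
    plt.plot(times, series, label=f"joint {joint_id}")
plt.xlabel("time (s)")
plt.ylabel("joint position")
plt.legend()
plt.show()
```

Discontinuities or flat segments in these curves are a common sign of a bad demonstration worth removing before training.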

3. Model Training

The verified dataset is then used to train an action policy model. Training can be performed on local GPUs or on embedded platforms such as the NVIDIA Jetson. The resulting model can be uploaded to and downloaded from the Hugging Face Hub.
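As an illustration of what such a policy can look like, the sketch below trains a small behavioral cloning MLP that maps the follower state to the leader action with an MSE loss. This is a minimal stand-in for the actual LeRobot training pipeline, not the real training script; the repo ID is a placeholder and the whole dataset is loaded into memory for simplicity:

```python
import torch
import torch.nn as nn
from datasets import load_dataset

# Hypothetical repo ID; replace with your verified dataset.
ds = load_dataset("your-username/omx_pick_place", split="train").with_format("torch")

states = ds["observation.state"].float()   # (N, state_dim) follower states
actions = ds["action"].float()             # (N, action_dim) leader actions

# Minimal behavioral cloning policy: follower state -> leader action.
policy = nn.Sequential(
    nn.Linear(states.shape[1], 128),
    nn.ReLU(),
    nn.Linear(128, actions.shape[1]),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    loss = nn.functional.mse_loss(policy(states), actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

A trained policy can then be saved with torch.save and uploaded to the Hub in the same way as the dataset.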

4. Model Inference

Once trained, the models are deployed on the OMX to run real-time inference for tasks such as picking and placing.
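At deployment time, inference typically runs inside a ROS 2 node that consumes the follower's joint states and publishes predicted commands. The rclpy sketch below uses assumed topic names and a placeholder checkpoint path; the actual OMX topics and model-loading code depend on your setup:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

import torch


class PolicyNode(Node):
    """Runs a trained policy on incoming joint states (topic names assumed)."""

    def __init__(self):
        super().__init__("omx_policy_node")
        # "policy.pt" is a placeholder for your trained model checkpoint.
        self.policy = torch.load("policy.pt", weights_only=False)
        self.policy.eval()
        self.sub = self.create_subscription(
            JointState, "/joint_states", self.on_joint_state, 10)
        self.pub = self.create_publisher(JointState, "/goal_joint_states", 10)

    def on_joint_state(self, msg: JointState):
        state = torch.tensor(msg.position, dtype=torch.float32)
        with torch.no_grad():
            action = self.policy(state.unsqueeze(0)).squeeze(0)
        cmd = JointState()
        cmd.header.stamp = self.get_clock().now().to_msg()
        cmd.name = list(msg.name)
        cmd.position = action.tolist()
        self.pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(PolicyNode())


if __name__ == "__main__":
    main()
```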

End-to-End Imitation Learning Workflow

  • The diagram below shows the full imitation learning workflow using the OMX and Hugging Face.
[Figure: Imitation Learning Workflow]

Dataset Schema

The dataset follows the standard 🤗 Hugging Face datasets format and contains imitation learning demonstrations collected from the OMX via ROS 2 teleoperation using the LeRobot framework.

| Field | Type | Description |
|---|---|---|
| action | List[float32] | Leader state vector |
| observation.state | List[float32] | Follower state vector |
| observation.images.camera1 | Image | RGB image from the first wrist camera |
| observation.images.camera2 | Image | RGB image from the second wrist camera |
| timestamp | float32 | Time (in seconds) when the step was recorded |
| frame_index | int64 | Index of the frame within an episode |
| episode_index | int64 | Index of the episode |
| index | int64 | Global index across the dataset |
| task_index | int64 | Task identifier |
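To confirm that a downloaded dataset matches this schema, you can load it and inspect a single step; the repo ID below is again a placeholder:

```python
from datasets import load_dataset

# Hypothetical repo ID; replace with the dataset you uploaded.
ds = load_dataset("your-username/omx_pick_place", split="train")

step = ds[0]
print(ds.features)  # field names and types
print(step["episode_index"], step["frame_index"], step["timestamp"])
print(len(step["action"]), len(step["observation.state"]))
step["observation.images.camera1"].show()  # PIL image from the first wrist camera
```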
