
AI Worker: Freedom from Work

Learn from humans, perform like experts.

Overview of the ROBOTIS Physical AI Lineup

ROBOTIS Physical AI Lineup: progressing toward advanced Physical AI through a scalable research lineup

The ROBOTIS Physical AI Lineup consists of three scalable levels of research-focused robots:

  • Level 3 (Enterprise): AI Worker, semi-humanoid robot systems
  • Level 2 (Middle): OMY, advanced AI manipulators
  • Level 1 (Entry): OMX, cost-effective AI manipulators [Coming Soon]

Each level supports a progressive research journey in Physical AI, beginning with basic motion learning and continuing through full-body imitation to autonomous operation.

Lineup Breakdown

1. AI Worker Series (Enterprise Level)

  • Full-body semi-humanoid platform (19 to 25 DOF robot body)
  • Supports bimanual manipulation
  • Designed for imitation learning and autonomous policy training
  • Compatible with ROS 2 and Physical AI Tools

2. OMY Series (Middle Level)

  • Collaborative robot-style arms (6 DOF robot arm + gripper)
  • Gravity compensation and self-collision detection
  • Suitable for advanced control
  • Compatible with ROS 2 and Physical AI Tools

3. OMX Series (Entry Level) [Coming Soon]

  • Affordable, lightweight AI manipulators (5 DOF robot arm + gripper)
  • Ideal for education and basic RL/IL (reinforcement/imitation learning) experiments
  • Compatible with ROS 2 and Physical AI Tools (see the ROS 2 sketch after this list)
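
All three series list ROS 2 compatibility. As a rough illustration of what that can look like in practice, here is a minimal sketch of an rclpy node that subscribes to joint states; the /joint_states topic name follows the common ROS 2 convention and is an assumption here, not a documented AI Worker, OMY, or OMX interface.

```python
# Minimal ROS 2 sketch: subscribe to joint states and log joint positions.
# Assumption: the robot publishes sensor_msgs/JointState on /joint_states,
# the conventional topic name; adjust to the actual interface if it differs.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState


class JointStateListener(Node):
    """Log each joint's name and position whenever a JointState message arrives."""

    def __init__(self):
        super().__init__('joint_state_listener')
        self.create_subscription(JointState, '/joint_states', self.on_joint_state, 10)

    def on_joint_state(self, msg: JointState):
        for name, position in zip(msg.name, msg.position):
            self.get_logger().info(f'{name}: {position:.3f} rad')


def main():
    rclpy.init()
    node = JointStateListener()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```

In a sourced ROS 2 environment, running this script while a robot or simulator is publishing joint states prints each joint's position as it updates.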

Our Vision for Physical AI

At ROBOTIS, our vision for Physical AI is to solve real-world industrial and societal problems that traditional, rule-based systems cannot. We believe that true intelligence emerges when robots learn from humans, adapt to dynamic environments, and perform safely and autonomously in the physical world. Through our scalable lineup, from entry-level manipulators to full-body robots, we aim to:

  1. Lower the barriers to real-world AI research
  2. Accelerate the development of intelligent machines that move, sense, and learn like humans
  3. Empower researchers to build robots that are not just smart in code, but capable in the real world

By embedding intelligence into physical systems, we take a step closer to a future where robots collaborate with people, extend human capabilities, and bring freedom from repetitive or dangerous labor.

What is Physical AI?

Physical AI refers to artificial intelligence that learns and acts through real-world physical interaction using robotic bodies.

Unlike traditional AI, which operates purely in simulation or digital environments, Physical AI:

  • Receives feedback through motion, contact, and force
  • Learns by interacting with the environment in real time
  • Uses robotic hardware to sense, move, and adapt

This approach enables learning that is grounded in reality — shaped by friction, gravity, uncertainty, and the complexity of the physical world.

Physical AI allows us to:

  • Train robots to perform real-world tasks with precision
  • Collect meaningful demonstrations through human guidance such as teleoperation or VR (see the sketch after this list)
  • Deploy policies that bridge simulation and reality
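
To make the demonstration-collection step concrete, below is a minimal sketch of recording time-aligned observation-action pairs during teleoperation. The read_observation() and read_teleop_action() helpers are hypothetical placeholders standing in for real robot and teleoperation interfaces; they are not part of Physical AI Tools.

```python
# Sketch of demonstration recording for imitation learning.
# The two read_* helpers are hypothetical placeholders, not a real API.
import time


def read_observation():
    """Placeholder: return the robot's current sensor reading (e.g. joint positions)."""
    return {'joint_positions': [0.0] * 7, 'timestamp': time.time()}


def read_teleop_action():
    """Placeholder: return the operator's commanded action from a teleop device."""
    return {'joint_targets': [0.0] * 7}


def record_episode(duration_s: float = 10.0, rate_hz: float = 20.0):
    """Collect time-aligned (observation, action) pairs at a fixed rate."""
    episode = []
    period = 1.0 / rate_hz
    end_time = time.time() + duration_s
    while time.time() < end_time:
        episode.append({'obs': read_observation(), 'action': read_teleop_action()})
        time.sleep(period)
    return episode


if __name__ == '__main__':
    demo = record_episode(duration_s=2.0)
    print(f'Recorded {len(demo)} observation-action pairs')
```

In a real pipeline, the placeholders would be replaced with the robot's sensor interface and the teleoperation device's command stream, and recorded episodes would be saved as a dataset for training imitation-learning policies.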

Embedding intelligence into physical systems opens the door to AI that can not only understand the world, but also act in it safely and effectively.

AI Worker and AI Manipulator are released under the Apache-2.0 license.