IO THE WORLD

IO-AI is an internationally leading provider of foundational scene data and solutions for embodied intelligent robots. Through embodied data collection, we provide data and basic tools for a wide range of application scenarios, empowering large embodied-intelligence models.

Our dataset is continuously growing

Collection: 200TB
Scenes: 200+
Annotations: 2 million
Duration: 4,000 hours
We recently released the General Purpose Teleoperation Platform.

Data Solutions

We offer comprehensive services spanning data collection, processing, and annotation through model training and deployment, enabling clients to handle data and develop models end to end.

Data Collection Hardware

Our in-house hardware supports collecting a wide range of data types, from real human data to robot teleoperation data, and outputs them in standard ROS format, ensuring high quality and accuracy.
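
Because collections are exported as standard ROS bags, they can be inspected with the usual ROS tooling. The sketch below (assuming ROS1 bag files; the file name is an illustrative placeholder) summarizes the contents of a recording:

```python
# Minimal sketch: iterate over a recorded ROS1 bag and count messages per topic.
# The file name is a placeholder for an actual collection session.
import rosbag

with rosbag.Bag("collection_session.bag") as bag:
    counts = {}
    for topic, msg, t in bag.read_messages():
        counts[topic] = counts.get(topic, 0) + 1

    for topic, n in sorted(counts.items()):
        print(f"{topic}: {n} messages")
```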

Data Annotation Platform

Our platform focuses on delivering high-quality annotation services while ensuring data security and privacy. It integrates seamlessly with major cloud storage platforms, keeping data both accessible and protected.

Model Deployment Solutions

We offer flexible, integrated solutions tailored to client needs, covering data collection, annotation, and model training, ensuring smooth integration and rapid project deployment.

Client Success Cases

OpenLoong Robot · Relman Intelligent · Ti5 Robotics · Discover Robotics

Contact Us

Phone

(+86) 0755-88658665

Company Address

Guangming, Shenzhen, CN

Hardware + Software Integration

General Purpose Teleoperation Platform

We have developed a system that controls robot movement through an action collection kit.

Low Cost
Adaptable to any robot configuration at very low cost, supporting full-body teleoperation of bipedal humanoid robots with dual arms and legs. SDKs are provided for Python, C++, ROS1, and ROS2 (a minimal ROS2 sketch follows this list).
Bilateral Control
Motion control from human to robot and perception feedback from robot to human, facilitating embodied model training based on real-time robot perception.
User-Friendly
An elegant interface presents robot perception, human-machine status, and data collection status at a glance.
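
As an illustration of the ROS2 interface, the sketch below relays operator joint data to a robot command topic with rclpy; the topic names and the pass-through relay are assumptions, not the platform's published interface:

```python
# Sketch of a ROS2 teleoperation bridge: operator joint data in, robot commands out.
# Topic names are illustrative placeholders; a real bridge would also retarget
# human joints onto the robot's kinematics.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState


class TeleopBridge(Node):
    def __init__(self):
        super().__init__("teleop_bridge")
        self.pub = self.create_publisher(JointState, "/robot/joint_commands", 10)
        self.create_subscription(JointState, "/operator/joint_states", self.relay, 10)

    def relay(self, msg: JointState):
        self.pub.publish(msg)  # pass-through; retargeting would happen here


def main():
    rclpy.init()
    rclpy.spin(TeleopBridge())


if __name__ == "__main__":
    main()
```
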
Software Platform

IO Platform - Data Annotation Management

Built on our efficient embodied data collection equipment and combined with the data annotation platform, trained models can accurately understand and execute sequences of complex commands.

Multiple Perspective Playback
Smooth playback of visual images, depth images, and 3D humanoid motion on computers or tablets for efficient annotation and review. The interface supports multiple natural languages, including Chinese and English, and is easy to use.
Secondary Data Verification
Automatically extracts annotated visual images, depth images, and 3D motion data, then combines algorithmic and manual checks to ensure accuracy.
Ensure Data Security
Supports public- and private-cloud deployment, with strict multi-level user permission control and full browsing and operation logs, ensuring data confidentiality and security.
Hardware Device

IO Motion Capture - Data Collection Device

IO Motion Capture is designed to accurately and conveniently record human posture. It can capture visual, audio, tactile, and force information.

Precise and Lightweight
Captures angular velocity, acceleration, and movement posture of various body parts for detailed motion data analysis.
Universal Data
Data uses the standard ROS protocol, so it can be mapped to other virtual avatars, such as VR characters or URDF 3D models rendered on the web (see the ROS sketch after this list).
Teleoperation Integration
The collected data can drive robots in other locations in real time, or be replayed later, so that the robot reproduces the same movements.
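
As a minimal example of how ROS-standard motion data can drive a URDF model, the ROS1 sketch below republishes captured joint angles as sensor_msgs/JointState, the message type consumed by standard URDF visualizers; the joint names and angles are placeholders:

```python
# Republish captured joint angles as sensor_msgs/JointState so that a URDF
# visualizer (robot_state_publisher + RViz, or a web viewer) can play them back.
# Joint names and values are placeholders for real capture data.
import rospy
from sensor_msgs.msg import JointState

rospy.init_node("mocap_retarget")
pub = rospy.Publisher("/joint_states", JointState, queue_size=10)
rate = rospy.Rate(50)  # 50 Hz playback

while not rospy.is_shutdown():
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = ["left_shoulder_pitch", "left_elbow"]  # placeholder joints
    msg.position = [0.3, 1.2]                         # angles from captured data
    pub.publish(msg)
    rate.sleep()
```
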
Custom Data

Multimodal Embodied Data

Data collection scenarios have expanded from homes and offices to the vertical settings where robots are likely to be deployed first, such as warehouses, restaurants, and hotels, supporting truly general-purpose robot policies as well as applications in any single vertical.

Data Diversity
Multichannel images, full-body motion trajectories, tactile feedback, and audio.
Natural Language Annotation
Each task group and each task has detailed natural language annotations that can be used for high-level task planning and low-level motion planning.
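
For illustration only, a language-annotated episode could be organized as below; the field names and values are hypothetical, not our actual annotation schema:

```python
# Hypothetical structure of a language-annotated episode: one high-level task-group
# instruction plus low-level instructions aligned to time spans in the recording.
episode = {
    "task_group": "tidy the dining table",  # high-level planning label
    "tasks": [
        {"instruction": "pick up the red cup", "start_s": 0.0, "end_s": 4.2},
        {"instruction": "place the cup in the sink", "start_s": 4.2, "end_s": 9.8},
    ],
}

for task in episode["tasks"]:
    print(f"{task['start_s']:.1f}-{task['end_s']:.1f}s: {task['instruction']}")
```
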
Participate in Open Source Program

In January 2024, we contributed open-source data to the Open X-Embodiment dataset.

In June 2024, we open-sourced 500,000 rule-based actions and 500 hours of freely collected data; join the waiting list for access.
