# isaac-unitree-lab

**Repository Path**: jayin/isaac-unitree-lab

## Basic Information

- **Project Name**: isaac-unitree-lab
- **Description**: https://github.com/erickun0125/isaac-unitree-lab
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: feat/gr00t
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-09-11
- **Last Updated**: 2025-09-11

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Isaac-Unitree-Lab: GR00T VLA Pipeline for Unitree G1

[![IsaacSim](https://img.shields.io/badge/IsaacSim-4.5.0-silver.svg)](https://docs.isaacsim.omniverse.nvidia.com/latest/index.html)
[![Python](https://img.shields.io/badge/python-3.10-blue.svg)](https://docs.python.org/3/whatsnew/3.10.html)
[![License](https://img.shields.io/badge/license-Apache--2.0-yellow.svg)](https://opensource.org/licenses/Apache-2.0)

An Isaac Lab-based implementation of the **data collection** and **GR00T inference** components of the NVIDIA GR00T VLA pipeline for Unitree G1 humanoid robots. This repository extends `unitree_sim_isaaclab` with tools for collecting high-quality demonstration data and for deploying trained GR00T policies to both simulated and real-world Unitree G1 robots.

## Pipeline Integration Overview

This repository is one component of the complete GR00T VLA pipeline:

**Full Pipeline Architecture:**

1. **Data Collection** ← **[This Repository]** - Custom Isaac Lab environments with multiple control interfaces
2. **Data Conversion** ← [External Repository] - Transform simulation data into the GR00T training format
3. **GR00T Training** ← [External Repository] - Fine-tune GR00T VLA models on the collected datasets
4. **GR00T Inference** ← **[This Repository]** - Deploy trained policies to Unitree G1 robots

## Key Features

### GR00T Action Provider (Core Component)

An action provider that integrates NVIDIA GR00T VLA models with the Unitree G1 simulation:

- **Action-Chunk Queue**: 16-step action horizon with caching of inferred chunks for efficient inference
- **Isaac Lab to GR00T Joint Configuration Mapping**: Conversion between GR00T's 28-joint space and the G1's 43-joint configuration
- **Real-Time Optimization**: 4-step denoising process for responsive policy execution
- **Robust Error Handling**: Comprehensive validation and debugging capabilities for production deployment

### Data Collection Action Providers

- **IKControllerActionProvider**: Differential-IK-based precise control for generating high-quality demonstrations
- **KeyboardActionProvider**: Real-time keyboard control interface for intuitive teleoperation

### Custom Isaac Lab Environments

- **Touch Red Block**: Single-object manipulation task for collecting basic manipulation demonstrations
- **Touch Red or Blue Block**: Multi-object selective manipulation for collecting advanced reasoning data

## Architecture

```
Isaac-Unitree-Lab/
├── action_provider/
│   ├── action_provider_gr00t.py           # GR00T VLA model integration (CORE)
│   ├── action_provider_IK_controller.py   # Differential IK controller
│   └── action_provider_keyboard.py        # Keyboard control interface
├── tasks/g1_tasks/
│   ├── touch_redblock_g1_29dof_dex3/             # Red block touch environment
│   └── touch_red_or_blue_block_g1_29dof_dex3/    # Multi-block selection environment
└── robots/unitree/                        # G1 robot configurations
```

## Usage

### Repository Components
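Before the individual components, the action-chunk queueing described under Key Features can be sketched in a few lines. This is a minimal illustration only, with hypothetical names (`ActionChunkQueue`, `infer_fn`); the repository's actual implementation lives in `action_provider_gr00t.py`. The idea: the VLA model returns a 16-step chunk of actions per inference call, the queue serves one action per control step, and inference re-runs only when the chunk is exhausted.

```python
from collections import deque

import numpy as np


class ActionChunkQueue:
    """Sketch of an action-chunk queue for a VLA policy (hypothetical API).

    The policy returns a chunk of `horizon` actions per inference call;
    one action is served per control step, and inference re-runs only
    once the cached chunk is exhausted.
    """

    def __init__(self, infer_fn, horizon: int = 16):
        self._infer_fn = infer_fn      # observation -> (horizon, action_dim) array
        self._horizon = horizon
        self._queue = deque()

    def get_action(self, observation) -> np.ndarray:
        if not self._queue:            # chunk exhausted: run inference once
            chunk = np.asarray(self._infer_fn(observation))
            assert chunk.shape[0] == self._horizon
            self._queue.extend(chunk)  # enqueue the whole 16-step chunk
        return self._queue.popleft()   # serve one action per control step


# Toy usage with a stand-in policy returning a 16x3 chunk of zeros
queue = ActionChunkQueue(lambda obs: np.zeros((16, 3)), horizon=16)
action = queue.get_action(observation=None)
print(action.shape)  # (3,)
```

With a 16-step horizon, the model is queried once every 16 control steps, which is what makes diffusion-style inference (the 4-step denoising mentioned above) affordable in a real-time loop.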
#### 1. Data Collection (Provided)

```bash
# Collect data using the IK controller
python sim_main.py \
    --device cpu \
    --enable_cameras \
    --task Isaac-Touch-RedBlock-G129-Dex3-Joint \
    --action_source ik_controller \
    --enable_dex3_dds \
    --robot_type g129 \
    --enable_episode_writer \
    --generate_data_dir ./g1_sim_data_red \
    --enable_video \
    --data_collection_task "Touch the red block with right hand"
```

#### 2. Data Conversion & GR00T Training (External)

*These steps are handled by external repositories in the complete pipeline:*

- Data conversion: transform collected Isaac Lab data into the GR00T training format
- Model training: fine-tune GR00T VLA models on the converted datasets
- Checkpoint generation: produce trained policy checkpoints for deployment

#### 3. GR00T Policy Deployment (Provided)

```bash
# Set the GR00T checkpoint path
CHECKPOINT_PATH="/path/to/fine-tuned/checkpoint-15000"

# Deploy to simulation - red-or-blue block task
python sim_main.py \
    --device cuda \
    --enable_cameras \
    --task Isaac-Touch-RedOrBlueBlock-G129-Dex3-Joint \
    --action_source policy \
    --enable_dex3_dds \
    --robot_type g129 \
    --checkpoint_path "$CHECKPOINT_PATH" \
    --policy_device cuda \
    --task_description "Touch the blue block with right hand"
```

## Environment Configuration

### Touch Red Block Environment

- **Objective**: Touch and move the red block to demonstrate basic manipulation skills
- **Success Criteria**: Block displacement > 0.01 m from its initial position
- **Observations**: 29-DOF joint states + DEX3 finger states + camera images

### Touch Red or Blue Block Environment

- **Objective**: Selective manipulation of the specified colored block for advanced reasoning
- **Target Selection**: the `target_block` parameter specifies "red" or "blue"
- **Success Criteria**: Only the selected block moves > 0.01 m
- **Dynamic Placement**: Random positioning of the red and blue blocks for variability

## Citation

If you use this work in your research, please cite:

```bibtex
@misc{isaac_rl_quad_humanoid,
  title={Isaac Lab Unitree Environments for G1},
  author={Kyungseo Park},
  year={2025},
  note={feat/gr00t branch: Integrate NVIDIA GR00T Action Provider}
}
```

## License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.

## References

- [Isaac Lab Documentation](https://isaac-lab.readthedocs.io/)
- [NVIDIA GR00T](https://developer.nvidia.com/gr00t)
- [Unitree G1 Robot](https://www.unitree.com/g1/)

---

**Contact**: erickun0125@snu.ac.kr
**Project Duration**: 2025
**Base Repository**: unitree_sim_isaaclab
**Integrated Model**: NVIDIA GR00T-N1.5 VLA
**Hardware Platform**: Unitree G1 29DOF
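As a closing note, the displacement-based success criterion described under Environment Configuration (a block counts as touched once it has moved more than 0.01 m from its spawn position) can be sketched as follows. This is a hedged illustration with hypothetical function names, not the repository's actual termination or reward code.

```python
import numpy as np

# Threshold from the environment configuration: a block counts as
# "moved" once displaced more than 0.01 m from its initial position.
DISPLACEMENT_THRESHOLD_M = 0.01


def touch_success(initial_pos: np.ndarray,
                  current_pos: np.ndarray,
                  threshold: float = DISPLACEMENT_THRESHOLD_M) -> bool:
    """True once the block has moved more than `threshold` meters."""
    return float(np.linalg.norm(current_pos - initial_pos)) > threshold


def selective_success(initial: dict, current: dict, target: str) -> bool:
    """Red-or-blue variant: only the selected block may move."""
    moved = {name: touch_success(initial[name], current[name])
             for name in initial}
    # Success requires the target block to move and every other block to stay put.
    return moved[target] and not any(
        m for name, m in moved.items() if name != target)


# Toy usage: the red block moved 0.02 m, the blue block did not move
init = {"red": np.zeros(3), "blue": np.array([0.3, 0.0, 0.0])}
cur = {"red": np.array([0.02, 0.0, 0.0]), "blue": np.array([0.3, 0.0, 0.0])}
print(selective_success(init, cur, target="red"))  # True
```

The single-block Touch Red Block task only needs `touch_success`; the selective variant additionally penalizes motion of the non-target block, which is what forces the policy to actually ground the "red" or "blue" instruction.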