# VITA

**Repository Path**: mirrors_Tencent/VITA

## Basic Information

- **Project Name**: VITA
- **Description**: The official implementation of VITA, VITA-1.5, LongVITA, and VITA-Audio.
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: VITA-E
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-09-08
- **Last Updated**: 2026-03-21

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# VITA-E: Natural Embodied Interaction with Concurrent Seeing, Hearing, Speaking, and Acting
🌐 Project Page · 📖 Paper · 🤖 Model Weights · 🚀 Live Demo

| πŸ—ΊοΈ Overview | πŸ“Š Experimental Results | ⚑ Get Started | πŸ’» Inference & Demo | πŸ”₯ Training |


VITA-E can handle various complex interactive scenarios, including concurrent commands and near-real-time interruption.
📽️ VITA-E Demo Show! Here We Go! 🔥

## πŸ—ΊοΈ VITA-E Overview
We are excited to present **VITA-E**, which incorporates a series of advancements:

1. **Dual-Model Framework for Seamless Interaction**. VITA-E introduces a dual-model core, in which an "Active Model" executes tasks while a "Listening Model" stands ready for new commands.
2. **Innovative "Model-as-Controller" Paradigm**. We pioneer a "model-as-controller" approach in which the Vision-Language Model is fine-tuned to generate special tokens that act as direct system-level commands, enabling precise, reliable, and immediate control over system actions.
3. **Smooth Human-Computer Interaction**. In this way, VITA-E supports smooth two-way voice interaction: it can reply while executing, accept voice interruption during actions, and transition naturally between actions. VITA-E supports both English and Chinese.
4. **Strong Performance in Critical Interactive Scenarios**. Tested on a physical humanoid robot, VITA-E demonstrated high reliability and responsiveness, achieving a high success rate across multiple interactive and operational tasks, and it is compatible with a wide range of mainstream VLA models.
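The dual-model idea above can be sketched as a small concurrency pattern. The sketch below is purely illustrative and not code from this repository: the function names, the `"<STOP>"` control token, and the use of a thread plus an event flag are all assumptions, chosen only to show how a listening component can preempt an active one via a special token.

```python
import queue
import threading

# Hypothetical sketch (not part of VITA-E): an "Active Model" works through a
# task while a "Listening Model" thread watches the command stream and can
# preempt it by raising an interrupt when it sees a control token.

def active_model(task_steps, interrupt):
    done = []
    for step in task_steps:
        if interrupt.is_set():        # a system-level command arrived
            break
        done.append(step)             # stand-in for executing one action chunk
    return done

def listening_model(commands, interrupt):
    while True:
        cmd = commands.get()
        if cmd == "<STOP>":           # stand-in for a VLM-emitted control token
            interrupt.set()
            return

commands = queue.Queue()
interrupt = threading.Event()
listener = threading.Thread(target=listening_model, args=(commands, interrupt))
listener.start()
commands.put("<STOP>")                # the user interrupts immediately
listener.join()
executed = active_model(["grasp", "lift", "place"], interrupt)
print(executed)                       # -> [] : the action loop stops before any step runs
```

In this toy version the interrupt is just a flag checked between action chunks; the real system additionally handles role swapping between the two models and natural action transitions.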
## 📊 Experimental Results

- **Success rate comparison of VITA-E and baseline models on two fundamental manipulation tasks.**

- **Key Interactive Performance.**
| Speech Interruption | Task Switching | Emergency Stop | Avg. Voice Response Latency |
| :---: | :---: | :---: | :---: |
| 100% | 93.3% | 100% | 2.26 s |
## ⚡ Get Started

Install the conda environment:

```bash
git clone https://github.com/VITA-MLLM/VITA-E
cd VITA-E
conda create -n vita_e python=3.10 -y
conda activate vita_e
pip install --upgrade pip
pip install -r vita_e_requirements.txt
pip install flash-attn --no-build-isolation
```

Download the required model weights to a local path: [VITA-E](https://huggingface.co/VITA-MLLM/VITA-E).

```bash
huggingface-cli download VITA-MLLM/VITA-E --local-dir checkpoints/VITA-E
```

## 💻 Inference & Demo

### 📝 Inference

Run the inference script:

```bash
python inference_vita_e.py \
    --model_path_vlm checkpoints/VITA-E/vita_vla_finetune \
    --model_path_policy checkpoints/VITA-E/vita_gr00t_robot
```

### 📝 Demo

#### Web Demo

You can interact with our VITA-E web demo using mocked robot state data to experience the features, with no physical robot required. (A total of 48 GB of GPU memory is needed.)

Prepare a VAD (Voice Activity Detection) module: download [silero_vad.onnx](https://github.com/snakers4/silero-vad/tree/v4.0/files) and [silero_vad.jit](https://github.com/snakers4/silero-vad/tree/v4.0/files), and place these files in the `./demo/wakeup_and_vad/resource/` directory.

```bash
python -m vita_e.server_vla_vita \
    --model_path_vlm checkpoints/VITA-E/vita_vla_finetune \
    --model_path_policy checkpoints/VITA-E/vita_gr00t_robot \
    --ip 0.0.0.0 \
    --port 8081
```

Wait about three minutes for all modules to load completely, then open `127.0.0.1:8081` in a browser on your server and enjoy.

#### Real Robot Demo

Deploy the server script on your server:

```bash
python -m vita_e.server_vla_vita \
    --model_path_vlm checkpoints/VITA-E/vita_vla_finetune \
    --model_path_policy checkpoints/VITA-E/vita_gr00t_robot \
    --ip 0.0.0.0 \
    --port 8081
```

Start the client script on the robot client:

```bash
cd demo
python vla_robot_client.py
```

## 🔥 Training

Our VITA-E model is built upon the VITA-1.5 and Isaac-GR00T architectures.
We leverage VITA-1.5 as the VLM component and integrate Isaac-GR00T's pre-trained diffusion action expert as the action model. The training process involves two stages: first, we fine-tune the VLM component and integrate it into the Isaac-GR00T framework by replacing the original VLM; then, we perform end-to-end fine-tuning of the complete model on VLA data. Please refer to [VITA-1.5](https://github.com/VITA-MLLM/VITA) and [Isaac-GR00T](https://github.com/NVIDIA/Isaac-GR00T) for more details.

## ✒️ Citation

If you find our work helpful for your research, please consider citing it.

```bibtex
@article{liu2025vitae,
  title={VITA-E: Natural Embodied Interaction with Concurrent Seeing, Hearing, Speaking, and Acting},
  author={Xiaoyu, Liu and Chaoyou, Fu and Chi, Yan and Chu, Wu and Haihan, Gao and Yi-Fan, Zhang and Shaoqi, Dong and Cheng, Qian and Bin, Luo and Xiuyong, Yang and Guanwu, Li and Yusheng, Cai and Yunhang, Shen and Deqiang, Jiang and Haoyu, Cao and Xing, Sun and Caifeng, Shan and Ran, He},
  journal={arXiv preprint arXiv:2510.21817},
  year={2025}
}
```

## 📜 More Research

Explore our related research:

- **[VITA-1.5]** [VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction](https://github.com/VITA-MLLM/VITA)
- **[VITA-1.0]** [VITA: Towards Open-Source Interactive Omni Multimodal LLM](https://vita-home.github.io/)
- **[Awesome-MLLM]** [A Survey on Multimodal Large Language Models](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models)
- **[MME]** [MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation)
- **[Video-MME]** [Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis](https://github.com/BradyFU/Video-MME)

## 👏 Acknowledgments

VITA-E is built with reference to the following outstanding works: [Isaac-GR00T](https://github.com/NVIDIA/Isaac-GR00T) and [Lerobot](https://github.com/huggingface/lerobot). Thanks!