EmbodiChain is an end-to-end, GPU-accelerated framework for Embodied AI. It streamlines research and development by unifying high-performance simulation, real-to-sim data pipelines, modular model architectures, and efficient training workflows. This integration enables rapid experimentation, seamless deployment of intelligent agents, and effective Sim2Real transfer for real-world robotic systems.
**Note:** EmbodiChain is in Alpha and under active development:
- More features will be added continually over the coming months; see the roadmap for more details.
- Since this is an early release, we welcome feedback (bug reports, feature requests, etc.) via GitHub Issues.
- 🚀 High-Fidelity GPU Simulation: Realistic physics for rigid & deformable objects, advanced ray-traced sensors, all GPU-accelerated for high-throughput batch simulation.
- 🤖 Unified Robot Learning Environment: Standardized interfaces for Imitation Learning, Reinforcement Learning, and more.
- 📊 Scalable Data Pipeline: Automated data collection, efficient processing, and large-scale generation for model training.
- ⚡ Efficient Training & Evaluation: Online data streaming, parallel environment rollouts, and modern training paradigms (a schematic rollout sketch follows this list).
- 🧩 Modular & Extensible: Easily integrate new robots, environments, and learning algorithms.
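As a rough illustration of the batched-rollout pattern referred to above, the sketch below steps many toy environments in a single vectorized call. The `BatchedToyEnv` class and its methods are hypothetical names invented for this example, not EmbodiChain's actual API; they only show the general shape of a vectorized, gym-style loop that high-throughput batch simulation enables.

```python
import numpy as np

# Generic, self-contained sketch of batched environment rollouts.
# All names here are illustrative placeholders, NOT EmbodiChain's API.

class BatchedToyEnv:
    """Toy vectorized environment: N independent 1-D point-mass tasks."""

    def __init__(self, num_envs: int):
        self.num_envs = num_envs
        self.state = np.zeros(num_envs)

    def reset(self) -> np.ndarray:
        self.state = np.random.uniform(-1.0, 1.0, self.num_envs)
        return self.state.copy()

    def step(self, actions: np.ndarray):
        # All environments advance in one vectorized call; this is the
        # pattern that GPU-accelerated batch simulation scales up.
        self.state = self.state + 0.1 * actions
        rewards = -np.abs(self.state)     # reward: stay near the origin
        dones = np.abs(self.state) > 2.0  # terminate if the state drifts too far
        return self.state.copy(), rewards, dones


if __name__ == "__main__":
    env = BatchedToyEnv(num_envs=1024)
    obs = env.reset()
    for _ in range(100):
        actions = -np.sign(obs)  # trivial proportional policy toward the origin
        obs, rewards, dones = env.step(actions)
    print("mean reward after 100 steps:", rewards.mean())
```

In a GPU-accelerated simulator the loop structure is the same, but per-environment state and physics live on the device, so thousands of environments can advance in lockstep.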
The figure below illustrates the overall architecture of EmbodiChain:
To get started with EmbodiChain, follow the installation and quick-start instructions in the repository's documentation.
If you find EmbodiChain helpful for your research, please consider citing our work:
@misc{EmbodiChain,
  author = {EmbodiChain Developers},
  title  = {EmbodiChain: An end-to-end, GPU-accelerated, and modular platform for building generalized Embodied Intelligence},
  month  = {November},
  year   = {2025},
  url    = {https://github.com/DexForce/EmbodiChain}
}

@misc{GS-World,
  author  = {Liu, G. and Deng, Y. and Liu, Z. and Jia, K.},
  title   = {GS-World: An Efficient, Engine-driven Learning Paradigm for Pursuing Embodied Intelligence using World Models of Generative Simulation},
  month   = {October},
  year    = {2025},
  journal = {TechRxiv}
}
