EgoActor: Grounding Task Planning into Spatial-aware Egocentric
Actions for Humanoid Robots via Visual-Language Models

Beijing Academy of Artificial Intelligence (BAAI)

Abstract

Deploying humanoid robots in real-world settings is fundamentally challenging, as it demands tight integration of perception, locomotion, and manipulation under partial observations and dynamically changing environments, as well as robust transitions between sub-tasks of different types. Towards addressing these challenges, we propose a novel task, EgoActing, which requires directly grounding high-level instructions into diverse, precise, spatially aware humanoid actions. We further instantiate this task by introducing EgoActor, a unified and scalable vision-language model (VLM) that predicts locomotion primitives (e.g., walk, turn, move sideways, change height), head movements, manipulation commands, and human-robot interactions to coordinate perception and execution in real time. We leverage broad supervision over egocentric RGB-only data from real-world demonstrations, spatial-reasoning question answering, and simulated-environment demonstrations, enabling EgoActor to make robust, context-aware decisions and perform fast action inference (under 1 s) with both 8B and 4B parameter models. Extensive evaluations in both simulated and real-world environments demonstrate that EgoActor effectively bridges abstract task planning and concrete motor execution, while generalizing across diverse tasks and unseen environments.
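To make the EgoActing interface concrete, the sketch below illustrates one plausible input/output contract implied by the abstract: an egocentric RGB frame plus a high-level instruction goes in, and a single spatially parameterized action primitive comes out. All names (`PrimitiveType`, `EgoAction`, `ground_instruction`) and the parameterization are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of an EgoActing-style grounding interface.
# Names and fields are illustrative assumptions, not from the paper.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class PrimitiveType(Enum):
    """Action families mentioned in the abstract."""
    WALK = auto()           # locomotion: walk forward/backward
    TURN = auto()           # locomotion: rotate in place
    MOVE_SIDEWAYS = auto()  # locomotion: lateral step
    CHANGE_HEIGHT = auto()  # locomotion: change body height
    HEAD_MOVE = auto()      # redirect the egocentric camera
    MANIPULATE = auto()     # manipulation command
    INTERACT = auto()       # human-robot interaction


@dataclass
class EgoAction:
    """One grounded, spatially aware action step (assumed schema)."""
    primitive: PrimitiveType
    distance_m: Optional[float] = None   # e.g., walk 1.2 m
    angle_deg: Optional[float] = None    # e.g., turn 30 degrees
    target: Optional[str] = None         # e.g., "red mug on the counter"


def ground_instruction(rgb_frame: bytes, instruction: str) -> EgoAction:
    """Placeholder for the VLM call: egocentric RGB + instruction -> action.

    A real system would run the 4B/8B model here; this stub only shows the
    assumed contract, returning a dummy action for illustration.
    """
    return EgoAction(PrimitiveType.WALK, distance_m=1.0, target=instruction)


if __name__ == "__main__":
    action = ground_instruction(b"<egocentric-rgb-bytes>", "bring me the red mug")
    print(action)
```

A downstream controller would then map each predicted primitive onto the robot's locomotion or manipulation stack, which is outside the scope of this sketch.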

Demo Videos

BibTeX

@article{bai2026EgoActor,
  title={{E}go{A}ctor: {G}rounding Task Planning into Spatial-aware Egocentric Actions for Humanoid Robots via Visual-Language Models},
  author={Yu Bai and Mingming Yu and Chaojie Li and Ziyi Bai and Xinlong Wang and Börje F. Karlsson},
  journal={arXiv preprint arXiv:2602.04515},
  year={2026},
  url={https://arxiv.org/abs/2602.04515}
}