Robot AI Researcher – Advanced Robotics

As a Robot AI Researcher, you will spearhead innovation in Vision-Locomotion-Action (VLA) systems for quadrupedal and humanoid robots by developing advanced AI models. As the largest robotics company based in India, Addverb offers the opportunity to work alongside brilliant minds in a collaborative, multidisciplinary, and dynamic culture. With international exposure and a flexible work environment built on freedom with responsibility, Addverb provides endless opportunities for growth and learning.

Role

The purpose of this role is to spearhead innovation in Vision-Locomotion-Action (VLA) systems for quadrupedal and humanoid robots by developing advanced AI models that seamlessly integrate visual perception, motion planning, and action execution. The Robot AI Researcher will create intelligent, adaptive control systems that enable robots to operate effectively in complex, dynamic, and unstructured environments, bridging research with impactful deployment and contributing to the next generation of embodied AI and autonomous robotics.

  • EMEA
  • Advanced Robotics
  • Full-Time Role

Responsibilities

  • Design and implement VLA pipelines that integrate visual perception, locomotion control, and action planning for legged and humanoid robots.
  • Develop learning-based control policies using reinforcement learning, imitation learning, and model-based approaches.
  • Build robust sensor fusion systems that combine vision (RGB, depth, and event cameras), IMUs, and proprioception for real-time decision-making.
  • Conduct sim-to-real transfer research to ensure scalable deployment of AI models from simulation to physical robots.
  • Collaborate with hardware teams to optimize AI models for onboard compute and real-time performance.
  • Publish research in top-tier venues and contribute to open-source tools and datasets.

Key Skills and Qualifications

  • Strong experience in embodied AI, particularly in VLA systems for legged or humanoid robots.
  • Proficiency in Python and C++, with experience in PyTorch or TensorFlow.
  • Hands-on experience with robotic platforms such as Unitree, ANYmal, Boston Dynamics, or Digit.
  • Familiarity with simulation environments (e.g., Isaac Gym, MuJoCo, Gazebo) and ROS/ROS2.
  • Experience with end-to-end visuomotor policy learning.
  • Knowledge of bio-inspired locomotion and whole-body control.
  • Experience with multi-agent or multi-task learning in robotics.
  • PhD or Master’s in Robotics, Computer Vision, Machine Learning, or a related field (recently completed or currently pursuing).