About Me

I am a robotics and AI researcher, currently leading Embodied AI at Hello Robot. In the past, I’ve worked at NVIDIA Research and at FAIR, the fundamental AI research arm of Meta. I’m interested in how we can enable robots to work alongside humans to perform complex, multi-step tasks, using a combination of learning and planning.

I received my PhD in Computer Science from Johns Hopkins University in Baltimore, Maryland, in 2018, focusing on using learning to create powerful task and motion planning capabilities for robots operating in human environments.

From 2018 to 2022, I was with NVIDIA at their Seattle robotics lab. Recently, I have been working on approaches that tie together language, perception, and action in order to make robots into robust, versatile assistants for a variety of applications. I am also interested in human-robot interaction.

In 2022, I joined the Embodied AI team at FAIR Labs. My work has looked at how we can make robots into useful, general-purpose mobile manipulators in homes. In particular, I pushed a challenge we called open-vocabulary mobile manipulation, or OVMM, which holds that robots should be able to pick and place any object in any environment. We ran a NeurIPS competition to encourage people to build reproducible robot systems that can perform this OVMM task.

You can also find a list of my papers on Google Scholar, or email me at chris.paxton.cs (at) gmail.com.



Work Experience

  • Senior Robotics Research Scientist, NVIDIA (2020-2022)
  • Robotics Research Scientist, NVIDIA (2019-2020)
  • Postdoc at NVIDIA, in their Seattle Robotics Lab (2018-2019)
  • PhD student at Johns Hopkins University (2012-2018): representing tasks for collaborative robots
  • Research Engineer Co-op at Zoox (2016-2017): planning for autonomous vehicles
  • Lockheed Martin (Summer 2012): security with mobile devices
  • US Army Research Lab (2010-2012): pedestrian detection
  • Johns Hopkins Applied Physics Lab: development of a zero gravity robot prototype


In Spring 2018, I successfully defended my PhD in Computer Science at Johns Hopkins University in Baltimore, Maryland. My thesis, titled Creating Task Plans for Collaborative Robots, covers both our CoSTAR system and the novel algorithms we developed for robots that can use expert knowledge to plan. I did my research in the Computational Interaction and Robotics Lab with Greg Hager.

I did my undergraduate work at the University of Maryland, College Park, where I earned a BS in Computer Science with a minor in Neuroscience, graduating with University honors as part of the Gemstone program for young researchers.

During Spring 2016, I led the JHU team’s successful entry in the KUKA Innovation Award competition with our updated CoSTAR system. CoSTAR integrates a user interface, perception, and abstract planning to create robust task plans. We have since used CoSTAR on a wide variety of problems.

KUKA award finalists at KUKA College Gersthofen in December 2015.

Teaching Experience

  • Taught EN.500.111 “HEART: Making Robots our Friends: Technologies for Human-Robot Collaboration” in Fall 2015.
  • Teaching assistant for EN.600.436/636 “Algorithms for Sensor Based Robotics” in Spring 2015.

Find me on Twitter, Github, or Mastodon!