I am a robotics and AI researcher, currently leading Embodied AI at Hello Robot. In the past, I've worked at NVIDIA Research and at FAIR, the fundamental AI research arm of Meta. I'm interested in ways we can allow robots to work alongside humans to perform complex, multi-step tasks, using a combination of learning and planning.
I got my PhD in Computer Science in 2018 from the Johns Hopkins University in Baltimore, Maryland, focusing on using learning to create powerful task and motion planning capabilities for robots operating in human environments.
From 2018-2022, I was with NVIDIA, at their Seattle robotics lab. There, I worked on approaches that tie together language, perception, and action in order to make robots into robust, versatile assistants for a variety of applications. Other areas of interest include human-robot interaction.
In 2022, I joined the Embodied AI team at FAIR Labs. My work has looked at how we can make robots into useful, general-purpose mobile manipulators in homes. In particular, I pushed a challenge we called open-vocabulary mobile manipulation, or OVMM, which says robots should be able to pick and place any object in any environment. We ran a NeurIPS competition to encourage people to build reproducible robots which can perform this OVMM task. I am continuing in this direction at Hello Robot.
You can also find a list of my papers on Google Scholar, or email me at chris.paxton.cs (at) gmail.com. I'm very active on Twitter/X, where I post about my research and other interesting things in robotics and AI. I also have LinkedIn, Bluesky, and Mastodon, though these are updated less frequently. I post longer-form writing on It Can Think, my Substack blog.
News and Links
- AI models let robots carry out tasks in unfamiliar environments - MIT Technology Review
- How LLMs are Ushering In A New Era of Robotics - VentureBeat
- This robot can tidy a room without any help - MIT Technology Review
- Meta’s OK-Robot performs zero-shot pick-and-drop in unseen environments - VentureBeat
- NeurIPS 2023 HomeRobot: Open Vocabulary Mobile Manipulation (OVMM) Challenge - AI Habitat
- Best paper finalist for SORNet - ICRA 2022 workshop on scaling robot learning - ICRA 2022
- SORNet nominated for best systems paper - CoRL 2021
- ALFRED Challenge results - CVPR 2021 - my team in 1st place - ALFRED
- Maker’s Dozen: NVIDIA Researchers to Present Advances in Human-Robot Interaction and More at ICRA - NVIDIA
- BBC: Dog training technique helps robot learn and other news - BBC
- Teaching Robots through Positive Reinforcement - TechCrunch
- New AI Trains Robots like Dogs - Psychology Today
- Dog Training Methods Help JHU Teach Robots to Learn New Tricks - Communications of the ACM
- Robotics Reaps Rewards at ICRA - NVIDIA
- NVIDIA Researchers Use AI to Teach Robots How to Improve Human-to-Robot Interactions - NVIDIA
- NVIDIA researchers use AI to teach robots how to hand objects to humans - VentureBeat
- Video Feature on the Seattle Robotics Lab
- NVIDIA Opens Robotics Research Lab in Seattle - NVIDIA
- JHU Robotics Team Wins KUKA Innovation Award - MD - Invited blog post by me
- The finalists of the Innovation Award 2016 have been selected - KUKA
- KUKA Innovation Award 2016: Flexible Manufacturing Challenge - KUKA
Papers
- Automated Generation of Robotic Planning Domains from Observations - IROS 2021
- Sim-to-Real Task Planning and Execution from Perception via Reactivity and Recovery (video) (experiments) - IROS 2021
- NeRP: Neural Rearrangement Planning for Unknown Objects - RSS 2021
- Reactive Human-to-Robot Handovers of Arbitrary Objects (video) (website) - ICRA 2021 Best HRI Paper winner
- Alternate Paths Planner (APP) for Provably Fixed-time Manipulation Planning in Semi-Structured Environments - ICRA 2021
- “Good Robot!”: Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer (video) - IROS+RAL 2020
- Human Grasp Classification for Reactive Human-to-Robot Handovers (video) - IROS 2020
- Collaborative Behavior Models for Optimized Human-Robot Teamwork (video) - IROS 2020
- Transferable Task Execution from Pixels through Deep Planning Domain Learning - ICRA 2020
- 6-DOF Grasping for Target-driven Object Manipulation in Clutter (video) - ICRA 2020
- Motion Reasoning for Goal-Based Imitation Learning - ICRA 2020
- Online Replanning in Belief Space for Partially Observable Task and Motion Problems (video) (presentation) - ICRA 2020
- Conditional Driving from Natural Language Instructions - CoRL 2019
- Representing Robot Task Plans as Robust Logical-Dynamical Systems - IROS 2019
- The CoSTAR Block Stacking Dataset: Learning with Workspace Constraints - IROS 2019
- Visual Robot Task Planning - ICRA 2019
- Evaluating Methods for End-User Creation of Robot Task Plans - IROS 2018
- Combining neural networks and tree search for task and motion planning in challenging environments - IROS 2017
- CoSTAR in Surgery: A Cross-platform User Interface for Surgical Robot Task Specification - IROS 2017 workshop
- CoSTAR: Instructing Collaborative Robots through Behavior Trees and Vision - ICRA 2017
- Do What I Want, Not What I Did: Imitation of Skills by Planning Sequences of Actions - IROS 2016
- Semi-autonomous telerobotic assembly over high-latency networks - HRI 2016
- Towards Robot Task Planning From Probabilistic Models of Human Skills - AAAI 2016 workshop
- An incremental approach to learning generalizable robot tasks from human demonstration - ICRA 2015
- A framework for end-user instruction of a robot assistant for manufacturing - ICRA 2015
- Developing predictive models using electronic medical records: challenges and pitfalls - AMIA 2013
Patents
- Robot Control using Deep Learning
- Imitation Learning System
- Trajectory Generation using Temporal Logic and Tree Search
Invited Talks
- Connecting TAMP to the Real World (video) - RSS 2020 Workshop on Learning in Task and Motion Planning
- From Pixels to Task Planning and Execution - IROS 2019 Workshop on Semantic Policy and Action Representations
- Building Reactive Task Plans for Real-World Robot Applications - IROS 2019 Workshop on Behavior Trees for Robotic Systems
Work Experience
- Leading Embodied AI at Hello Robot (2024-present)
- AI Research Scientist, FAIR Labs (2022-2024)
- Senior Robotics Research Scientist, NVIDIA (2020-2022)
- Robotics Research Scientist, NVIDIA (2019-2020)
- Postdoc at NVIDIA, in their Seattle Robotics Lab (2018-2019)
- PhD student at Johns Hopkins University (2012-2018): representing tasks for collaborative robots
- Research Engineer Co-op at Zoox (2016-2017): planning for autonomous vehicles
- Lockheed Martin (Summer 2012): security with mobile devices
- US Army Research Lab (2010-2012): pedestrian detection
- Johns Hopkins Applied Physics Lab: development of a zero gravity robot prototype
Fun Stuff
- Infinite Quiz Machine - Take a bunch of personality quizzes procedurally generated by AI! Learn what kind of crustacean you are, what kind of door you are, and more!
- Boat vs Kraken Game - Use the arrow keys to move the boat and avoid the kraken. Collect the coins for points. An experiment in using AI to generate a game.
You can also check out It Can Think, my Substack blog, where I post about AI, robotics, and other interesting things.
Education
In Spring 2018, I successfully defended my PhD in Computer Science at the Johns Hopkins University in Baltimore, Maryland. My PhD thesis is titled Creating Task Plans for Collaborative Robots, and it covers both our CoSTAR system and the novel algorithms we developed for building robots that can use expert knowledge to plan. I did my research in the Computational Interaction and Robotics Lab with Greg Hager.
I did my undergraduate work at the University of Maryland, College Park, where I got a BS in Computer Science with a minor in Neuroscience, graduating with University honors as part of their Gemstone program for young researchers.
During Spring 2016, I led the JHU team's successful entry into the KUKA Innovation Award competition with our updated CoSTAR system. CoSTAR integrates a user interface, perception, and abstract planning to create robust task plans. We have since used CoSTAR on a wide variety of problems.
Teaching Experience
- Taught EN.500.111 “HEART: Making Robots our Friends: Technologies for Human-Robot Collaboration” in Fall 2015.
- Teaching assistant for EN.600.436/636: “Algorithms for Sensor Based Robotics”, Spring 2015