Here is the more polished video made by our master's student, Baichuan Jiang.

You can see the DVRK being used as an automated tool for surgical debridement. We think that CoSTAR's capabilities will make it particularly interesting for people working in the area.

CoSTAR on the DVRK

Added some new videos of CoSTAR running on the DVRK, with one of our Robotics Master's students.

You can check them out here: DVRK CoSTAR pick and place, cutting tasks

CoSTAR user study

One of the most interesting things we have found out lately is that users actually like using Behavior Trees!

Now, I’ve spent a while researching them, and I think behavior trees are pretty nice. I was a little skeptical that ordinary people, not PhD students in computer science, would enjoy using the things – but so far we have had a great reaction from all the people involved in our newest user study. And CoSTAR, of course, continues to get better, easier to use, and more powerful as we add more capabilities to it.
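For readers who haven't run into behavior trees before, here is a minimal sketch of the core idea in plain Python. This is illustrative only, not CoSTAR's actual implementation: a tree where sequence nodes run their children until one fails, and selector nodes try children until one succeeds.

```python
# Minimal behavior tree sketch -- illustrative only, not CoSTAR's implementation.
# Each node's tick() returns "SUCCESS" or "FAILURE".

class Sequence:
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == "FAILURE":
                return "FAILURE"
        return "SUCCESS"

class Selector:
    """Tries children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == "SUCCESS":
                return "SUCCESS"
        return "FAILURE"

class Action:
    """Leaf node wrapping an arbitrary callable that returns True/False."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def tick(self):
        return "SUCCESS" if self.fn() else "FAILURE"

# A toy pick-and-place task: grasp, then fall back between placement options.
tree = Sequence(
    Action("grasp", lambda: True),
    Selector(
        Action("place_on_table", lambda: False),  # first option fails...
        Action("place_in_bin", lambda: True),     # ...so the selector falls through
    ),
)
print(tree.tick())  # -> SUCCESS
```

Part of why non-experts seem to take to this structure is that each subtree reads like a small recipe: do this, and if it doesn't work, try that.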

One of my goals as a researcher is to understand what people need in order to be able to interact with robots. I envision a future where people interact with robots every day to accomplish a wide variety of tasks, whether in industry or elsewhere. For this to work, though, we need to be able to build robotic systems that don't require a PhD to use.

Our latest study breaks test subjects up into four different groups. Each group gets a different version of the system:

  • no perception and no planning,
  • perception but no planning,
  • planning but no perception, or
  • perception and planning integrated through our high-level “SmartMove” functionality.

Users found the first case easiest to understand, but tended to perform better the more powerful the capabilities we gave them.

This video shows what the different cases look like and explains our study a little bit more. It is intended as a companion for the paper, so it won’t go into much more depth than this blog post.

Take a look at the preliminary version of the paper on arXiv for more details if you’re curious. As a field, I think robotics has a long way to go – but we’re getting there.

CoSTAR Release

So we set up an initial CoSTAR release. The installation instructions available on Github should all work and be fairly easy to follow.

At this point, we have a working version of the system without many of the weird UI bugs we used to have. These included, in prior versions:

  • UI elements detaching
  • Massive memory leaks
  • Buttons that did not do what people expected

In addition, Felix implemented some functionality that makes the robot move a little more smoothly and naturally when being servoed to arbitrary positions.

Take a look; the latest release is available here.

Updates to CoSTAR

So lately we have been making CoSTAR into a bigger, better, and more usable system, with all kinds of new capabilities.

One of the biggest things we have going on right now is an ongoing update of the entire CoSTAR stack, the open source project containing all of our code. The UI is now a bit smoother, SmartMove works better, and most of all we are continuing to work on support for ROS Kinetic and for the DVRK. Our testbed is still a dressed-up UR5, seen here:


We’ve been putting the system in the hands of ordinary, non-researcher users for the first time in quite a while. Our user interface is undergoing some refinements, but these days we’ve pretty much settled on using both RVIZ and a touchscreen-based UI for teaching the robot.

CoSTAR Interface

Take a look at the code if you are interested.

And all this stuff is in our paper:

  @inproceedings{paxton2017costar,
    title={Co{STAR}: Instructing Collaborative Robots with Behavior Trees and Vision},
    author={Paxton, Chris and Hundt, Andrew and Jonathan, Felix and Guerin, Kelleher and Hager, Gregory D},
    booktitle={Robotics and Automation (ICRA), 2017 IEEE International Conference on (to appear)},
    note={Available as arXiv preprint arXiv:1611.06145},
  }

I’ll be presenting it at ICRA this year.