One of the most interesting things we have found out lately is that users actually like using Behavior Trees!
Now, I’ve spent a while researching behavior trees, and I think they are pretty nice. I was a little skeptical that ordinary people, not PhD students in computer science, would enjoy using them – but so far we have had a great reaction from the people involved in our newest user study. And CoSTAR, of course, continues to get better, easier to use, and more powerful as we add new capabilities.
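For readers who haven't run into them before, the core idea of a behavior tree is that robot actions are composed with simple control-flow nodes that are "ticked" from the root down. Here is a minimal sketch in Python – an illustration of the general concept, not CoSTAR's actual implementation (real behavior tree libraries also track a "running" status for actions that take time; the node and action names below are made up for the example):

```python
# Minimal behavior tree sketch (illustrative; not CoSTAR's implementation).
# Sequence succeeds only if every child succeeds, in order.
# Selector succeeds as soon as any one child succeeds.

SUCCESS, FAILURE = "success", "failure"

class Sequence:
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE  # stop at the first failing child
        return SUCCESS

class Selector:
    def __init__(self, *children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS  # stop at the first succeeding child
        return FAILURE

class Action:
    """Leaf node wrapping a robot skill; stubbed here with a fixed result."""
    def __init__(self, name, result=SUCCESS):
        self.name, self.result = name, result

    def tick(self):
        return self.result

# "Try to pick up the object; if any step fails, fall back to asking for help."
tree = Selector(
    Sequence(Action("detect object"), Action("plan grasp"), Action("pick up")),
    Action("ask for help"),
)
print(tree.tick())  # success
```

The appeal for non-experts is that the tree reads like a task description: the fallback behavior ("ask for help") is visible right in the structure, rather than buried in nested if-statements.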
One of my goals as a researcher is to understand what people need in order to interact with robots. I envision a future where people interact with robots every day to accomplish a wide variety of tasks, whether in industry or elsewhere. For this to work, though, we need to build robotic systems that don’t require a PhD to use.
Our latest study breaks test subjects up into four different groups. Each group gets a version of the system with either no perception or planning, perception but no planning, planning but no perception, or perception and planning integrated through our high-level “SmartMove” functionality. Users found the first case easiest to understand, but tended to perform better the more powerful the capabilities we gave them.
This video shows what the different cases look like and explains our study a little bit more. It is intended as a companion to the paper, so it won’t go into much more depth than this blog post.
Take a look at the preliminary version of the paper on arXiv for more details if you’re curious. As a field, I think robotics has a long way to go – but we’re getting there.