How to Talk Robot
Robots that don’t do what users want have to go back to the shop for reprogramming. But now there’s an algorithm that lets robots and non-roboticists communicate.

How do you tell a robot how to change its behavior? Programmers and roboticists do it all the time, of course. But most people don’t speak robot. As a result, when a robot fails to accomplish a task at the user level, its designers or programmers have to start from scratch.

But now a group of researchers at the Massachusetts Institute of Technology (MIT) has come up with a method that lets laypeople tell robots what they really want from them.

Traditionally, there have been two ways to teach a robot what to do. The first is to demonstrate a group of actions and have the robot copy them. The second—reinforcement learning—gives the robot a kind of virtual reward when it performs an action properly.
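To make the contrast concrete, here is a minimal toy sketch in Python. The one-state kitchen task, the action names, and the policy table are illustrative assumptions for this article, not code from any real system.

```python
import random

# Hypothetical actions for a one-state toy task.
ACTIONS = ["grab_red_mug", "grab_blue_mug"]

def learn_from_demonstration(demos):
    """Imitation: copy whatever action the human demonstrated."""
    policy = {}
    for state, action in demos:
        policy[state] = action  # mimic the human, state by state
    return policy

def learn_from_reward(target, episodes=50):
    """Reinforcement: keep whichever action earns the virtual reward."""
    policy = {"kitchen": random.choice(ACTIONS)}
    for _ in range(episodes):
        action = policy["kitchen"]
        reward = 1 if action == target else 0  # virtual reward on success
        if reward == 0:                        # no reward: try another action
            policy["kitchen"] = random.choice(ACTIONS)
    return policy

print(learn_from_demonstration([("kitchen", "grab_blue_mug")]))
print(learn_from_reward(target="grab_blue_mug"))
```

In the first case the robot simply copies the human; in the second it keeps whatever behavior the reward signal approves. Either way, the training signal only covers situations the robot has already seen.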

“Both of these methods suffer from a problem called distribution shift,” said Andi Peng, a Ph.D. student at MIT’s Computer Science and Artificial Intelligence Laboratory who spearheaded the project. “If you end up in somebody’s home, deploying the robot to do a task that it’s never seen before, you’re kind of screwed; the robot has no idea what’s happening. Usually, at that point, designers will kind of throw up their hands in the air and say, ‘Oh, I guess we should start over.’”

Peng and her colleagues created an algorithm that could be installed on a robot in a factory and still, years later, let consumers get exactly what they want from the machine. They found that the most efficient way to communicate a desire to a robot in non-technical language is to have users say yes or no to counterfactual actions, alternative scenarios that might explain what the user actually wanted.

Say a robot is told to grab a person’s favorite mug and ends up grabbing the wrong one. The robot doesn’t know what’s wrong, but it does know that there are only so many things that could be wrong. “Most objects have shape, size, and color,” said Peng. “We basically extract that information from the scene and then we can play all sorts of tricks with it.”
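As a rough illustration of that idea, the sketch below enumerates counterfactual objects by varying those three attributes. The specific attribute values and the brute-force enumeration are assumptions made for the example, not the team’s published method.

```python
from itertools import product

# The attributes Peng mentions: shape, size, and color (values assumed).
SHAPES = ["mug", "glass", "bowl"]
SIZES = ["small", "large"]
COLORS = ["red", "blue", "white"]

def counterfactuals(observed):
    """Enumerate alternative objects that differ from the one the robot
    grabbed in at least one attribute."""
    for shape, size, color in product(SHAPES, SIZES, COLORS):
        candidate = {"shape": shape, "size": size, "color": color}
        if candidate != observed:
            yield candidate

# The robot grabbed a large red mug; these are the alternatives to test.
wrong_grab = {"shape": "mug", "size": "large", "color": "red"}
for alternative in list(counterfactuals(wrong_grab))[:3]:
    print(alternative)
```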

The robot can essentially ask its owner if the issue is a matter of, say, color, and then wait for an answer before moving on to an additional question if one is needed. “If we know that information, we can train the robot more efficiently,” Peng said. “We’re taking the burden off of the human needing to explain exactly what the problem is.”

When confronted with some task-halting difficulty, or with what the user sees as a failure, the algorithm lets the robot search for possible solutions and present them to the user as simulated demonstrations. The human, watching what is in essence a question in the form of a video, can answer by just saying yay or nay. It’s a simple but highly efficient way for humans and machines to speak to each other.
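Here is a minimal sketch of that feedback loop, assuming each candidate fix can be rendered as a short simulated demonstration. The ask() callback stands in for showing the user a video; every name in it is hypothetical.

```python
def refine_by_feedback(candidates, ask):
    """Show one simulated demo per candidate fix; keep the first approved."""
    for candidate in candidates:
        demo = f"simulated demo: grab the {candidate['color']} {candidate['shape']}"
        if ask(demo):         # the user's yay (True) or nay (False)
            return candidate  # train on the approved counterfactual
    return None  # nothing approved; widen the search

# A scripted stand-in for a user who only approves the blue mug.
user_says = lambda demo: "blue mug" in demo

fix = refine_by_feedback(
    [{"shape": "mug", "color": "red"}, {"shape": "mug", "color": "blue"}],
    ask=user_says,
)
print(fix)  # {'shape': 'mug', 'color': 'blue'}
```

Because the user only ever answers yes or no to a demonstration, no robotics vocabulary is needed on their side of the conversation.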

“The biggest thing for us was trying to figure out a form of explanation to which the human could then give us feedback that didn’t involve some translation,” said Peng. “What would be a common language where we could both demonstrate what the problem was and also extract the human feedback for fixing the problem?”

Peng and her colleagues tested the algorithm on a group of random non-roboticists in the area. “I basically said ‘Hey, we have this task, we want the robot to do this thing, but we’re not exactly sure what the problem is. Can you help figure it out?’” Having been tried extensively by Peng and others in the lab, the algorithm performed with aplomb in the real world with real laypeople.

Those tests were done with a robot simulation. The next step is to put the algorithm on a real robot, and in fact the team plans to deploy it on a legged robot in the near future.

Then, maybe, we can finally get robots that know how to listen.

Michael Abrams is a technology writer in Westfield, N.J.
