The Moral Beauty of a Disobedient Robot

Imagine directing your personal robot to perform a task and getting a response of: “No, I can’t do that.”

Don’t be surprised. As robots are integrated more and more into everyday life, teaching them to “disobey” or question a directive is just as important as teaching them to follow orders, if not more so.

“That’s going to become a critical capability that all instructable robots will have to have,” says Matthias Scheutz, professor of computer science at Tufts University School of Engineering and director of its Human-Robot Interaction Laboratory.


Scheutz is principal investigator on a Department of Defense-funded project, “Moral Competence in Computational Architectures for Robots,” a collaboration with Brown University and Rensselaer Polytechnic Institute. The researchers are exploring how to equip robots with reasoning tools, a sense of right and wrong, and an understanding of the consequences of their actions when they face real-world dilemmas, such as life-and-death decisions.


“In situations that are morally charged, where there is possible conflict of principles that you have to resolve, we want to understand whether robots are held to the same standards [by society] as people are and what people even expect from robots,” Scheutz says.

Ultimately, the researchers hope to design a computational architecture that allows robots to reason and act ethically in such situations. But Scheutz is quick to point out that the work is open-ended. “This is not a project that will have solved the problem at the end,” he says. “This is only a beginning.”

A network representation of prescribed actions in eight distinct scenes. Visualized using the Vibrant Data mappr tool. Image: Brown University

RPI leads the work on logical frameworks and on how to reason through dilemma-like situations, while Brown leads the collection of empirical data on norms and on what people expect from robots. Scheutz’s lab focuses on implementing the findings on a robotic platform, experimenting with algorithms, and exploring how people respond to them.

Robots today can be given wrong instructions, intentionally or not, that could have harmful consequences for people, animals, or property. These systems need an algorithm that takes moral values into account and recognizes when an instruction would violate them, Scheutz says.
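To make that idea concrete, here is a minimal, hypothetical sketch of such a pre-execution check: the robot predicts the effects of an instruction and refuses if any predicted effect would violate a stored norm. The norms, the toy effect model, and the function names are invented for illustration; this is not the architecture the researchers are building.

```python
# Illustrative sketch only: a simplified pre-execution check that rejects
# an instruction when its predicted effects would violate a stored norm.
# The norms, effect model, and wording are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Norm:
    name: str          # human-readable statement of the norm
    violated_by: set   # predicted effects that would violate this norm

NORMS = [
    Norm("do not harm people", {"injures_person"}),
    Norm("do not damage property", {"breaks_object"}),
]

def predicted_effects(instruction: str) -> set:
    """Toy effect model: map an instruction to the effects it would cause."""
    effects = {
        "walk forward": {"moves_robot"},
        "walk off the table": {"moves_robot", "breaks_object", "damages_self"},
    }
    return effects.get(instruction, set())

def vet_instruction(instruction: str):
    """Return (ok, reason): refuse and explain if any norm would be violated."""
    effects = predicted_effects(instruction)
    for norm in NORMS:
        if norm.violated_by & effects:
            return False, f"No, I can't do that: it would violate '{norm.name}'."
    return True, "OK"

if __name__ == "__main__":
    print(vet_instruction("walk off the table"))
    # -> (False, "No, I can't do that: it would violate 'do not damage property'.")
```

The point of the sketch is only that refusal requires the robot to predict consequences and compare them against explicit norms, which is where the hard research questions begin.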

“This concept has become very prominent in the context of autonomous driving when an autonomous car has to make life and death decisions about who to run over,” Scheutz says.

If a driverless car “sees” a child run in front of it but cannot brake in time, for example, should the car continue forward or veer into parked cars? Swerving could endanger its passenger, but the risk of injury might be lower than the near certainty of killing the child.

“That kind of decision-making is not available now,” he says. “In autonomous cars, avoiding collisions is still the ultimate goal.”

Or consider a car stopped at a red light with no cross traffic. Another car approaches from behind at such high speed that it cannot stop before crashing into the stopped car. What should the front car do? A human driver, seeing the rapidly approaching car, would deliberately break the law and run the red light to avoid the accident. “There is no autonomous car now that can purposely break the law to avoid an accident,” he says.

Early findings gave the researchers some insight into how people judge decisions made by human and autonomous agents. For example, do people believe it is morally right or wrong for someone to take an action that saves the lives of several people while killing fewer, rather than take no action and let more die (the so-called trolley dilemma)? And do they hold the same opinion when the action, or inaction, is a robot’s? The researchers found that most people place more blame on a person for taking the action. In the same situation, though, a robot is blamed more for not taking the action.

The researchers continue to work on understanding that difference and on the conditions under which opinions might be the same for both. They found that if the robot looks very human-like, or if those making the judgment are told that the robot truly struggled with the decision, the difference in blame disappears.

“Our hypothesis is that people subconsciously can simulate the robot’s decision-making dilemma better when it has a human-like appearance or when they are told that it was struggling as a human would,” Scheutz says.

One of the many engineering challenges in robotics is dealing with uncertainty, such as deciding whether the robotic system is receiving correct information from its sensors. That uncertainty also needs to be taken into account in moral reasoning and decision-making.

“We don’t yet have a good way of factoring in uncertainty in comprehensive normative reasoning. We have the logical descriptions of normative principles, and we have a way of dealing with the specificity of the real world and the uncertainty of the sensors, but the approach we have proposed so far does not scale,” Scheutz says. “We can only handle a small number of conflicting norms. Yet people have a large number of norms.”
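One common way to combine conflicting norms with uncertain perception, shown below only as a hedged illustration and not as the researchers’ logical framework, is to score each candidate action by the expected cost of the norms it might violate. The norm costs and violation probabilities are invented numbers loosely inspired by the driving scenarios above.

```python
# Illustrative sketch only: resolving a small set of conflicting norms under
# sensor uncertainty by minimizing expected violation cost. The costs and
# probabilities are invented; this is not the project's actual framework.

# Each norm gets a cost reflecting its priority (higher = worse to violate).
NORM_COSTS = {
    "hit_pedestrian": 1000.0,
    "endanger_passenger": 300.0,
    "run_red_light": 10.0,
}

# For each candidate action, the probability (given the car's uncertain
# perception) that taking it violates each norm.
ACTIONS = {
    "brake_and_continue_straight": {"hit_pedestrian": 0.9},
    "swerve_into_parked_cars": {"endanger_passenger": 0.3},
    "run_red_light": {"run_red_light": 1.0, "hit_pedestrian": 0.05},
}

def expected_violation_cost(violation_probs: dict) -> float:
    """Sum of P(violate norm) * cost(norm) over the norms an action touches."""
    return sum(p * NORM_COSTS[norm] for norm, p in violation_probs.items())

def choose_action(actions: dict) -> str:
    """Pick the action with the lowest expected violation cost."""
    return min(actions, key=lambda name: expected_violation_cost(actions[name]))

if __name__ == "__main__":
    for name, probs in ACTIONS.items():
        print(name, expected_violation_cost(probs))
    print("chosen:", choose_action(ACTIONS))
```

With three norms and three actions this is trivial to evaluate; the scaling problem Scheutz describes arises because people carry a very large number of context-dependent norms whose conflicts cannot simply be enumerated and weighed this way.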

Another challenge is natural language processing. People express moral judgments, reprimand others, assign blame, and respond to blame in ways that robots cannot. “It’s very likely that robots will be blamed because they will screw up, and they need to be able to interact and make their justifications to people,” he says. “We do not know how to do that yet.”

“The bar is really high” for laying the architectural foundation in the robotic control system to account for the whole range of ethical questions where human norms apply, Scheutz says. Society expects people to abide by established norms. When they don’t, and the violation is illegal, they face consequences.

“Current machines have no such notion, and yet they are being increasingly deployed in human society and in social contexts where we are bringing into the interaction all of these expectations,” he adds.

Nancy Giges is an independent writer.

