
Explainable AI: Why Did the Robot Do That?

Created September 2017

When humans understand how autonomous systems work, they can better manage, team with, and use them in challenging environments. To help human users trust their robot counterparts in critical situations, we develop tools that allow autonomous systems to explain their behavior.

What Happens When You Work with Something You Don’t Understand?

The U.S. Department of Defense (DoD) plans to make greater use of autonomous systems to both help and protect the warfighter. Autonomous robots also help protect people and search for survivors during rescue efforts. In both contexts, robots and humans work together in unpredictable and sometimes dangerous environments. In these situations, human trust in the autonomous robot is crucial.

One major challenge to achieving trust in human–robot interaction and collaboration is that humans often do not understand the robot's behavior. Much of the robot's context sensing, analysis, and planning is invisible to its human team. Unable to see the rules and algorithms governing the robot's decision making, humans often do not understand why the robot acts the way it does. This lack of insight leads to a loss of trust in the robot. In the time-sensitive and hazardous contexts of dismounted warfighters or search-and-rescue workers, users cannot afford to lose trust in the robots tasked with helping them.

Our Collaborators

On this project, we work with Siddhartha Srinivasa, of the Carnegie Mellon University Robotics Institute; Manuela Veloso, Head of the Machine Learning Department in the Carnegie Mellon University School of Computer Science; and Joshua Peschel, of Iowa State University.


Helping Humans Understand Robots

The goal of the SEI's Why Did the Robot Do That? project is to increase users' trust in robots by providing users with English explanations of robot behavior. Warfighters and first responders must trust a robot's actions before they can comfortably depend on it to locate a buried mine or help find survivors. When warfighters and first responders understand robot behavior, humans and robots can work together effectively to achieve their missions.

The project aims to understand what humans expect in explanations and generate new algorithms that allow robots to provide that information on demand. If the robot can generate an explanation of its behavior, it will increase users' trust in the autonomous system because users can anticipate what it is doing and why. The project has three main foci.

First, because there are so many different robots, sensors, and tasks, we need new algorithms to learn how to translate the diverse data, rules, and planning algorithms into human-understandable English. We are exploring crowdsourcing as a cheap and efficient way to generate many explanations quickly and then learn which types of explanations are best for each robot scenario.
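As a loose illustration of how crowdsourced ratings might drive that selection, the Python sketch below aggregates worker scores to pick the best-rated explanation type for a given scenario. The scenario names, explanation types, and ratings are all invented for illustration; this is a minimal sketch, not the project's actual pipeline.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical crowdsourced ratings: (scenario, explanation_type) -> worker
# scores on a 1-5 scale. All names and numbers here are illustrative only.
ratings = {
    ("blocked_corridor", "sensor_based"): [2, 3, 2],
    ("blocked_corridor", "goal_based"):   [4, 5, 4],
    ("blocked_corridor", "rule_based"):   [3, 3, 4],
    ("low_battery",      "sensor_based"): [5, 4, 4],
    ("low_battery",      "goal_based"):   [3, 2, 3],
}

def best_explanation_type(scenario):
    """Return the explanation type with the highest mean crowd rating."""
    scores = defaultdict(list)
    for (scen, etype), vals in ratings.items():
        if scen == scenario:
            scores[etype].extend(vals)
    return max(scores, key=lambda etype: mean(scores[etype]))

print(best_explanation_type("blocked_corridor"))  # -> "goal_based"
```

Even this toy version shows the appeal of crowdsourcing: many inexpensive judgments per scenario can stand in for costly expert labeling when learning which explanation style users prefer.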

Once we translate behaviors and observations to English meanings, we need to narrow down the explanations to what humans are interested in. Many factors from many sensors and algorithms can contribute to a robot's behavior. Too much information can confuse the user, just as too little information can. This project is investigating what types of information are important across different contexts, how much information is helpful, and at what threshold users experience information overload.
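The team's "Verbalization: Narration of Autonomous Robot Experience" paper (listed under Learn More) frames this tuning as a verbalization space whose parameters control, for example, how abstract or specific a narration is. The sketch below is a minimal, invented illustration of that idea: one underlying route rendered at two levels of detail. The route format and wording are assumptions, not the project's implementation.

```python
# A minimal, invented sketch of detail-tunable narration; the route format
# and phrasing are assumptions, not the project's implementation.
route = [
    {"action": "move", "place": "7th-floor hallway", "meters": 12.5},
    {"action": "turn", "direction": "left"},
    {"action": "move", "place": "kitchen corridor", "meters": 6.0},
]

def narrate(route, specificity="summary"):
    """Render the same route at two levels of detail."""
    if specificity == "summary":
        moves = [s for s in route if s["action"] == "move"]
        total = sum(s["meters"] for s in moves)
        return f"I traveled {total:.1f} meters through {len(moves)} corridors."
    # specificity == "detailed": one clause per route segment
    parts = []
    for s in route:
        if s["action"] == "move":
            parts.append(f"I moved {s['meters']:.1f} m along the {s['place']}")
        else:
            parts.append(f"I turned {s['direction']}")
    return ". ".join(parts) + "."

print(narrate(route))                          # short summary for quick checks
print(narrate(route, specificity="detailed"))  # full narration when needed
```

The point is the mechanism, not the wording: a single execution log can yield explanations of very different lengths, and the project's studies investigate where in that range users get enough information without overload.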

Finally, we test how explanations and their personalization affect users' trust in robots, both in laboratory studies and in field deployments with search-and-rescue teams. Human trust in robots will play a large role in their future deployability. This work aims to increase user trust to limit robot abandonment and increase adoption.

The SEI collaborates with members of the School of Computer Science at CMU on this project. The shared-autonomy research of Carnegie Mellon University Robotics Institute collaborator Siddhartha Srinivasa has focused our work on crowdsourced explanations and novel-action optimization for robotic manipulators.

The collaborative indoor robots developed by Manuela Veloso, Head of the Machine Learning Department, navigate and act autonomously throughout the School of Computer Science building and use our verbalization algorithms to explain their behavior. In addition, Prof. Joshua Peschel of Iowa State University is implementing our algorithms on his small robotic boats and deploying them in high-fidelity field scenarios with search-and-rescue stakeholders to evaluate how explanations affect trust.

Looking Ahead

Why Did the Robot Do That? focuses on explanations of robot behavior after execution to reduce user confusion and increase trust. Our goal for the follow-on project What Will the Robot Do Next? is to proactively adapt robot behavior during execution to enable users to accurately predict what the robot will do next.

Learn More

Why Did the Robot Do That?

December 04, 2016 Blog Post
Stephanie Rosenthal

In this blog post, I describe research that aims to help robots explain their behaviors in plain English and offer greater insights into their decision...


Spatial References and Perspective in Natural Language Instructions for Collaborative Manipulation

August 26, 2016 Conference Paper
Shen Li (Carnegie Mellon University), Rosario Scalise (Carnegie Mellon University), Henny Admoni (Carnegie Mellon University), Stephanie Rosenthal, Siddhartha S. Srinivasa (Carnegie Mellon University)

In this work, we investigate spatial features and perspectives in human spatial references and compare word usage when instructing robots vs. instructing other humans....


Enhancing Human Understanding of a Mobile Robot’s State and Actions using Expressive Lights

August 26, 2016 Conference Paper
Kim Baraka (Carnegie Mellon University), Stephanie Rosenthal, Manuela Veloso (Carnegie Mellon University)

In this work, we present an online study to evaluate the effect of robot communication through expressive lights on people's understanding of the robot's state and actions....


Dynamic Generation and Refinement of Robot Verbalization

August 26, 2016 Conference Paper
Vittorio Perera (Carnegie Mellon University), Sai P. Selvaraj (Carnegie Mellon University), Stephanie Rosenthal, Manuela Veloso (Carnegie Mellon University)

With a growing number of robots performing autonomously without human intervention, it is difficult to understand what the robots experience along their routes during execution without looking at execution logs. Rather than looking through logs, our goal...


UAV and Service Robot Coordination for Indoor Object Search Tasks

July 26, 2016 Conference Paper
Sandeep Konam (Carnegie Mellon University), Stephanie Rosenthal, Manuela Veloso (Carnegie Mellon University)

In this paper, we propose the concept of coordination between CoBot and the Parrot ARDrone 2.0 to perform service-based object search tasks, in which CoBot localizes and navigates to the general search areas carrying the ARDrone and the ARDrone searches...


Verbalization: Narration of Autonomous Robot Experience

July 09, 2016 Conference Paper
Stephanie Rosenthal, Sai P. Selvaraj (Carnegie Mellon University), Manuela Veloso (Carnegie Mellon University)

In this work, we address the generation of narrations of autonomous mobile robot navigation...
