Responsible AI

Created October 2019 • Updated February 2022

Artificial intelligence (AI) holds great promise to empower us with knowledge and augment our effectiveness. We can—and must—ensure that we keep humans safe and in control, particularly with regard to government and public sector applications that affect broad populations. How can AI development teams harness the power of AI systems and design them to be valuable to humans?

Building Ethics into AI System Interactions

The Department of Defense names the ethical use of AI as a Defense Strategy imperative. To ensure secure, ethical, and trusted interactions between humans and AI systems, we must create them to be

  • accountable to humans. We must build AI systems in ways that ensure humans remain in ultimate control of, and responsible for, everything the system does.
     
  • cognizant of speculative risks and benefits. Long before humans use an AI system, we must anticipate and evaluate its risks, including risks to their personal information and to decisions that affect their lives.
     
  • respectful and secure. We must ensure protection of privacy and data rights to gain trust and provide a secure system. AI systems must require only the bare minimum of information needed to do what is requested. They must be accountable and transparent to the humans who make, monitor, and use them.
     
  • honest and usable. We must design the system to value transparency; the goal is to engender the trust of everyone interacting with it. In addition, it’s important to explain system limitations—biases and weaknesses in the system—in language that the audience understands. An AI system should also provide visibility into itself, so that humans can easily discern when they are interacting with it and when the system is taking action or making decisions. (A brief sketch of how these qualities might look in code follows this list.)
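
To make these qualities concrete, here is a minimal Python sketch of one way a system could disclose that it is an AI, state its limitations in plain language, and log its actions for human review. The names used here (AIResponse, AuditLog, routing-model-v1) are hypothetical illustrations, not part of any published framework.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AIResponse:
        """A model output packaged with the disclosures described above."""
        content: str                          # the system's answer or recommendation
        is_ai_generated: bool = True          # honest: humans can tell they are interacting with an AI
        limitations: str = ""                 # known biases and weaknesses, in plain language
        requires_human_approval: bool = True  # accountable: a human stays in ultimate control

    @dataclass
    class AuditLog:
        """A transparent record of every action the system takes or proposes."""
        entries: list = field(default_factory=list)

        def record(self, actor: str, action: str, response: AIResponse) -> None:
            self.entries.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "actor": actor,
                "action": action,
                "ai_generated": response.is_ai_generated,
                "approved_by_human": not response.requires_human_approval,
            })

    # Usage: the system discloses itself and defers the final decision to a person.
    log = AuditLog()
    answer = AIResponse(
        content="Recommend route B.",
        limitations="Trained on 2021 traffic data; may not reflect current road closures.",
    )
    log.record(actor="routing-model-v1", action="recommendation", response=answer)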

Despite many discussions in the AI field about ethics and trust, few frameworks are available to use as guidance when creating these systems.

A Human-Machine Teaming Framework to Guide Development

To support this DoD imperative, the SEI has developed the Human-Machine Teaming (HMT) Framework for Designing Ethical AI Experiences. When used with a set of technical ethics, it guides AI development teams in creating accountable, secure, and usable systems. The HMT Framework is based on reviews of existing ethical codes and best practices in human-computer interaction and software development.

Building a trustworthy AI system requires a team that coalesces around a shared set of ethics. To be effective in this work, the development team must be diverse with regard to gender, race, education, thinking process, and disability status, as well as job role and skill set. The goal of bringing diverse individuals together is to reduce bias in the system and to account for a broad set of unintended consequences. For example, a diverse team can reduce the chances that the team creates solutions that reflect its own biases, such as computer vision systems that recognize only white faces. Minimizing bias and unexpected behavior in AI systems is also promoted in the Defense Strategy ethics resolution.

To support team efforts, the SEI HMT Framework suggests activities that promote understanding of people’s needs and enable the team to uncover potential issues before they arise. For example, usability testing can help determine whether the audience understands how the AI system works and whether the system complies with the HMT Framework. The work requires deep conversations and agreement, within the team and across the organization, about issues as they come up, so that the team is aligned before it faces a difficult situation. The framework provides a checklist that developers can use to ensure that the needed qualities are built into the system.
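
As an illustration only (the published checklist is a document, not code), a team might encode each checklist item as a record that blocks release until a named human has verified it. A minimal Python sketch, with hypothetical item wording and field names:

    from dataclasses import dataclass

    @dataclass
    class ChecklistItem:
        """One quality the team must verify before releasing the system."""
        quality: str             # e.g., "accountable" or "honest and usable"
        question: str            # what the team must confirm
        verified: bool = False
        signed_off_by: str = ""  # the human who takes responsibility

    checklist = [
        ChecklistItem("accountable", "Can a human override or halt every system action?"),
        ChecklistItem("respectful and secure", "Does the system collect only the minimum data needed?"),
        ChecklistItem("honest and usable", "Are the system's limitations explained in plain language?"),
    ]

    def ready_for_release(items) -> bool:
        """Release is blocked until every item is verified and owned by a named human."""
        return all(item.verified and item.signed_off_by for item in items)

    assert not ready_for_release(checklist)  # nothing has been signed off yet

Requiring a named person on every item keeps a human, not the system, accountable for each quality.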

The goal of applying the HMT Framework is to establish clear expectations and mitigation plans for responding in constructive ways that protect people. AI is still evolving; this first step toward helping teams manage the complexity inherent in these systems will be built upon as work on AI systems progresses.

Learn More

What is Explainable AI?

January 17, 2022 Blog Post
Violet Turri

Explainable artificial intelligence is a powerful tool in answering critical How? and Why? questions about AI systems and can be used to address rising ethical and legal...

Implementing the DoD's Ethical AI Principles

January 13, 2022 Podcast
Alexandrea Steiner, Carol J. Smith

In this SEI podcast, Alex Van Deusen and Carol Smith, both with the SEI's AI Division, discuss a recent project in which they helped the Defense Innovation Unit of the U.S. Department of Defense to develop guidelines for the responsible use of...

Bias in AI: Impact, Challenges, and Opportunities

September 30, 2021 Podcast
Carol J. Smith, Jonathan Spring

Carol Smith discusses with Jonathan Spring the hidden sources of bias in artificial intelligence (AI) systems and how systems developers can raise their awareness of bias, mitigate consequences, and reduce...

Human-Centered AI

June 25, 2021 White Paper
Hollen Barmer, Rachel Dzombak, Matt Gaston, Jay Palat, Frank Redner, Carol J. Smith, Tanisha Smith

This white paper discusses Human-Centered AI: systems that are designed to work with, and for,...

Designing Trustworthy AI for Human-Machine Teaming

March 09, 2020 Blog Post
Carol J. Smith

Artificially intelligent (AI) systems hold great promise to empower us with knowledge and enhance human...

Designing Ethical AI Experiences: Checklist and Agreement

December 19, 2019 Fact Sheet

This document can be used to guide the development of accountable, de-risked, respectful, secure, honest, and usable artificial intelligence (AI) systems with a diverse team aligned on shared...
