Created October 2019 • Updated February 2022
Artificial intelligence (AI) holds great promise to empower us with knowledge and augment our effectiveness. We can—and must—ensure that we keep humans safe and in control, particularly with regard to government and public sector applications that affect broad populations. How can AI development teams harness the power of AI systems and design them to be valuable to humans?
Building Ethics into AI System Interactions
The Department of Defense names ethical use of AI as a Defense Strategy imperative. To ensure secure, ethical, and trusted interaction between humans and AI systems, we must create them to be
- accountable to humans. We must build AI systems in ways that ensure that humans are always in ultimate control and responsible for all that the AI system will do.
- cognizant of speculative risks and benefits. Long before humans use an AI system, we must anticipate and evaluate risks to their personal information and to decisions that affect their lives.
- respectful and secure. We must ensure protection of privacy and data rights to gain trust and provide a secure system. AI systems must require only the bare minimum of information needed to do what is requested. They must be accountable and transparent to the humans who make, monitor, and use them.
- honest and usable. We must design the system to value transparency; the goal is to engender the trust of everyone interacting with it. In addition, it’s important to explain system limitations—biases and weaknesses in the system—in language that the audience understands. An AI system should also provide visibility into itself, so that humans can easily discern when they are interacting with it and when the system is taking action and/or making decisions.
Despite many discussions in the AI field about ethics and trust, few frameworks are available to use as guidance when creating these systems.
A Human-Machine Teaming Framework to Guide Development
To contribute to promotion of the DoD imperative, the SEI has developed the Human-Machine Teaming (HMT) Framework for Designing Ethical AI Experiences. When used with a set of technical ethics, it will guide the AI development team to create an accountable, secure, and usable system. The HMT Framework is based on reviews of existing ethical codes and best practices in human-computer interaction and software development.
Building a trustworthy AI system requires a team that coalesces around a shared set of ethics. To be effective in this work, the development team must be diverse with regard to gender, race, education, thinking process, and disability status, as well as job role and skill set. The goal of bringing diverse individuals together is to reduce bias in the system and to account for a broad set of unintended consequences. For example, a diverse team can reduce the chances that the team creates solutions that reflect its own biases—such as computer vision systems that recognize only white faces. Minimizing bias and unexpected behavior in AI systems is promoted in the Defense Strategy ethics resolution.
To support team efforts, the SEI HMT Framework suggests activities that promote understanding of people's needs and enable the team to uncover potential issues before they arise. For example, usability testing can help determine whether the audience understands how the AI system works and whether the system complies with the HMT Framework. The work requires deep conversations and agreement within the team and across the organization about issues as they come up, to align the team before it faces a difficult situation. The framework provides a checklist that developers can use to ensure that the needed qualities are built into the system.
The goal of using the HMT Framework is to set clear expectations and to make mitigation plans for responding in constructive ways that protect people. AI is still evolving. This first step toward helping teams deal with the complexity inherent in these systems will be built upon as work on AI systems progresses.
January 13, 2022 Podcast
In this SEI podcast, Alex Van Deusen and Carol Smith, both with the SEI's AI Division, discuss a recent project in which they helped the Defense Innovation Unit of the U.S. Department of Defense to develop guidelines for the responsible use of AI.
September 30, 2021 Podcast
Carol Smith discusses with Jonathan Spring the hidden sources of bias in artificial intelligence (AI) systems and how systems developers can raise their awareness of bias, mitigate consequences, and reduce risks.
December 19, 2019 Fact Sheet
This document can be used to guide the development of accountable, de-risked, respectful, secure, honest, and usable artificial intelligence (AI) systems with a diverse team aligned on shared ethics.