AI Trust Lab: Engineering for Trustworthy AI

Created October 2019 • Updated April 2024

The Software Engineering Institute’s Trust Lab advances trustworthy, human-centered, and responsible AI engineering practices. The lab accelerates collaboration: we work with other experts and with stakeholder organizations to improve human interactions with AI-enabled and autonomous systems and to create frameworks, tools, and guidelines that support DoD mission success.

Through research and customer engagements, we create tools and methods that support trustworthy, human-centered, and responsible AI engineering practices, integrating responsible AI (RAI), human-computer interaction (HCI), user experience (UX), human-machine teaming (HMT), information architecture (IA), and other related practices.

AI Must Work with—and for—People

AI holds great promise to empower us with knowledge and augment our effectiveness, but with that promise come challenges and risks. For example, bias in datasets can amplify discrimination, shortcomings in transparency and explainability can lead to misuse or disuse of AI, and systems may reveal private or protected information without proper permission. We can—and must—ensure that we keep humans safe and in control, particularly with regard to government and public sector applications that affect broad populations and operate in high-stakes contexts. How can AI development teams harness the power of AI systems and design them to be valuable to humans?

Trustworthy systems fit mission and user needs, use appropriate data, and are reliable, robust, and secure. Developing and designing trustworthy, human-centered AI systems is a key challenge for the AI engineering discipline, and the SEI established the Trust Lab to advance practices in engineering for trustworthy AI.

Developing Measurements of Trustworthiness Through Collaboration and Research

The Trust Lab collaborates with organizations, including the U.S. Defense Innovation Unit, the U.S. Department of Defense Chief Digital and Artificial Intelligence Office (CDAO), DARPA, and NIST, to develop measurements of trustworthiness in a variety of areas (a brief measurement sketch follows the list):

  • Use case and usability
  • Transparency and explainability
  • Equity, justice, and fairness
  • Likelihood of failure
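
To make these areas concrete, the sketch below computes two simple, widely used measurements: a demographic parity difference (fairness) and an expected calibration error (a rough proxy for likelihood of failure). This is a minimal illustration of the kind of measurement involved, not Trust Lab tooling; the metric choices, function names, and synthetic data are assumptions.

```python
# Illustrative only: two simple trustworthiness measurements on synthetic data.
# These are standard textbook metrics, not Trust Lab tools.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Fairness: gap in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Failure-likelihood proxy: gap between predicted confidence and observed accuracy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (y_prob >= lo) & (y_prob < hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(y_true[in_bin].mean() - y_prob[in_bin].mean())
    return ece

# Synthetic predictions for demonstration only.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_prob = rng.random(1000)
group = rng.integers(0, 2, 1000)
y_pred = (y_prob > 0.5).astype(int)

print("demographic parity difference:", demographic_parity_difference(y_pred, group))
print("expected calibration error:", expected_calibration_error(y_true, y_prob))
```

In practice, each measurement would be chosen with stakeholders and interpreted in the context of the mission and the people the system affects.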

Trustworthiness often depends on the data powering the AI system and the outcomes for people who use or are affected by the system. The Trust Lab team is currently creating tools that support data inspection and help ML engineers and development teams fully assess datasets and determine their appropriateness. The Trust Lab is also examining the negative bias, impacts, harms, and risks that an AI system can introduce, and mitigating them when needed.
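
A minimal sketch of what such a data inspection pass might look like, assuming a tabular dataset with a protected-attribute column; the column names, data, and inspect_dataset helper are hypothetical, not part of the Trust Lab's tools.

```python
# Illustrative only: per-group representation, label balance, and missingness.
import pandas as pd

def inspect_dataset(df, protected_col, label_col):
    """Report row share, positive-label rate, and missing values per group."""
    report = {}
    for group, subset in df.groupby(protected_col):
        report[group] = {
            "share_of_rows": len(subset) / len(df),
            "positive_label_rate": subset[label_col].mean(),
            "missing_values": int(subset.isna().sum().sum()),
        }
    return report

# Hypothetical loan-approval data for demonstration only.
df = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "55+", "55+", "55+"],
    "approved":  [1, 0, 1, 0, 0, 0],
})

for group, stats in inspect_dataset(df, "age_group", "approved").items():
    print(group, stats)
```

A skewed share of rows or a large gap in positive-label rates between groups is a signal to investigate a dataset's provenance before training on it.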

Learn More

Contextualizing End-User Needs: How to Measure the Trustworthiness of an AI System

July 17, 2023 Blog Post
Carrie Gardner, Katherine-Marie Robinson, Carol J. Smith, Alexandrea Steiner

As potential applications of artificial intelligence (AI) continue to expand, the question remains: will users want the technology and trust it? This blog post explores how to measure the trustworthiness of...

Trust and AI Systems

August 11, 2022 Podcast
Carol J. Smith, Dustin D. Updyke

Carol Smith, a senior research scientist in human-machine interaction, and Dustin Updyke, a senior cybersecurity engineer in the SEI’s CERT Division, discuss the construction of trustworthy AI systems and factors influencing human trust of AI systems...

What is Explainable AI?

January 17, 2022 Blog Post
Violet Turri

Explainable artificial intelligence is a powerful tool in answering critical How? and Why? questions about AI systems and can be used to address rising ethical and legal...

Implementing the DoD's Ethical AI Principles

January 13, 2022 Podcast
Alexandrea Steiner, Carol J. Smith

In this SEI podcast, Alex Van Deusen and Carol Smith, both with the SEI's AI Division, discuss a recent project in which they helped the U.S. Department of Defense's Defense Innovation Unit develop guidelines for the responsible use of...

Bias in AI: Impact, Challenges, and Opportunities

September 30, 2021 Podcast
Carol J. Smith, Jonathan Spring

Carol Smith discusses with Jonathan Spring the hidden sources of bias in artificial intelligence (AI) systems and how systems developers can raise their awareness of bias, mitigate consequences, and reduce...

Human-Centered AI

June 25, 2021 White Paper
Hollen Barmer, Rachel Dzombak, Matt Gaston, Jay Palat, Frank Redner, Carol J. Smith, Tanisha Smith

This white paper discusses Human-Centered AI: systems that are designed to work with, and for,...
