2023 Year in Review

Assuring Trustworthiness of AI for Warfighters

Advances in artificial intelligence (AI), machine learning, and autonomy have created a proliferation of AI platforms. While these technologies promise advantages on the battlefield, developers, integrators, and acquisition personnel must overcome engineering challenges to ensure safe and reliable operation. Currently, there are no established standards for testing and measuring calibrated trust in AI systems.

In 2023, the Office of the Under Secretary of Defense for Research and Engineering (OUSD(R&E)) and the SEI launched a center aimed at establishing methods for assuring trustworthiness in AI systems with emphasis on interaction between humans and autonomous systems. The Center for Calibrated Trust Measurement and Evaluation (CaTE) aims to help the Department of Defense (DoD) ensure that AI systems are safe, reliable, and trustworthy before being fielded to government users in critical situations.

The human has to understand the capabilities and limitations of the AI system to use it responsibly.

Kimberly Sablon
Principal Director, Trusted AI and Autonomy, OUSD(R&E), Department of Defense

Since launching, CaTE has embarked on a multi-year project to address the complexity and engineering challenges of AI systems. Drawing on software, systems, and AI engineering practices, the center is developing standards, methods, and processes for providing evidence of assurance, as well as measures for determining calibrated levels of trust.

“The human has to understand the capabilities and limitations of the AI system to use it responsibly,” said Kimberly Sablon, the principal director for trusted AI and autonomy within OUSD(R&E). “CaTE will address the dynamics of how systems interact with each other, and especially the interactions between AI and humans, to establish trusted decisions in the real world. We will identify case studies where AI can be experimented with and iterated in hybrid, live, virtual, and constructive environments with the human in the loop.”

CaTE will be a collaborative research and development center and will work with all branches of the military on areas such as human-machine teaming and measurable trust. It is the first such hub led by a non-governmental organization. Carnegie Mellon University (CMU) has been at the epicenter of AI, from the creation of the first AI computer program in 1956 to pioneering work in self-driving cars and natural language processing.

“Developing and implementing AI technologies to keep our armed forces safe is both a tremendous responsibility and a tremendous privilege,” said CMU President Farnam Jahanian. “Carnegie Mellon University is grateful to have the opportunity to support the DoD in this work and eager to watch CaTE quickly rise to the forefront of leveraging AI to strengthen our national security and defense.”

Together with OUSD(R&E) collaborators and partners in industry and academia, SEI researchers will lead the initiative to standardize AI engineering practices, assuring safe human-machine teaming in the context of DoD mission strategy.

“When military personnel are deployed in harm’s way, it’s of the utmost importance to give them not only the greatest capability but also the assurance that the AI and autonomous systems they depend on are safe and reliable,” said Paul Nielsen, SEI director and chief executive officer. “Because of our work to define the discipline of AI engineering for robust, secure, human-centered, and scalable AI, the SEI is uniquely positioned to support this effort.”

For more information about the SEI’s AI engineering research, visit


Photos: U.S. Marines, U.S. Department of Defense