2023 Year in Review

Supporting the Human and Technical Elements of Responsible AI for National Defense

Since the Department of Defense (DoD) adopted ethical artificial intelligence (AI) principles in 2020, the SEI has engaged with multiple defense agencies to support their responsible AI (RAI) implementations.

The Chief Digital and Artificial Intelligence Office (CDAO) leads DoD implementation of RAI policy and guidance and has engaged the SEI to create training for AI workforce development. The SEI has extensive knowledge and experience in the field of RAI and how it extends across different types of complex systems.

The SEI created two RAI curricula: one for future specialists in RAI system development work and another for those just getting familiar with RAI. The curricula helped inform a revision of the needed knowledge, skills, abilities, and tasks for the DoD’s data workforce and AI workforce. RAI is critical to many of the roles identified in the DoD Cyber Workforce Framework, and the CDAO is using the curricula to develop a stand-alone course on RAI principles and techniques for selected data and AI roles.

Education in assuring safe, ethical, and responsible AI is challenging. While traditional software systems are relatively static, AI systems dynamically combine data sets and connect to other systems, introducing new kinds of risk. The course provides relevant training and materials about these risks and many other RAI challenges.

The Defense Innovation Unit (DIU) is another agency concerned with RAI. It is the only DoD organization focused exclusively on fielding and scaling commercial technology across the U.S. military at commercial speed. The SEI has been providing technical advising and support at all phases of the DIU pipeline as the organization reviews and evaluates potential vendors to address DoD mission needs, including the need for RAI solutions as stated in the DoD’s Ethical Principles for Artificial Intelligence.


Part of this work supports DIU’s operationalization of Responsible Artificial Intelligence Guidelines in Practice, co-authored by SEI researchers. These guidelines include worksheets to help vendors better plan, develop, and deploy AI tools. Completing the worksheets enables vendors to develop their own test metrics and facilitates the SEI’s independent testing and evaluation of developed tools.

“Companies developing solutions often have not thought about these very ethically driven questions, such as harms modeling,” noted Sumanyu Gupta, machine learning engineer and team lead in the SEI’s AI Division. The SEI has been directly engaging AI solution vendors to consider ethical insights in everything from tool requirements to roadblocks. Working through the worksheets can even surface features that the vendors had not previously considered.

The SEI integrates RAI principles, AI fundamentals, software engineering and acquisition practices, and workforce development expertise to address the technical and human obstacles faced when planning, developing, and deploying AI systems. The work is also informed by the institute’s relationship with Carnegie Mellon University (CMU), including collaborative research on explainable AI and the fairness of AI systems. SEI AI researchers Carol Smith and Matt Hale participated in the 2023 CMU-organized workshops on the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework. Smith also serves on the Advisory Council and Interim Leadership Team of the CMU Block Center Responsible AI Initiative. These ongoing connections enrich the SEI’s engagements with the CDAO and DIU, which support the U.S. military’s legal, ethical, and policy commitments to be responsible, equitable, traceable, reliable, and governable in its adoption of AI.