AISIRT Ensures the Safety of AI Systems
Created November 2023
The advance of artificial intelligence (AI) promises enormous benefits, but it also introduces new and dangerous risks. To identify, analyze, and respond to the threats, vulnerabilities, and incidents that emerge as AI and machine learning (ML) continue to advance, the SEI developed the first Artificial Intelligence Security Incident Response Team (AISIRT).
AI Creates New Capabilities, but Also New Risks
The emergence of AI has created a new class of software techniques that offer unprecedented capabilities for solving difficult problems that directly affect the economic health, societal well-being, and security of the nation. These techniques can perform feats that once seemed unattainable for software, like finding patterns in complex data that humans are unable to detect on their own, or permitting a single individual to swiftly complete tasks that previously required entire teams.
The safe and effective adoption and use of these capabilities, however, is not guaranteed. Improper development, implementation, or use of AI can result in disastrous consequences, especially considering its widespread use in sectors like critical infrastructure or the military. In fact, several large-scale AI and ML vulnerabilities have already had far-reaching impacts and implications, and these events are likely to proliferate as AI rapidly evolves and more organizations embrace its potential to expand their frontiers.
To provide the U.S. with a capability for addressing the risks introduced by the rapid growth and widespread use of AI, the SEI formed a first-of-its-kind AISIRT.
AISIRT: A Collaboration Between the SEI and Carnegie Mellon University
The AISIRT is part of Carnegie Mellon University's (CMU) coordinated effort to advance AI, and it involves collaboration between SEI researchers and CMU faculty, staff, and students. CMU is a leading academic and research institution in computer science and engineering, AI engineering, and cybersecurity.
AISIRT Provides Protection as AI Evolves
To establish the AISIRT, the SEI leveraged its expertise in cybersecurity and AI, along with its 35-year track record of developing cyber response capabilities and building response teams across the globe. The goal of the AISIRT is to lead a community-focused research and development effort to ensure the safe and effective development and use of AI technologies as they continue to evolve and grow.
The challenges of maintaining effective monitoring of AI systems include identifying when AI systems are operating out of tolerance; determining whether they have been subjected to external tampering or attack; locating defects that need to be corrected; and diagnosing and responding to suspected or known problems. In addition, response capabilities require successful community and team building with both national and international organizations. The SEI delivers solutions to these challenges thanks to its technical expertise and to its vast partnership network, which includes software vendors such as Google and Microsoft, many AI and ML vendors, and organizations across the military, government, industry, and academia.
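To make the first of these challenges concrete, a monitor might flag a deployed model whose behavior drifts away from an established baseline. The sketch below is purely illustrative and is not part of any AISIRT tooling; the function name, the use of prediction-confidence scores, and the z-score threshold are all assumptions chosen for the example.

```python
import statistics

def out_of_tolerance(baseline_scores, recent_scores, z_threshold=3.0):
    """Flag a model whose recent confidence scores drift from a baseline.

    A crude check: compare the mean of recent scores against the
    baseline mean, measured in baseline standard deviations. Real
    monitors would use richer statistics and multiple signals.
    """
    baseline_mean = statistics.fmean(baseline_scores)
    baseline_stdev = statistics.stdev(baseline_scores)
    recent_mean = statistics.fmean(recent_scores)
    z = abs(recent_mean - baseline_mean) / baseline_stdev
    return z > z_threshold

# Illustrative usage: a sharp drop in average confidence is flagged,
# while scores near the baseline are not.
baseline = [0.85, 0.90, 0.95, 0.90, 0.88, 0.92]
print(out_of_tolerance(baseline, [0.50, 0.55, 0.52]))  # drifted
print(out_of_tolerance(baseline, [0.90, 0.91]))        # within tolerance
```

A production monitor would also track input distributions and error rates, but even this simple statistic shows the shape of the problem: "out of tolerance" only has meaning relative to a recorded baseline.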
Building on these foundations at the SEI, the AISIRT fills an immediate need to ensure that AI is safe, contributes to the growth of our nation, and continues to evolve in an ethical, equitable, inclusive, and responsible way.
May 15, 2023 Blog Post
This SEI Blog post examines how machine learning systems can be subverted through adversarial machine learning, the motivations of adversaries, and what researchers are doing to mitigate these attacks.
June 10, 2021 Podcast
Allen Householder, Jonathan Spring, and Nathan VanHoudnos discuss how to manage vulnerabilities in AI/ML systems.
Adversarial ML Threat Matrix: Adversarial Tactics, Techniques, and Common Knowledge of Machine Learning
October 22, 2020 Blog Post
This SEI Blog post introduces the Adversarial ML Threat Matrix, a list of tactics used to exploit machine learning models, and guidance on defending against them.
February 04, 2020 White Paper
Feedback to the U.S. National Institute of Standards and Technology (NIST) about NIST IR 8269, a draft report detailing the proposed taxonomy and terminology of Adversarial Machine Learning.
November 22, 2019 Video
Allen D. Householder, Lujo Bauer (Carnegie Mellon University, Department of Electrical and Computer Engineering), Kathleen Carley (Carnegie Mellon School of Computer Science)
Watch as Dr. Matt Gaston, Director of the SEI Emerging Technology Center, moderates a discussion on countering adversarial operations made possible by AI.