icon-carat-right menu search cmu-wordmark
2024 Year in Review

Leading AI Security Incident Response

Artificial intelligence (AI) capabilities impact all corners of society and national security, yet cybersecurity processes have largely not been integrated into AI. Vulnerabilities occur throughout the complex, disparate AI ecosystem, and AI developers and researchers lack security tools and training. Meanwhile, attackers are leveraging this weak point to target AI-enabled assets.

In response to this threat, the SEI formed the first-of-its-kind Artificial Intelligence Security Incident Response Team, or AISIRT, to formulate tools, practices, and guidelines for AI cybersecurity. AISIRT members work with the governmental, industrial, and academic cyber community to identify, analyze, and respond to threats that affect AI systems and ensure the safe and effective development and use of AI technologies as they evolve and grow.

The AISIRT’s initial focus is vulnerability management for AI systems, built on the SEI CERT/CC’s capabilities for software cybersecurity: community-based intake, analysis, coordination, and disclosure of vulnerabilities. Since the SEI established the AISIRT in November 2023, the team has analyzed 103 community-reported AI vulnerabilities.

These cases have shown that while cybersecurity and vulnerability practices inform much of AI vulnerability management, AI has introduced new challenges. The deep, layered complexity of machine learning (ML) models and data within AI architectures complicates AI system security. New threats, such as model inversion and prompt injection, appear constantly. These emerging concerns, alongside the multi-vendor, dependency-heavy nature of most AI and ML environments, make coordinated vulnerability disclosure (CVD) more difficult.

In its first year, the AISIRT has learned valuable lessons, among which are four high-level takeaways:

  • AI poses new security issues, but it also shares traditional software cybersecurity concerns.
  • Software engineering is just as important for AI systems as for traditional software.
  • Coordination and disclosure are the most important parts of CVD.
  • Fixing an AI problem is more important than deciding whether it meets the definition of a vulnerability.

The AISIRT is working to identify the challenges of CVD for AI and ML systems and call the community to action. AISIRT experts are also working with researchers across the SEI and Carnegie Mellon University to provide other capabilities that advance the security and safety of AI: incident response; vulnerability discovery; situational awareness; identification of best practices, standards, and guidelines; and establishing a community of practice.

“We are working to extend cybersecurity best practices, such as coordinated vulnerability disclosure, to AI,” said Lauren McIlvenny, who leads the AISIRT as the technical director of threat analysis in the SEI’s CERT Division. “We are also performing cutting-edge research to stay ahead of the expanding set of critical issues and attack vectors born of the rapid adoption of AI-enabled systems in consumer, commercial, and national security applications.”

In the long term, the AI community should invest in research that develops and improves processes, procedures, and mechanisms to prevent vulnerabilities from being introduced into AI systems in the first place. Such investments should develop vulnerability identification tools for AI security researchers and training for AI developers on secure development practices. The AISIRT is well positioned to support these investments and respond to incidents caused by AI vulnerabilities.

“Cybersecurity has always been a community activity,” said Greg Touhill, director of the SEI’s CERT Division. “AI vulnerabilities bring a new set of challenges to cybersecurity. That’s why expanding the cyber neighborhood watch to include AI requires the kind of expertise, research, and trusted leadership that is foundational to the AISIRT mission.”

Learn more about the AISIRT’s Lessons Learned in Coordinated Disclosure for Artificial Intelligence and Machine Learning Systems. Report vulnerabilities in AI or traditional software to the AISIRT and CERT/CC at kb.cert.org.


Photo: U.S. Navy, Mass Communication Specialist 1st Class Benjamin A. Lewis