2023 Year in Review

SEI Establishes First AI Security Incident Response Team

Artificial intelligence (AI) systems can perform amazing feats, but improper deployment or deliberate misuse of AI presents great risks. In 2020, SEI experts published the first machine learning (ML) vulnerability note as well as guidance for managing ML vulnerabilities. Three years later, they started evaluating a new crop of AI security incidents. Amid a rapid proliferation of AI in 2023, the SEI leveraged its expertise in cybersecurity and AI to field the first AI Security Incident Response Team (AISIRT).

AISIRT analyzes and responds to threats and security incidents involving AI and ML systems, and conducts research on incident analysis, response, and vulnerability mitigation. The team will also coordinate with Carnegie Mellon University experts to research new techniques for assuring the security of AI platforms. AISIRT's scope is broad, covering AI systems for all purposes, from consumer applications to defense, national security, and critical infrastructure.

“Our research in this emerging discipline reinforces the need for a coordination center in the AI ecosystem to help engender trust and to support advancing the safe and responsible development and adoption of AI,” said SEI Director and CEO Paul Nielsen.

Attacks on or vulnerabilities in AI systems may be reported to AISIRT at kb.cert.org/vuls/report/.