
What Could Possibly Go Wrong? Safety Analysis for AI Systems

Podcast
SEI researchers discuss their work on System Theoretic Process Analysis, or STPA, a hazard-analysis technique uniquely suitable for dealing with AI complexity when assuring AI systems.
Publisher

Software Engineering Institute

Abstract

How can you ever know whether an LLM is safe to use? Even self-hosted LLM systems are vulnerable to adversarial prompts planted on the internet, waiting to be retrieved by the system's search tools. These attacks and others exploit the complexity of even seemingly secure AI systems.

In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), David Schulker and Matthew Walsh, both senior data scientists in the SEI’s CERT Division, sit down with Thomas Scanlon, lead of the CERT Data Science Technical Program, to discuss their work on System Theoretic Process Analysis, or STPA, a hazard-analysis technique uniquely suitable for dealing with AI complexity when assuring AI systems.

About the Speakers


David Schulker

David Schulker is a senior data scientist at the SEI. His current work includes projects for DoD clients focused on Large Language Model test and evaluation, statistical modeling to support zero trust implementation, and data architecture design. His past research has focused on using econometric and statistical techniques to analyze …


Tom Scanlon

Thomas P. Scanlon is a Principal Researcher and Technical Manager in the CERT Division of the Software Engineering Institute at Carnegie Mellon University.

He leads the CERT Data Science technical program, which applies artificial intelligence, machine learning, and statistical analysis to develop solutions for cybersecurity challenges. Scanlon’s research interests include …
