Using Popular LLMs for Static Analysis Alert Adjudication: For the 2025 DoW AI/ML Technical Exchange Meeting

Presentation

On January 15, 2026, Lori Flynn and Will Klieber presented this session at the Department of War (DoW) Artificial Intelligence/Machine Learning (AI/ML) Technical Exchange Meeting, in the Security and Safety track. They discussed work developed in the Line-funded research project “Using LLMs to Adjudicate Static-Analysis Results.”
Publisher

Software Engineering Institute

Abstract

Software analysts use static analysis as a standard method to evaluate source code for potential vulnerabilities, but the volume of findings is often too large to review in its entirety. Large language models (LLMs) show promising initial results for automating alert adjudication and generating rationales. This capability has the potential to enable more secure code, support mission effectiveness, and reduce support costs. This presentation discusses techniques for using LLMs to process static analysis output, the initial tooling we developed, our experimental results, related work by others, and directions for further development.
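To make the idea concrete, the sketch below shows one common pattern for LLM-based alert adjudication: package a single static-analysis alert and its code context into a structured prompt, ask the model for a true-positive/false-positive verdict plus a rationale, and parse the reply. This is an illustrative sketch, not the presenters' tooling; the `query_llm` function is a stub standing in for a real LLM API call, and all field names are assumptions.

```python
import json

def build_prompt(alert: dict, snippet: str) -> str:
    """Format one static-analysis alert plus its code context as an
    adjudication prompt that requests a machine-parsable JSON reply."""
    return (
        "You are reviewing a static-analysis alert.\n"
        f"Rule: {alert['rule']}\n"
        f"Location: {alert['file']}:{alert['line']}\n"
        f"Message: {alert['message']}\n\n"
        f"Code:\n{snippet}\n\n"
        'Reply in JSON: {"verdict": "TP" or "FP", "rationale": "..."}'
    )

def query_llm(prompt: str) -> str:
    """Stub for an LLM API call; a real implementation would send the
    prompt to a model endpoint and return its text reply."""
    return json.dumps(
        {"verdict": "FP", "rationale": "The index is bounds-checked."}
    )

def adjudicate(alert: dict, snippet: str) -> dict:
    """Adjudicate one alert: build the prompt, query the model, and
    validate the structured verdict before returning it."""
    reply = json.loads(query_llm(build_prompt(alert, snippet)))
    if reply.get("verdict") not in ("TP", "FP"):
        raise ValueError("unparseable verdict from model")
    return reply

# Hypothetical alert, as a flagged static-analysis finding might appear.
alert = {"rule": "CWE-125", "file": "buf.c", "line": 42,
         "message": "possible out-of-bounds read"}
result = adjudicate(alert, "if (i < n) { x = a[i]; }")
print(result["verdict"])  # the stubbed model always answers FP
```

In practice, the parsing and validation step matters as much as the prompt: requiring a constrained JSON reply makes verdicts auditable and lets the rationale be stored alongside the alert for human review.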