
Using LLMs to Evaluate Code

Webcast
In this webcast, Dr. Mark Sherman summarizes the results of experiments conducted to determine whether various large language models (LLMs) could correctly identify problems in source code.
Publisher

Software Engineering Institute


Abstract

Finding and fixing weaknesses and vulnerabilities in source code has been an ongoing challenge. There is a lot of excitement about the ability of large language models (LLMs), a form of generative AI (GenAI), to produce and evaluate programs. One question related to this ability is: Do these systems help in practice? We ran experiments with various LLMs to see whether they could correctly identify problems in source code or determine that there were no problems. This webcast provides background on our methods and a summary of our results.

What Will Attendees Learn?

  • how well LLMs can evaluate source code
  • evolution of capability as new LLMs are released
  • how to address potential gaps in capability

About the Speaker

Headshot of Mark Sherman.

Mark Sherman

Dr. Mark Sherman is the Technical Director of the Cyber Security Foundations group in the CERT® Division of the Carnegie Mellon University Software Engineering Institute (SEI). His team focuses on foundational research on the life cycle for building secure software and on data-driven analysis of cybersecurity. Prior to joining CERT, …
