2022 Year in Review
Codifying Test and Evaluation of Machine-Learning Aerial Object Detectors
Aerial object detection provides critical information for domains ranging from agriculture to humanitarian assistance and disaster relief, as well as intelligence, surveillance, and reconnaissance in the national security domain. Machine learning (ML) can automatically process the vast amounts of data aerial imagery produces, helping human analysts extract actionable information. However, ML-enabled object detection is a fast-changing field, and the test and evaluation of aerial object detectors has not kept pace. Determining which aerial object detectors yield accurate results is important for organizations seeking to develop, acquire, or deploy this technology.
The development of systems that incorporate machine-learned models as core components of analysts’ workflows has been the research focus of Eric Heim, a senior research scientist in the SEI’s Artificial Intelligence (AI) Division. He and his team have studied the many considerations that go into designing, producing, and evaluating such systems. In 2022, they completed the report A Brief Introduction to the Evaluation of Learned Models for Aerial Object Detection.
The SEI is uniquely suited. We have technical expertise on object detection, but we’re also familiar with organizations in domains that perform aerial imaging.
Eric Heim, Senior Research Scientist, SEI AI Division
Evaluation covers the numerous decisions that go into training a detector, including the role of data, the choices involved in design, and the thresholds used to post-process outputs. Targeted evaluation involves experiments that measure the performance of detectors in specific ways to better inform stakeholders of how detectors behave in different settings. Because of the complexity of the object detection task and its intended deployment environment, it is important to design evaluation procedures that provide test teams with quantifiable data reflecting specific performance characteristics.
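To make the role of post-processing thresholds concrete, the sketch below filters a detector’s raw outputs by a confidence threshold before they are scored. It is a minimal illustration in Python; the detections, box format, and threshold value are hypothetical and are not drawn from the report.

```python
# Minimal sketch: filtering raw detector outputs by a confidence
# threshold before evaluation. All detections and the threshold
# value here are hypothetical illustrations.

# Each detection: (bounding box as [x_min, y_min, x_max, y_max], confidence score)
raw_detections = [
    ([10, 20, 50, 80], 0.92),    # high-confidence detection
    ([15, 22, 48, 78], 0.40),    # low-confidence near-duplicate
    ([200, 10, 230, 40], 0.05),  # probable background clutter
]

CONFIDENCE_THRESHOLD = 0.5  # the choice of threshold changes measured performance

kept = [(box, score) for box, score in raw_detections
        if score >= CONFIDENCE_THRESHOLD]
print(kept)  # [([10, 20, 50, 80], 0.92)]
```

Raising or lowering such a threshold trades missed detections against false alarms, which is exactly the kind of behavior a targeted evaluation aims to quantify.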
Heim said that evaluation must focus on the requirements of the detector instead of the broad notions of performance typically seen in ML literature. “We focus on evaluation metrics, which are the computations used to measure the quality of a detector in a specific, quantifiable way. Each metric measures different characteristics, so it is important to understand what they are measuring specifically and how that relates to important requirements. We concentrate on performance characteristics associated with the quality of the detectors’ predictions.”
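To illustrate what such a metric computes, the following minimal Python sketch calculates precision and recall at a fixed intersection-over-union (IoU) threshold, one common way to quantify the quality of a detector’s predictions. The greedy matching scheme, box format, and example data are assumptions made for illustration, not the specific procedures defined in the report.

```python
# Minimal sketch of one common detection metric: precision and recall
# at a fixed IoU threshold. Illustrative only; the report defines its
# own evaluation procedures.

def iou(a, b):
    """Intersection over union of two boxes in [x_min, y_min, x_max, y_max] form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(predictions, ground_truth, iou_threshold=0.5):
    """Greedily match each predicted box to at most one ground-truth box."""
    matched, true_positives = set(), 0
    for pred in predictions:
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(pred, gt) >= iou_threshold:
                matched.add(i)
                true_positives += 1
                break
    precision = true_positives / len(predictions) if predictions else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Hypothetical example: one accurate detection, one missed object, one false alarm.
preds = [[10, 20, 50, 80], [300, 300, 340, 340]]
truth = [[12, 18, 52, 82], [100, 100, 140, 140]]
print(precision_recall(preds, truth))  # (0.5, 0.5)
```

Each such metric captures a different performance characteristic, which is why matching metrics to stakeholder requirements matters.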
The insights in the Evaluation report will be useful to organizations involved in aerial object detection. “The SEI is uniquely suited in that we have technical expertise on object detection, but we’re also familiar with organizations in domains that perform aerial imaging,” Heim said. “We’re able to understand their problems in a way not available in the larger active community, and that allows us to provide assistance tailored to their specific mission space as opposed to object detection as a whole.”
Though the report’s scope is narrow, it answers calls from the defense community to strengthen AI test and evaluation techniques, such as those in the Defense Innovation Board report AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense. Heim’s research also advances robust and secure AI, one of the three pillars of AI engineering, which the SEI aims to apply to all AI-enabled Department of Defense acquisitions.
Download A Brief Introduction to the Evaluation of Learned Models for Aerial Object Detection at https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=890521.
RESEARCHERS
Eric Heim (project lead), Tanisha Smith, John Zucca