2020 Year in Review
Network Simulations Evaluate AI-Powered Network Defense Products
The cybersecurity field is being challenged by attacks of ever-increasing number, speed, and scale. All the while, millions of cybersecurity positions needed to secure and monitor networks remain unfilled. To close this gap, organizations are turning to artificial intelligence (AI) to augment the cyber workforce and speed its response to attacks.
The growth of AI has produced a profitable industry of AI-powered network behavior analysis (NBA) products. Adoption of these devices is accelerating, with over 100 products now on the market. But threat actors know this too, and they are adapting their attack methods to evade AI-based defenses. Little research exists on evaluating how well AI-NBA devices really work. The SEI’s Artificial Intelligence Defense Evaluation (AIDE) project, funded by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, is developing a rigorous, standardized methodology for testing AI defenses.
“A methodology for knowing if your system is secured as deployed is essential to secure operations,” pointed out AIDE team co-lead Grant Deffenbaugh. “This is hard with AI systems because their state at any given moment is fluid, and the reasoning behind their conclusions is often opaque.”
The big win of this project was testing a product as an organization would experience it, in the network it is protecting, while it is operating.
Dr. Shing-hon Lau
Senior cybersecurity researcher, SEI CERT Division
The AIDE team created a virtual environment representing a typical corporate network, using an SEI-developed software framework called GHOSTS that simulates user behaviors to generate network traffic. This traffic was used to train two commercial AI-NBAs. The team then performed baseline testing: it emulated malicious activity, made no attempt to obfuscate it, and observed whether the AI-NBAs could detect it. Both products detected the malicious activity under these baseline conditions.
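The core idea behind this kind of traffic generation can be sketched briefly. GHOSTS itself is a full simulation framework; the sketch below is not its API, and every name in it is hypothetical. It only illustrates the concept: each simulated user follows a job-role "timeline" of actions, and the resulting stream of events stands in for the network traffic an AI-NBA would train on.

```python
import random

# Conceptual sketch only -- not the GHOSTS API. All names are hypothetical.
# Each simulated user repeats actions drawn from a role-specific timeline;
# the emitted (role, action) events stand in for training traffic.

ROLE_TIMELINES = {
    "accountant": ["open_email", "browse_intranet", "edit_spreadsheet"],
    "developer":  ["git_pull", "browse_docs", "ssh_to_buildserver"],
}

def simulate_workday(role, steps=5, rng=random):
    """Emit a sequence of (role, action) traffic events for one simulated user."""
    timeline = ROLE_TIMELINES[role]
    return [(role, rng.choice(timeline)) for _ in range(steps)]

# Traffic from many simulated users forms the corpus an AI-NBA learns from.
training_traffic = []
for role in ROLE_TIMELINES:
    training_traffic.extend(simulate_workday(role))

print(len(training_traffic))  # 10 events: 5 per simulated user
```

A real framework scripts far richer behavior (timed browsing, email, file access), but the principle is the same: realistic, role-driven activity produces the "normal" baseline the defensive product learns.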
The team then made the test harder by obfuscating the malicious activity, altering the attack traffic patterns to resemble the virtual users' normal job duties. Another test employed data poisoning: slowly introducing benign activities whose traffic resembled the attack's, teaching the AI-NBA that attack traffic is normal. Neither product detected the malicious activity in the presence of obfuscation or data poisoning, showing that a threat actor could use these methods to bypass detection on the network. AIDE team co-lead Shing-hon Lau concluded, “A sophisticated adversary could defeat the AI devices with a day or two of effort, and a state-level actor would have little trouble.”
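The data-poisoning mechanism can be illustrated with a toy model. The sketch below is not the AIDE tooling or either commercial product; it assumes a deliberately naive detector that flags traffic volumes more than three standard deviations above the learned mean. Gradually ramping benign-looking traffic toward the attack's profile shifts that baseline until the attack no longer stands out.

```python
# Toy illustration of data poisoning -- not the AIDE tooling or any real
# AI-NBA. The detector, traffic volumes, and threshold are all hypothetical.

class RunningBaseline:
    """Naive detector: flags volumes more than 3 std devs above the mean."""
    def __init__(self):
        self.values = []

    def observe(self, volume):
        self.values.append(volume)

    def is_anomalous(self, volume):
        n = len(self.values)
        mean = sum(self.values) / n
        std = (sum((v - mean) ** 2 for v in self.values) / n) ** 0.5
        return volume > mean + 3 * std

detector = RunningBaseline()

# Phase 1: normal traffic, ~100 units per interval.
for _ in range(200):
    detector.observe(100)

ATTACK_VOLUME = 500
print(detector.is_anomalous(ATTACK_VOLUME))  # True: attack stands out

# Phase 2: poisoning. Benign traffic slowly drifts from 100 toward ~500,
# so the detector learns attack-like volumes as normal.
for step in range(200):
    detector.observe(100 + 2 * step)

print(detector.is_anomalous(ATTACK_VOLUME))  # False: attack now blends in
```

Real AI-NBAs model far more than a single volume statistic, but the failure mode the AIDE team observed is analogous: a patient attacker who controls some of the "normal" traffic can move the model's notion of normal.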
This method of testing an AI defense against an actual attack on a simulated network in operation is more realistic than presenting an AI-NBA with typical synthetic traffic. Lau said, “The big win of this project was testing a product as an organization would experience it, in the network it is protecting, while it is operating.” Deffenbaugh added, “System administrators can use our approach to test a product in situ and verify that it behaves as expected.”
The results of this work could have broad applications. Using the methods developed in the SEI’s AIDE project, the Department of Defense could evaluate AI defenses to determine their suitability for deployment on its networks or those of the Defense Industrial Base. Future work will include evaluating more AI-NBAs, types of attacks, and types of networks. The AIDE team seeks collaborators to meet those goals.
To learn more, watch a presentation about AIDE from the SEI 2020 Research Review at resources.sei.cmu.edu/library/asset-view.cfm?assetid=651090.