
SEI Podcasts


Jun 4, 2021

The robustness and security of artificial intelligence, and specifically machine learning (ML), are of vital importance. Yet ML systems are vulnerable to adversarial attacks, in which an attacker attempts to make the ML system learn the wrong thing (data poisoning), do the wrong thing (evasion attacks), or reveal the wrong thing (model inversion). Although there are several efforts to provide detailed taxonomies of the kinds of attacks that can be launched against a machine learning system, none are organized around operational concerns. In this podcast, Jonathan Spring, Nathan VanHoudnos, and Allen Householder, all researchers at the Carnegie Mellon University Software Engineering Institute, discuss the management of vulnerabilities in ML systems as well as the Adversarial ML Threat Matrix, which aims to close this gap between academic taxonomies and operational concerns.
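
To make one of these attack classes concrete, the sketch below shows an evasion attack using the fast gradient sign method (FGSM): a small, bounded perturbation is added to an input so that an unmodified model misclassifies it at inference time. This is an illustrative assumption on our part, not material from the episode; the PyTorch model, the labeled input (x, y), and the epsilon budget are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Craft an adversarial example that makes `model` 'do the wrong thing'.

    model: a differentiable classifier (assumed PyTorch nn.Module)
    x, y:  a correctly classified input batch and its true labels
    epsilon: maximum per-pixel perturbation (illustrative value)
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.detach().clamp(0.0, 1.0)
```

Data poisoning and model inversion differ in where they act (training data and model outputs, respectively), but the operational concern discussed in the episode is the same: how such weaknesses should be reported, tracked, and managed once a system is deployed.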