NIST: Adversarial Machine Learning – A Taxonomy and Terminology of Attacks and Mitigations
The report outlines a taxonomy for adversarial machine learning, defining key terms and categorizing attacks—such as poisoning, evasion, privacy breaches, and prompt injection—for both predictive and generative AI systems. It discusses the trade-offs between security and performance and highlights challenges in balancing accuracy with adversarial robustness, aiming to guide standards and practices in securing AI systems.
Summary
This NIST report explores the landscape of adversarial machine learning (AML), categorizing attacks and corresponding defenses for both traditional (predictive) and modern generative AI systems.
It establishes a taxonomy and terminology to create a common understanding of threats like data poisoning, evasion, privacy breaches, and prompt injection. The document also highlights key challenges and limitations in current AML research and mitigation strategies, emphasizing the trade-offs between security, accuracy, and other desirable AI characteristics. Ultimately, the report aims to inform standards and practices for managing the security risks associated with the rapidly evolving field of artificial intelligence.
- This report establishes a taxonomy and defines terminology for the field of Adversarial Machine Learning (AML). The aim is to create a common language within the rapidly evolving AML landscape to inform future standards and practice guides for securing AI systems.
- The report provides separate taxonomies for attacks targeting Predictive AI (PredAI) systems and Generative AI (GenAI) systems. These taxonomies categorize attacks along four dimensions: attacker goals and objectives (availability breakdown, integrity violation, privacy compromise, and, for GenAI, misuse enablement), attacker capabilities, attacker knowledge, and the stage of the machine learning lifecycle being targeted.
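The taxonomy's dimensions lend themselves to a simple data model. The sketch below is illustrative only: the field names, abbreviated value sets, and the `AMLAttack` class are our own shorthand for the report's categories, not structures defined by NIST.

```python
from dataclasses import dataclass

# Abbreviated value sets for the report's taxonomy dimensions (illustrative).
GOALS = {"availability", "integrity", "privacy", "misuse"}  # "misuse" applies to GenAI only
STAGES = {"training", "deployment"}

@dataclass
class AMLAttack:
    """One attack classified along the four taxonomy dimensions."""
    name: str
    goal: str        # attacker goal/objective
    capability: str  # e.g. "training data control", "query access"
    knowledge: str   # "white-box", "gray-box", or "black-box"
    stage: str       # ML lifecycle stage being targeted

    def __post_init__(self):
        # Reject values outside the (abbreviated) taxonomy.
        assert self.goal in GOALS and self.stage in STAGES

# Example classification: an evasion attack is an integrity violation
# mounted at deployment time, here with only black-box query access.
evasion = AMLAttack("evasion", goal="integrity", capability="query access",
                    knowledge="black-box", stage="deployment")
```

A classification like this makes the taxonomy's point concrete: the same attack class can appear in several cells (e.g. white-box vs. black-box evasion) depending on the attacker's capabilities and knowledge.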
- The report describes various AML attack classes relevant to both PredAI and GenAI, including evasion, poisoning (data and model poisoning), privacy attacks (such as data reconstruction, membership inference, and model extraction), and GenAI-specific attacks like direct and indirect prompt injection and supply chain attacks. For each attack class, the report discusses existing mitigation methods and their limitations.
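To make the evasion attack class concrete, here is a minimal sketch of a gradient-based perturbation in the style of the fast gradient sign method (FGSM). Everything here is an assumption for illustration: the toy linear classifier, its weights, and the epsilon budget are invented, and the report itself does not prescribe this specific algorithm.

```python
import numpy as np

# Hypothetical linear classifier (weights chosen for illustration only).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Sigmoid probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y, eps):
    """One FGSM-style step: nudge x in the direction that increases the loss.

    For a linear model with cross-entropy loss, the input gradient is
    (p - y) * w, so no autodiff framework is needed here.
    """
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.5, 1.0])        # a clean input the model classifies as class 1
x_adv = fgsm_perturb(x, y=1.0, eps=0.3)  # bounded perturbation: |x_adv - x| <= 0.3
```

The perturbation is small and bounded per feature, yet it reliably lowers the model's confidence in the true label, which is exactly the integrity violation the evasion attack class describes.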
- The report identifies key challenges in the field of AML. These include the inherent trade-offs between different attributes of trustworthy AI (e.g., accuracy and adversarial robustness), theoretical limitations on achieving perfect adversarial robustness, and the complexity of evaluating mitigations across a diverse and evolving threat landscape. Factors such as the scale of AI models, supply chain vulnerabilities, and multimodal capabilities further compound these challenges.
- Managing the security of AI systems requires a comprehensive approach that combines AML-specific mitigations with established cybersecurity best practices. Understanding the relationship between these fields, and identifying any security considerations unique to AI that fall outside their scope, is crucial for organizations seeking to secure their AI deployments.