
Taxonomy of Adversarial Machine Learning


Last month, the National Institute of Standards and Technology (NIST) released its Draft NISTIR 8269, A Taxonomy and Terminology of Adversarial Machine Learning. The draft is intended to assist teachers and practitioners in developing a common lexicon for Adversarial Machine Learning (AML) and to set standards and best practices for securing Artificial Intelligence (AI) systems against attackers.

AML is, and will increasingly be, a tremendous challenge in securing AI systems. Developing a taxonomy and terminology of AML is a step toward securing AI applications, especially against adversarial manipulations of Machine Learning (ML). Although AI also includes various knowledge-based systems, the data-driven approach of ML introduces additional security challenges in both the training and testing (inference) phases of system operation.
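To make the testing (inference) phase threat concrete, here is a minimal sketch of an evasion attack in the spirit of the fast gradient sign method. The linear model, weights, and inputs are all illustrative assumptions, not taken from the NIST draft: a small perturbation, bounded in each coordinate, flips a classifier's prediction on an otherwise correctly classified input.

```python
import numpy as np

# Toy linear classifier: score = w.x + b, predicted class 1 if score > 0.
# Weights and bias are illustrative, not from any real system.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input the model classifies as class 1
x = np.array([2.0, 0.2, 0.3])

# Evasion (test-time) attack: step each coordinate against the sign of
# the score's gradient w.r.t. x (for a linear model, that gradient is w),
# keeping the perturbation small in the L-infinity norm.
eps = 1.5
x_adv = x - eps * np.sign(w)

# The adversarial input is within eps of x per coordinate,
# yet the prediction flips from 1 to 0.
print(predict(x), predict(x_adv))
```

In a real deep network the gradient would be computed by backpropagation rather than read off the weights, but the structure of the attack is the same, which is why the draft treats inference-time evasion as a distinct attack class from training-time poisoning.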

AML is concerned with the design of ML algorithms that can resist security challenges, the study of the capabilities of attackers, and the understanding of attack consequences. As a result, NIST’s taxonomy is organized around three concepts that inform a risk assessment of AI systems: attacks, defenses, and consequences.

Differing from previous surveys, the draft NISTIR includes consequences as a separate dimension of risk, because the consequences of AML attacks depend on both the attacks and the defenses in place, and may not be consistent with the original intent of an attacker. Additionally, with its new taxonomy, NIST demonstrates its continued interest in playing a large role in setting standards and best practices for managing AI security.

The original article was published in Lexology.


