
This NIST Trustworthy and Responsible AI Report Develops a Taxonomy of Concepts and Defines Terminology in the Field of Adversarial Machine Learning (AML)


Artificial intelligence (AI) systems are expanding and advancing at a significant pace. AI systems are commonly divided into two main categories: Predictive AI and Generative AI. The well-known Large Language Models (LLMs), which have recently garnered massive attention, are prominent examples of Generative AI. While Generative AI creates original content, Predictive AI concentrates on making predictions from data.

AI systems need to operate safely, reliably, and resiliently, as they are becoming integral components of almost every significant industry. The U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework and its AI Trustworthiness taxonomy identify these operational characteristics as necessary for trustworthy AI.

In a recent study, a team of researchers from NIST's Trustworthy and Responsible AI program has shared their goal of advancing the field of Adversarial Machine Learning (AML) by creating a thorough taxonomy of concepts and providing definitions for pertinent terms. The taxonomy is structured as a conceptual hierarchy and was created by carefully analyzing the existing body of AML literature.

The hierarchy covers the main categories of Machine Learning (ML) techniques, the different phases of the attack lifecycle, the attacker's goals and objectives, and the attacker's capabilities and knowledge of the learning process. Along with outlining the taxonomy, the study offers strategies for mitigating and managing the effects of AML attacks.
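To make these taxonomy axes concrete, the sketch below models an attack classified along four dimensions similar to those described above. This is purely illustrative: the class, enum names, and example values are this article's own shorthand, not NIST's official schema or category names.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative axes only -- member names are examples drawn from
# common AML discussions, not the report's official terminology.
class AISystem(Enum):
    PREDICTIVE = "predictive"
    GENERATIVE = "generative"

class LifecyclePhase(Enum):
    TRAINING = "training"
    DEPLOYMENT = "deployment"

class AttackerGoal(Enum):
    AVAILABILITY = "availability breakdown"
    INTEGRITY = "integrity violation"
    PRIVACY = "privacy compromise"

class AttackerKnowledge(Enum):
    WHITE_BOX = "full knowledge of model and data"
    GRAY_BOX = "partial knowledge"
    BLACK_BOX = "query access only"

@dataclass(frozen=True)
class AMLAttack:
    """Places a single attack on the four taxonomy axes."""
    name: str
    system: AISystem
    phase: LifecyclePhase
    goal: AttackerGoal
    knowledge: AttackerKnowledge

# Example: training-data poisoning, classified along the axes.
poisoning = AMLAttack(
    name="training-data poisoning",
    system=AISystem.PREDICTIVE,
    phase=LifecyclePhase.TRAINING,
    goal=AttackerGoal.INTEGRITY,
    knowledge=AttackerKnowledge.GRAY_BOX,
)
```

A structure like this shows why a shared taxonomy matters: two analysts describing the same attack can compare classifications axis by axis instead of relying on ad hoc labels.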

The team notes that AML problems are dynamic and identifies unresolved issues that need to be considered at every stage of AI system development. The goal is to provide a thorough resource that helps shape future practice guides and standards for evaluating and managing the security of AI systems.

The terminology used in the research paper aligns with the existing AML literature, and a glossary explaining important topics related to AI system security is also provided. The team has shared that establishing a common language and understanding within the AML domain is the ultimate purpose of the integrated taxonomy and nomenclature. In doing so, the study supports the development of future norms and standards, promoting a coordinated and informed approach to the security challenges posed by the quickly changing AML landscape.

Click here to read the original article published by Marktechpost Media.

