The UC Berkeley Center for Long-Term Cybersecurity (CLTC) has published a paper to help organizations develop and deploy trustworthy Artificial Intelligence (AI) technologies. The paper complements the newly released AI Risk Management Framework (AI RMF), developed by the U.S. National Institute of Standards and Technology (NIST).
Issued as part of the CLTC White Paper Series, the report is the result of a yearlong collaboration with AI researchers and a multistakeholder group of experts. Jessica Newman, Director of CLTC’s AI Security Initiative (AISI) and Co-Director of the UC Berkeley AI Policy Hub, developed the taxonomy drawing on an array of papers and policy documents, extensive interviews and feedback, and an expert workshop that CLTC convened in July 2022.
The paper can help teams and organizations assess how they incorporate trustworthiness properties into their AI risk management processes at different phases of the AI lifecycle. Beyond the NIST AI RMF, it connects these properties to select international AI standards, such as those issued by the Organisation for Economic Co-operation and Development (OECD), the European Commission, and the European Telecommunications Standards Institute (ETSI).