Science and Research Content

Shaping the Future: A Dynamic Taxonomy for AI Privacy Risks


While there is broad consensus that AI should be regulated, and regulatory efforts such as the highly anticipated EU AI Act are gaining momentum, AI privacy risks remain largely uncharted waters, leaving both AI practitioners and privacy professionals uneasy. AI technologies are advancing faster than regulation, and the first step toward a solution is naming things and mapping the terrain.

In "Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks," a research group from Carnegie Mellon University and the University of Oxford amassed a dataset of 321 fact-checked AI incidents from 2013 to 2023, focusing on those involving privacy issues. Their taxonomy of AI privacy risks is built on a well-established foundation: Daniel Solove's seminal 2006 paper, "A Taxonomy of Privacy."

By combining a regulation-agnostic approach with real-world, fact-checked incidents, the authors curated a set of 12 distinct risks, adapted from Solove's original 16, while avoiding speculative or theoretical scenarios. This is a strong starting point for privacy and AI governance professionals building risk assessment models for AI systems (a minimal code sketch that puts the categories to work follows the list below). The following is a summary of the 12 risks:

• Surveillance: AI exacerbates surveillance risks by increasing the scale and ubiquity of personal data collection.

• Identification: AI technologies enable automated identity linking across various data sources, increasing risks related to personal identity exposure.

• Aggregation: AI combines various pieces of data about a person to make inferences, creating risks of privacy invasion.

• Phrenology and physiognomy: AI infers personality or social attributes from physical characteristics, a new risk category not in Solove's taxonomy.

• Secondary use: AI exacerbates the use of personal data for purposes other than originally intended through repurposing data.

• Exclusion: AI worsens the failure to inform users, or to give them control over how their data is used, through opaque data practices.

• Insecurity: AI's data requirements and storage practices risk data leaks and improper access.

• Exposure: AI can reveal sensitive information, such as through generative AI techniques.

• Distortion: AI’s ability to generate realistic but fake content heightens the spread of false or misleading information.

• Disclosure: AI can cause improper sharing of data when it infers additional sensitive information from raw data.

• Increased accessibility: AI makes sensitive information more accessible to a wider audience than intended.

• Intrusion: AI technologies invade personal space or solitude, often through surveillance measures.
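
For teams that want to turn the taxonomy into a working checklist, the 12 categories map naturally onto a simple data structure. The Python sketch below is one illustrative way to encode them and record a per-system assessment; all class, field, and example names here are our own assumptions for demonstration, not identifiers from the paper.

```python
from dataclasses import dataclass, field
from enum import Enum


class AIPrivacyRisk(Enum):
    """The 12 risk categories summarized above (enum names are our shorthand)."""
    SURVEILLANCE = "Surveillance"
    IDENTIFICATION = "Identification"
    AGGREGATION = "Aggregation"
    PHRENOLOGY_PHYSIOGNOMY = "Phrenology and physiognomy"
    SECONDARY_USE = "Secondary use"
    EXCLUSION = "Exclusion"
    INSECURITY = "Insecurity"
    EXPOSURE = "Exposure"
    DISTORTION = "Distortion"
    DISCLOSURE = "Disclosure"
    INCREASED_ACCESSIBILITY = "Increased accessibility"
    INTRUSION = "Intrusion"


@dataclass
class RiskAssessment:
    """A minimal record tying one AI system to the risks flagged for it."""
    system_name: str
    flagged: dict[AIPrivacyRisk, str] = field(default_factory=dict)

    def flag(self, risk: AIPrivacyRisk, rationale: str) -> None:
        # Record (or update) the rationale for flagging a given risk.
        self.flagged[risk] = rationale

    def report(self) -> str:
        # Render a short, human-readable assessment summary.
        lines = [f"Privacy risk assessment: {self.system_name}"]
        for risk, rationale in self.flagged.items():
            lines.append(f"- {risk.value}: {rationale}")
        return "\n".join(lines)


# Hypothetical example: assessing a photo-tagging feature.
assessment = RiskAssessment("photo-tagging service")
assessment.flag(AIPrivacyRisk.IDENTIFICATION,
                "Links faces across user uploads to a single identity.")
assessment.flag(AIPrivacyRisk.AGGREGATION,
                "Combines location metadata with social-graph inferences.")
print(assessment.report())
```

A real assessment model would add severity scores, mitigations, and review workflow on top of this skeleton; the point of the sketch is simply that a fixed, named set of risk categories makes such tooling straightforward to build.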

The taxonomy of AI privacy risks is not static — it's a living framework that must evolve with the AI landscape. Effective governance requires continuous adaptation, informed by collaborative research and cross-sector dialogue.

Read the original article, published by the International Association of Privacy Professionals (IAPP).


