
CyLab Researchers Develop Taxonomy for AI Privacy Risks
Privacy is paramount when developing ethical artificial intelligence technologies. However, as advances in AI outpace regulation, the responsibility for reducing privacy risks in goods and services that incorporate these technologies falls primarily on developers.

It is vital to define AI-driven privacy risks so developers can address them early in the research and development process. While a well-established, research-driven privacy taxonomy (a structured classification of privacy risks) already exists, groundbreaking AI advances will likely bring unprecedented privacy risks with them.

New research from Carnegie Mellon University's Human-Computer Interaction Institute (HCII) in the School of Computer Science aims to mitigate these risks. "There's a lot of hype about what risks AI does or doesn't pose and what it can or can't do," said Sauvik Das, an assistant professor in the HCII. "But there's not a definitive resource on how modern advances in AI change privacy risks in some meaningful way, if at all."

In their paper, "Deep Fakes, Phrenology, Surveillance and More! A Taxonomy of AI Privacy Risks," Das and a team of researchers seek to build the foundation for this definitive resource. The research team constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. The team aimed to codify how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or did not meaningfully alter known risks.

Das and the team used previous work in this area as a baseline taxonomy of traditional privacy risks predating modern advances in AI. They then cross-referenced the documented AI privacy incidents to see how, and if, they fit within the previous taxonomy. The team identified 12 high-level privacy risks that AI technologies created or exacerbated.
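The coding process described above can be illustrated with a small sketch: each documented incident is labeled with a risk category from the baseline taxonomy and a judgment of whether AI created, exacerbated, or did not meaningfully alter that risk, then the labels are tallied. The categories, incidents, and field names here are hypothetical examples, not the paper's actual data or method.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Incident:
    """One documented privacy incident (illustrative fields, not the paper's schema)."""
    description: str
    risk: str        # category from the baseline privacy taxonomy
    ai_effect: str   # "created" | "exacerbated" | "unchanged"

# Hypothetical sample incidents for illustration only.
incidents = [
    Incident("face recognition used to track protesters", "surveillance", "exacerbated"),
    Incident("deepfake video impersonating a public figure", "distortion", "created"),
    Incident("data broker resells purchase history", "secondary use", "unchanged"),
]

def summarize(incidents):
    """Tally (risk category, AI effect) pairs, mirroring the cross-referencing step."""
    return Counter((i.risk, i.ai_effect) for i in incidents)

summary = summarize(incidents)
```

Tallies like these would show which baseline categories AI merely inherits and which it reshapes, which is the kind of evidence the taxonomy's 12 high-level risks are built from.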

"Our hope is that this taxonomy gives practitioners a clear roadmap of the types of privacy risks that AI, specifically, can entail," Das said. Das and the team will present their findings at the Association for Computing Machinery's (ACM's) Conference on Human Factors in Computing Systems (CHI 2024) in May in Honolulu.

The original article was published by Carnegie Mellon University.


