As AI becomes increasingly popular, organisations are grappling with how to balance AI design and deployment with data privacy. The challenge is that many of AI's typical characteristics seem, at least at first glance, to be at odds with the main principles of data protection law. How, for example, can you satisfy the data minimisation principle if your AI tool decides what information it will use from a large data set? And how can you be transparent if you do not know why or how a decision (relating to a loan or job application, for example) was reached?
In this briefing, Rob Sumroy and Natalie Donovan look at the particular privacy risks AI may raise, and how the ICO is responding to those risks. Rob is global co-head of our Data Privacy hub and Natalie is PSL Counsel in our Emerging Tech team.
This briefing is part of our Regulating AI series. See our Regulating AI hub for more details. The series looks at AI issues relating to competition, IP, employment, data privacy, financial services, ESG and M&A.
This briefing was first published in October 2022 and updated in March 2023.