ICO launches consultation series on generative AI and data protection

The consultation series will seek to address the concerns and questions around the development and deployment of generative AI.

The UK Information Commissioner’s Office (ICO) has recently initiated a public consultation series focused on the intersection of generative artificial intelligence (AI) and data protection law. This move aims to address the growing concerns and questions surrounding the development and deployment of generative AI technologies.

Generative AI: Data protection risks

Generative AI refers to models capable of exhibiting a broad range of general-purpose capabilities, such as the creation of music, images and videos. These capabilities are founded on the extensive datasets used to train the models. Accordingly, whilst generative AI offers huge potential benefits, the technology also raises significant data protection and privacy risks.

For example, there is widespread concern about the use of personal data to train AI tools, which increases the risks of data breaches and exploitation. 

Role of the ICO in AI regulation

The UK Government’s approach to AI regulation is set out in its White Paper (see UK Government confirms position on AI regulation). The UK has taken a flexible approach, relying on existing regulators and regulatory frameworks to govern the use of AI, with targeted regulation being considered. The ICO is the regulator responsible for overseeing data protection in the UK; in the context of AI, its subject matter expertise and experience make it an influential body in assessing AI risks and advising on frameworks to mitigate those risks.

See also by the same author: What should those in the UK, US and outside the EU know about the EU AI Act?

ICO Consultation: Key areas of focus

The ICO’s consultation series aims to provide clarity on several critical aspects of data protection law as they apply to generative AI. The ICO has released a series of chapters outlining its emerging thinking in this respect, covering the following areas:

  1. Lawful basis for training models: Determining the appropriate legal grounds for using personal data to train generative AI models, particularly when data is scraped from the web.
  2. Purpose limitation: Exploring how the principle of purpose limitation should be applied throughout the lifecycle of generative AI, from development to deployment.
  3. Accuracy principle: Establishing expectations for ensuring the accuracy of data used and generated by AI models.
  4. Data subject rights: Clarifying how data subject rights, such as access and rectification, should be upheld in the context of generative AI.

The ICO will use the responses received on these chapters to update its guidance on AI and other related products.

Consultation process and takeaways

The ICO is inviting a wide range of stakeholders to participate in this consultation process. This includes developers and users of generative AI, legal advisers, consultants, civil society groups, and other public bodies with an interest in AI technology. The input gathered from these consultations will shape the ICO’s guidance on AI and data protection.

By seeking input from a diverse array of stakeholders, the ICO aims to ensure that the development and use of generative AI are aligned with data protection laws, ultimately fostering a responsible and trustworthy AI ecosystem.

Liz Smith is an associate in the Commercial team with a focus on technology and data, and Victoria McCarron is a Commercial & Technology solicitor, at Burges Salmon.