GRIP Q&A: Tyler Thompson of Reed Smith on AI data privacy

Ahead of his talk at the AI Governance & Strategy Summit in Seattle, Tyler Thompson spoke to GRIP about AI strategy and future developments.

The evolving AI and data privacy landscape has pushed leaders to prepare for complex regulatory frameworks, safeguard their organizations against emerging risks, mitigate cybersecurity threats, and ensure ethical AI practices.

Tyler Thompson, partner in the Emerging Technologies group at global law firm Reed Smith, will be speaking at this year’s AI Governance & Strategy Summit in Seattle, a must-attend event for corporate leaders determined to drive innovation and shape the future in an era of rapid, AI-driven transformation.

Ahead of his panel, entitled The Future of Data Privacy in an AI-Driven World: Emerging Trends and Predictions, Tyler spoke to GRIP about the continuing advance of AI technology, emerging trends in data privacy, and how companies can prepare for these developments.

How do you see data privacy regulations evolving in response to the rapid advance of AI?

In the US, a key question in my mind is whether states will follow Colorado’s example and pass AI regulations separate from their privacy laws, use their current privacy laws as an enforcement mechanism on AI issues, or do both. It is also possible that states yet to pass a comprehensive privacy law may instead pass comprehensive AI/privacy hybrid laws.

Internationally, my bet is that the EU AI Act won’t have the same kind of impact we saw with the EU’s GDPR, where multiple key jurisdictions outside the EU adopted their own versions of it. The EU has rightly come under criticism for over-regulation killing member states’ chances in the AI arms race, and avoiding that outcome is a key challenge facing regulators in all jurisdictions. I doubt other countries want to follow that example, especially with countries like South Korea forging their own path.

Regardless, the biggest challenge regulators face will be the extreme dynamism of both AI technology and its use cases. Regulators who struggled to understand and coherently address privacy concerns during the spread of comprehensive privacy regulations will now struggle even more with the frenetic AI landscape.

We’re seeing a push for AI regulation globally. How will this affect existing data privacy frameworks such as the EU’s GDPR or California’s CCPA?

Many existing privacy regulations have provisions that affect the AI space, such as automated processing, impact assessments, transparency requirements, deletion rights, and more. Regulators can use these tools in their push to regulate AI, which means the current enforcement picture of those privacy laws will change, or at least expand.

More practically, it will be interesting to see if AI regulation enforcement will draw resources away from privacy enforcement. We have seen regulators struggle to both understand rapidly changing privacy implications and bring enough enforcement to create deterrence. I would expect more of the same, or perhaps worse, given the even more dynamic nature of AI.

What are your thoughts on the potential for new, AI-specific data privacy regulations, and what might those look like?

It is possible that the trends of AI regulation and privacy regulation will converge into an AI-specific privacy law. It is pure speculation on my part, but given the recent focus on training data and scraping, I suspect such a law would be narrowly tailored to prevent AI developers from training on personal information sources, scraped or otherwise.

It is more likely that a jurisdiction that does not yet have a comprehensive privacy law incorporates robust AI controls into the comprehensive privacy law it finally does pass.

How are emerging technologies like federated learning and differential privacy changing the landscape of data privacy compliance?

Both technologies are exciting for what they could mean for the world of privacy. Federated learning should extend the machine learning benefits that other industries have already realized to those with more sensitive privacy concerns. Differential privacy, in my opinion, makes tried-and-true privacy compliance tools like anonymization and pseudonymization much more useful (or, as some might say, finally actually practicable).

More importantly, both of these technologies show how the industry is adapting to protect privacy ahead of regulation. This shift is arguably driven by the market, with big players seeing privacy as a competitive differentiator and not just a regulatory “check the box.” Frankly, this is a trend that must continue, as I think regulation cannot be the only (or even main) thing ensuring privacy in the AI space.
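For readers unfamiliar with the technique, a minimal sketch of differential privacy’s textbook building block, the Laplace mechanism, gives a flavor of how it works in practice; the function name, parameters, and numbers below are illustrative assumptions, not anything prescribed in the interview.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private version of true_value.

    sensitivity: how much one individual's record can change the result
                 (1.0 for a simple count).
    epsilon:     the privacy budget; a smaller epsilon means more noise
                 and stronger privacy.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Illustrative use: release a user count under a privacy budget of 0.5.
true_count = 10_432
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Noisy, privacy-preserving count: {private_count:.0f}")
```

The calibrated noise keeps the aggregate statistic useful while making it mathematically hard to tell whether any single individual’s record is in the data, which is what makes anonymization-style tooling “finally actually practicable.”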

You’re emphasizing proactive strategies. What are the most crucial steps businesses should be taking now to future-proof their data privacy practices in the AI era?

My biggest recommendation is, first and foremost, to actually understand your data privacy practices. We often see companies that do not have a good view of what they are actually doing with data, AI or otherwise. They have not created a data inventory; there is no data mapping; there is no data minimization.

It is not realistic to think privacy or AI regulatory risk can be managed without a detailed understanding of the company’s processing and AI usage. It is important that this understanding is not just point-in-time, either. A process needs to be set up to keep that understanding up to date. While understanding regulations and technology is of course important, those are things that third parties and outside counsel can assist with. Only the company itself can provide accurate information on its data, privacy, and AI practices.
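As a purely illustrative sketch of the kind of data inventory and data mapping described here, an entry can be as simple as a structured record per system; every field name below is an assumption made for this example, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DataInventoryEntry:
    """One illustrative record in a simple data inventory / data map."""
    system: str                 # where the data lives, e.g. "CRM"
    data_categories: list[str]  # e.g. ["name", "email"]
    purpose: str                # why the data is processed
    legal_basis: str            # e.g. "consent", "contract"
    used_for_ai_training: bool  # flags AI-specific exposure
    retention_days: int         # supports data minimization

inventory = [
    DataInventoryEntry(
        system="CRM",
        data_categories=["name", "email"],
        purpose="customer support",
        legal_basis="contract",
        used_for_ai_training=False,
        retention_days=730,
    ),
]

# Keeping the inventory current rather than point-in-time: a recurring
# review could flag which systems feed AI training.
ai_exposed = [entry.system for entry in inventory if entry.used_for_ai_training]
print("Systems feeding AI training:", ai_exposed or "none")
```

Even a record this simple gives a company the view Thompson says is missing: what data it holds, why, on what basis, and where AI exposure sits.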

How can companies build ethical AI frameworks that align with both regulatory requirements and consumer expectations?

Don’t reinvent the wheel. Be practical.

There are plenty of existing AI frameworks out there that target both goals, so while it may be daunting to start down the framework path, the simplest solution is to adopt something that already exists. Do not build; adopt. You will gain efficiency, plus the ability to draw on knowledge and support from others using the same framework.

Practicality is also key. At this stage, I would rather see clients adopt a partial framework and do it well than try to be perfect and fall short. Companies should be realistic about not only their AI risk exposure but also their ability to actually implement controls.

How is AI affecting the realm of data privacy in the context of cybersecurity?

It is hard to overstate the challenges that AI is creating for cybersecurity. In the near term, generative AI can be used to make phishing and spear-phishing attacks more effective and tailored to their targets. High-quality deepfakes add a whole new dimension to these attacks. Long term, a worst-case scenario has AI eliminating the effectiveness of foundational cybersecurity technologies such as encryption. The bright side is that protections such as threat detection should also drastically improve.

The immediate need is for better training for employees and for security professionals. AI makes “old” security training measures such as red teaming, tabletops, and incident response tests more important, not less.

Finally, what are you most excited about in the intersection of AI and data privacy, and what gives you the most cause for concern?

I am excited about the amazing possibilities for humanity that AI presents! I am certainly an AI optimist overall. Conversely, my biggest cause for concern is that this early, aggressive push for broad regulation is misplaced. It can be easy to hyper-focus on the challenges AI has brought and will inevitably keep bringing, especially to privacy concerns.

However, bad regulation is often worse than no regulation. AI regulations should focus on what, in my view, is the most successful piece of privacy regulation: transparency. It is impossible to argue that informing individuals of the mechanisms and impacts of AI systems is either unreasonably difficult or completely ineffective. Allowing people to make their own choices regarding AI is ultimately the best regulation, but they need information in order to do so.