Transcript: Duane Pozza podcast

We discussed regulation on both sides of the Atlantic.

This is a transcript of the podcast episode Attorney Duane Pozza on AI regulation, a conversation between attorney Duane Pozza and GRIP’s senior reporter Carmen Cracknell.

[INTRO]

Carmen Cracknell: Hello listeners. From Biden’s executive order last October to the EU’s proposed AI Act, there’s a lot going on right now in the world of AI regulation and enforcement. Today we’re joined on the GRIP Podcast by Duane Pozza, an AI and emerging technology attorney at Wiley Rein, to discuss this and more. It’s great to have you here, Duane. Could you just start by talking about yourself and your background?

Duane Pozza: Well, thanks for having me Carmen. I’m happy to join today. So my background is that I have sort of always been a tech lawyer. Even in law school, I was on the tech law review and have always been interested in sort of what’s happening next in technology and the law.

Somewhat recently, around 2012, I went to the US Federal Trade Commission or the FTC from a law firm to focus on mobile technology, which at the time was the hot new thing and something that the government was very focused on in terms of thinking about how to approach it, how to potentially regulate it, potential issues that could happen.

And over time at the FTC, I worked more on financial technology, or FinTech, and then eventually on AI and what folks thought of as big data at the time. So when I was at the FTC, we did, for example, a forum on AI and consumer protection issues in 2017. And then after that, I came back out to the private sector, and I’ve been at a law firm (Wiley) since 2018. And part of my focus has been on emerging tech and AI.

Carmen Cracknell: You kind of addressed this a bit just now, but what did you learn from your time at the FTC that you’ve carried over into private practice?

Duane Pozza: Yeah, so I learned a lot at the FTC. I think, you know, particularly relevant for this discussion is that I was really struck by how folks in government look at new technology and think a lot about whether or not to try to address potential risks or other issues with them in advance, whether that be through regulation or an enforcement approach, or alternatively, whether or not to try to let the tech, you know, shake out a bit and see if there are potential issues to address.

So it’s a tough balancing act. It requires definitely humility around what you do or do not know about what’s around the corner on tech.

But also, I think, importantly, a pretty broad scope to look at the potential benefits of the technology, right? And who can be helped by, you know, exciting new advances in tech and making sure that, you know, if there is, for example, a regulatory approach, it’s not cutting off those benefits.

Carmen Cracknell: So kind of keeping the consumer sort of at the forefront and taking a preemptive approach was how things were done at the FTC.

Duane Pozza: Yeah, and the FTC certainly has a process in place to be deliberate about thinking about what’s best for the consumer. It is a judgment call, when dealing with quickly moving technology, how to approach it. But keeping the consumer’s interest, and also fair competition, front and center is an important part of that.

Carmen Cracknell: So right now, what do you think are the biggest challenges for clients and companies trying to comply with the latest regulations in tech and AI?

Duane Pozza: Yeah, good question. I think right now it’s just a fire hose of activity. You know, there was a period of time where there were a few agencies working on AI, super interesting stuff, but it wasn’t moving at the speed it is now.

The executive order from the Biden administration in October 2023, which we can talk about more, really supercharged what is going on on the US side in terms of different agencies. There are deadlines that will hit throughout this year. Our tracker is over two dozen pages long, just tracking the deadlines.

So the challenge right now is cutting through the noise. It’s basically, you know, from a client perspective, you know, what’s my business doing with AI? And then what are the kinds of regulatory developments or even litigation developments that are going to affect it and trying to zero in on what those are. And it’s different for, you know, media companies, government contractors, fintechs, you know, depending on the vertical companies are in, they might need to pay attention to sort of different parts of what’s going on in the government.

Carmen Cracknell: Yeah. And you mentioned these deadlines going forward. Is that a new thing in this area of regulation kind of imposing these strict deadlines?

Duane Pozza: Yeah, it really has been. The power of the executive order is that the administration could set very ambitious deadlines for different agencies to do different kinds of things on AI. Some of them are regulatory, some are research projects on certain aspects of it, and some are standard-setting.

But all of them, I think, have the goal of influencing how the private sector deals with AI. And they’re pretty tight. Some are just three- or six-month deadlines, and they all roll up within a year of the order, so October of this year.

Carmen Cracknell: So I know you’ve spoken a bit about data bias. Everyone I speak to about AI has highlighted this as a huge issue. What’s being done in that area specifically to combat it?

Duane Pozza: Yeah, so data bias is an issue for both companies that are developing and deploying their own AI models and companies that are looking at AI tools out in the marketplace to use for their own purposes. It’s an issue they have to deal with. And I would say bias is one of a bucket of issues; there are others, like privacy, accuracy, and accountability, that companies also have to navigate.

But just focusing on bias: at bottom, the issue is that biases in the data sets used to train AI models can show up in biased results. Basically, if the data has some sort of bias in it, which can reflect human bias as well, then there are going to be issues with the results, and the bias can show up in ways that are detrimental or create legal risks.

So for folks who are building AI, or who control the data sets being used for AI, honestly, I think it involves working with data scientists to understand the limitations of the data. But it’s not just a technical problem.

Ultimately, you need lawyers, compliance folks, or business folks in the room to understand what the potential biases could be and the outcomes you’re trying to avoid, which in many cases are legal prohibitions on discrimination or bias. And it’s a little bit different for the many companies that are looking to leverage AI rather than develop it themselves.

But they do need to ask questions, I think, around what the data set that produced it looks like and what biases might be involved in it. There’s a whole set of questions around IP as well, making sure that when they use it, they’re not running afoul of legal risks.

And one thing I’m watching on the regulatory side is whether there are going to be requirements around third-party audits or assessments that would include things like testing for bias, which might be required or expected so that folks using certain AI models can look at something more objective. I’m not saying it’s a good or bad idea from a regulatory perspective, but it is the kind of thing that’s being discussed.
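To make the idea of bias testing a little more concrete, here is a minimal sketch in Python of one common check: a disparate-impact ratio comparing a model’s favorable-outcome rates across groups. The data, the group names, and the 0.8 threshold (a heuristic borrowed from the “four-fifths rule” used in US employment-discrimination analysis) are illustrative assumptions, not requirements of any regulation discussed here.

```python
# Minimal sketch of a disparate-impact check on model outcomes.
# Data, group labels, and the 0.8 threshold are illustrative assumptions.

def selection_rates(outcomes):
    """Favorable-outcome rate per group.

    outcomes: dict mapping group name -> list of 0/1 model decisions,
    where 1 is the favorable outcome (e.g. loan approved).
    """
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def disparate_impact_ratios(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {group: rate / ref for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical model decisions for two groups.
    outcomes = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% favorable
    }
    for group, ratio in disparate_impact_ratios(outcomes, "group_a").items():
        flag = "potential disparate impact" if ratio < 0.8 else "ok"
        print(f"{group}: ratio vs. group_a = {ratio:.2f} ({flag})")
```

A real audit or assessment regime of the kind being discussed would of course go well beyond a single metric, but a check of this shape is often one component.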

Carmen Cracknell: And for these companies that are leveraging AI, trust is a huge issue with consumers. How do you think that level of trust in companies can be improved?

Duane Pozza: Yeah, so there’s a lot of discussion around what is called trustworthy AI, as a general rubric of how AI should be approached. You also see it discussed in terms of like responsible AI. So, you know, I think really, you’ve hit on this idea of trust as being something that’s driving the discussion around how AI should be regulated or deployed.

And I think there are a number of core principles that folks talk about when they’re describing what trustworthy AI looks like. And that includes things like ensuring that there are adequate privacy protections.

AI safety is a big issue: making sure it can’t be hacked, doesn’t produce unintended outcomes that cause problems, and is secure. And another element of it, which I think is pretty critical for trust, is human oversight. So there’s also this sense among regulators and policymakers that, at least when AI is used for certain things, there should be an adequate level of human oversight so that the AI is not just making the decision itself.

And of course, humans make wrong decisions too. So it’s not just that. And it can depend on what the AI is being used for. I think folks are really focused on when it’s making decisions that have a substantial impact on individuals.

But there is an expectation among regulators, and certainly some have said it, that if something goes wrong in a way that creates a risk or could harm individuals, the answer is not going to be “because the AI did it,” right? They’re going to ask what the humans behind the AI were doing, and whether or not they were overseeing it.

Carmen Cracknell: So humans still need to take accountability for what they’re programming. Beyond data bias, with AI currently being used for commercial means, what are the other main ethical problems involved?

Duane Pozza: So it’s a good question. You know, I think of AI as a tool, and tools can be used for good or bad, right? So, you know, at a high level, you see AI technology being used for things like voice cloning. It’s being used by, you know, fraudsters to try to trick people, right? So that’s a negative use of it, all the way to, you know, AI being used for quick translation, right?

So a similar kind of technology can be used to very quickly translate things, for individuals’ benefit, that are not otherwise translated. Thinking about it from an ethical perspective, for companies that are looking to use and deploy it, it’s really about having an intentional framework for how it’s deployed.

I mean, obviously, companies are not going to use it for fraudulent purposes, but they want to make sure they understand that it can be used in a variety of different ways, that there are different risks depending on how it’s used, and try to put a framework in place. I’ve seen companies do it from the standpoint of an ethical code.

I’ve seen other companies do it just from the standpoint of best practices or a code of conduct. There are different ways it can be mapped out, but the key element is being intentional and proactive about how AI gets deployed within a company.

Carmen Cracknell: Yeah, so we touched earlier on the executive order from last October. Could you talk in a bit more detail about that and also about how things are being done in other jurisdictions?

Duane Pozza: So that’s a good question. The executive order was massive, just one of the largest, if not the largest, of these executive orders we’ve seen. And it put into place, you know, work throughout the government, as I mentioned. What’s interesting is, you know, this is happening in parallel with the European Union in particular.

The EU is moving forward with what it calls the AI Act, a very significant piece of legislation whose final details are currently being finalized. That is a much more regulatory approach than what you have in the US right now.

It will include prohibitions on certain kinds of uses of AI and more extensive regulations on others. And I think of the EU AI Act as similar to the GDPR in privacy. So basically Europe is leading the regulatory charge, and companies doing business across jurisdictions will need to take account of what Europe is going to require, and then also try to peek around the corner to see what’s coming down the pike in the US, in addition to other places like the UK, which I think are really charting their own separate course.

So there’ll be a lot over the next year, particularly as the AI Act becomes more operational through a series of staggered deadlines. So that’s, I think, pretty critical. And one more thing I’ll mention, which we can talk more about: within the US, it’s not just the executive order. The state to watch is California.

California, both on the legislative side and through its privacy agency, which is relatively new, is looking at regulations that would directly affect AI. And this is also, I think, similar to what happened in privacy, where California took the lead on privacy regulations with the CCPA, which took effect in 2020. And you could see a potential repeat of that in AI, where even if the federal government is moving, which it is now, California might try to jump ahead.

Carmen Cracknell: Yeah, is that because the tech hub of the US is in California?

Duane Pozza: Yeah, I think a lot of it is that California has a history of trying to get ahead on sort of tech regulation. And certainly, California is just very attuned to technological developments. And what’s interesting is you actually see a whole bunch of other states that are potentially going to try to follow the model or consider their own approaches.

And that sort of patchwork approach is pretty dangerous. But I will say that in California, the existing privacy law does have some hooks in it to do rulemakings on impact assessments and notice requirements around AI. So they’ve indicated that they’ll do rulemaking over the next year.

Carmen Cracknell: So could different states deviate quite a bit in how they approach AI regulation?

Duane Pozza: Yeah, so that’s a big concern. It’s a concern that has already manifested itself in privacy, where I do a lot of work: you have this patchwork of state laws with different requirements. What’s interesting is that the kinds of proposals on the table are varied, but one of the key ones is audit or assessment requirements, which I mentioned earlier. These could basically be requirements for risk assessments or other kinds of advance impact assessments of AI, similar to a model you already see in privacy or cybersecurity. And companies could see those on the horizon in California, and potentially other states as well.

Carmen Cracknell: So looking ahead just to this year, 2024, we have sort of touched on this. But what are the AI trends? And what are your predictions for regulation, legislation and deployment?

Duane Pozza: Sure, predictions are always tough. But I’ll sketch it out a little. I think, number one, look out for the states; the states are going to be active. I mentioned California: at a minimum, I think they have a hook to do things on the regulatory side, even if nothing passes out of the legislature.

So I think we’ll see more action over the coming year. Second, I think that whether or not any of these sort of audit or assessment requirements get passed, I think companies will increasingly think about doing their own kind of internal assessments around AI risk management.

There are some great examples actually coming out of an agency called NIST, in the US Department of Commerce, which has put out a risk management framework that is meant to be voluntary and flexible for companies to use. And I’ve seen a lot of interest in that. They’ll be doing more work on it throughout the year, particularly on generative AI.

So I think you’ll see more companies thinking, well, I’ve got to put in place some kind of framework like that, try to get ahead of where this is going. And then the last piece is, I would keep an eye on enforcement.

So we’ve talked a lot about sort of new regulations that might apply to AI. But, you know, there’s a lot of existing laws out there, including anti-discrimination laws and laws against deceptive practices, for example. And, you know, the FTC, for example, my old agency has been very forthright in saying that they’re going to look around and apply these existing laws. And if they see potential issues with AI, they will approach them under their existing authority.

So I wouldn’t be surprised if we see more clues that FTC or other kinds of enforcement agencies are poking around AI a little bit and potentially, you know, moving forward on the enforcement side.

Carmen Cracknell: Well, we’ve covered quite a lot there. We still have a bit of time. So if you have anything else that I’ve missed or you think is important, feel free.

Duane Pozza: No, I think it’s an exciting area. It’s important to realize, I think, that AI is not a monolithic technology, right? It can be used for all kinds of different things. And a key challenge for companies is figuring out how it’s actually being deployed. So this year, there was a lot of incoming interest on generative AI, which is used to generate different kinds of content, whether text, video, or images.

And that poses its own set of interesting challenges and risks. Folks want to use it to help with marketing, for example, and that’s its own area of potential risks, including IP risks.

Some AI is used in a more predictive way, for things like analytics, cybersecurity defense, fraud defense, or even financial services, where I do a lot of my work. And those have their own legal risks, particularly regulatory ones. So one thing I’ve noticed when talking to folks who are thinking about AI in their organization is that they have to start by asking: how is it already being used?

How does my business want to use it? What are the good ways in which it can help the business? And start from there.

And then, I think, try to sketch out: if it’s going to be used in these different kinds of ways, what are the risks I need to look out for, and what are the policies I can put into place? It differs depending on how AI is being used. But that’s also the fun of it.

Carmen Cracknell: Yeah, great. Well, thank you so much for chatting to me today, Duane.

Duane Pozza: Yeah, thanks for having me.
