This is a transcript of the podcast “Matt Worsfold on global AI, the EU AI Act, and the FCA AI sprint” between Commissioning Editor Jean Hurley and Matt Worsfold, partner in Ashurst’s Risk practice.
Jean Hurley: Hello, listeners! I’m Jean Hurley, Commissioning Editor at GRIP. Today on the GRIP Podcast we’re joined by Matt Worsfold, partner in Ashurst’s Risk practice.
Jean Hurley: Matt, welcome to the GRIP Podcast.
Matt Worsfold: Thanks so much for having me.
Jean Hurley: For our listeners, could you give us a brief overview of your role at Ashurst Risk Advisory? And what sparked your interest in the intersection of data, regulation, and AI?
Matt Worsfold: Yeah, absolutely. So, I’m a partner in Ashurst Risk Advisory. For those who aren’t familiar, Risk Advisory is the consulting arm of Ashurst, the law firm. And for me in particular, I run the data analytics practice here in London.
Matt Worsfold: We work alongside our lawyers to support our clients with mainly their use and governance of data, in order to help them manage risks, meet regulatory requirements, and ensure compliance.
I started out as a data analyst, so very hands-on with data, coding, and development. But as the use of data has become increasingly regulated and increasingly scrutinized, coupled with the significant technology advancements in analytics and AI, it’s now become much more important to support our clients’ businesses to better realize the value of data and AI, and the investments they’re making in them, by managing the risks, ensuring good governance, and enabling compliance. And that’s what’s really brought together a couple of passions of mine: as I say, that background in data analytics with the hands-on experience, and then that broader work around risk management, regulatory governance, and compliance.
And now, as I said, coupled with the work that I do alongside our lawyers in the firm as well. So it brings quite a few different approaches and vantage points together into one place, but a very topical one for a lot of our clients at the moment.
Jean Hurley: Thank you, absolutely. You’ve got a unique vantage point, really. So what are some of the significant shifts you’ve witnessed in this space?
Matt Worsfold: Yeah, I think we’ve seen quite a few big trends in data come and go over the years. When I started out in my career, machine learning was the big thing and AI was emerging. Obviously now, the worlds of machine learning and AI have sort of morphed into one, and machine learning is now considered a subset of AI. We saw big developments in graph databases; big data was a thing a long time ago; the cloud, if we rewind a number of years, was a big thing. Some of those have stuck around, others have drifted as concepts.
But you know, it’s a bit of a cliché, and probably the expected answer, but generative AI is the transformational technology of our generation. It’s fundamentally changed the game when it comes to the use of, understanding of, and access to data. If I rewind back in my career, it took a long time to progress the skill set as a data analyst from doing what I would call advanced analytics into the world of machine learning and data science. It took a lot of upskilling, development, and learning to progress into that world of machine learning and AI. And that’s why a lot of businesses had very dedicated, specialized, highly technical data science teams. That was the way in which you developed and were able to build and deploy AI.
And what generative AI has done is completely democratize not only the use of data, but also this advanced world of analytics. It’s no longer the purview of those data science teams who have upskilled over years; it’s now put AI into the hands of everyday users, which is fantastic. Because again, a lot of the trends have been around democratizing data and democratizing access to data, and this has absolutely turbocharged that trend. You now don’t need to know how to code any more to use AI, or even to build AI. First and foremost, it’s really at the fingertips of anybody, and that’s changed the game significantly when it comes to the field of data analytics because, as I say, it’s democratized it significantly.
Jean Hurley: Absolutely. So if we dive into the global picture, can you give us an overview of where we are to date with global AI regulation and what are the major regions and frameworks that we should be aware of?
Matt Worsfold: Yeah, absolutely. Very diverging views and trends globally, I think, driven by geopolitics, which maybe we can touch on. The big one to cover is the EU. It’s the key starting point when it comes to a conversation around the regulation of AI: obviously the AI Act, which came into effect last year. There were provisions that began applying in February, and we’ve got more coming in August as well.
But it is the most prescriptive and the most stringent regulation globally on AI. It’s really far-reaching, and it impacts a really large number of businesses. And it’s got this extraterritorial reach that the European Commission is using for a number of its pieces of regulation. It basically says, if your AI system touches anyone in the EU in any way, then you’re in scope, effectively, and therefore it’s quite wide-reaching.
It’s got some quite clear requirements, and those largely fall into requirements to go and identify AI systems, then risk assess them, and then implement what they call compliance requirements. But those are essentially controls off the back of that assessment, mainly for high-risk systems, though there are some for limited-risk systems too. So it’s quite a structured approach when it comes to, essentially, risk management of AI. But that is the most stringent.
The US is just at an interesting time in general, geopolitically. They had their executive order under Biden’s administration; that’s now been revoked, and there’s nothing quite filling its place. Again, an interesting time. We’ll see where that starts to evolve and develop, but it’s looking like a lighter touch when it comes to regulation.
China’s got regulation now on generative AI in particular, around the deployment and use of generative AI, which is quite interesting.
And the UK, again, you know, is talking about a pro-innovation approach. But the indications are we’re not going to see anything like the AI Act in the UK. They’ve talked about regulating what they call the most powerful AI models, which we can largely interpret as generative AI, but it remains to be seen specifically how that will play out. And then the second thing that they’ve said is, essentially, the sector-specific regulators will now be able to take charge in terms of what they want to do to regulate AI in specific sectors. So, yeah, quite diverging views, I would say, globally.
Jean Hurley: Yeah, absolutely. I mean, as you touched upon, the US and the UK appear to be going for deregulation. And another thing the US is doing is saying, well, we’re not going to follow what the EU is doing. So what do you think the implications of this are, if you’ve got the EU doing one thing and the US and UK doing something else?
Matt Worsfold: Yeah, it’s really fascinating to watch it play out. And there’s definitely a tension there, I would say more so from a US-EU perspective than UK-EU. But the approaches do diverge. A lot of the tension, for the UK and US, comes down to that pro-innovation approach. I think that’s probably more profound in the US than in the UK. But again, we’re seeing the UK Government really push quite hard on that pro-innovation stance, trying to take measures to drive growth in the economy.
And that’s everything, including directing the regulators across all sectors, and the cross-cutting regulators across multiple sectors, to really be thinking about how they’re promoting growth in their role as regulators. So that ties back into that pro-innovation approach, and is probably why we’re not seeing the type of regulation we are in the EU. In terms of the implications, as I said, there’s definitely a tension there, particularly in the US. You think about the size and scale of some of the technology businesses in the US, and this desire to really promote the US as a center for AI and technology. And it wouldn’t be unexpected to see some of those businesses, and the AI systems they develop, fall into scope of the EU AI Act.
And so it’s going to be interesting to see how that works and where the EU almost stands its ground in terms of its regulatory approach, which it has indicated it will do and has done so far. The only other argument on the implications is how regulation can keep up, or, if it’s a deregulated approach, how that keeps up. Technology moves very, very quickly; we saw that with generative AI. If you look at the way that the AI Act was drafted, the drafting actually began before generative AI was a known concept.
And you can kind of tell that through the way that the Act’s been drafted. There are specific sections on general purpose AI, as it calls it, and it takes a very different approach to the rest of the Act, but it just gives you an example of how difficult it is for regulation to keep up with the pace of change in technology. So, in terms of both approaches, it remains to be seen what they mean for attracting businesses, trying to stimulate growth of those businesses, and promoting growth across the economy. So, yeah, it’s a bit of a watch this space.
Jean Hurley: Thank you. So, as you said, the EU AI Act is the landmark piece of legislation on AI, so organizations will have to look at it in detail. Can you break it down, some of the key components? Thank you.
Matt Worsfold: Yeah, absolutely. So, as I said, the fundamental premise of the AI Act falls upon some key processes. First, it’s around identification of AI systems. The premise is essentially that you need to go and do discovery and map out every AI system. And it’s an interesting nuance, because it doesn’t talk about systems from a technological standpoint; it talks about systems from a use case standpoint. So this is actually really about mapping your use cases.
And that’s important because of the way that the risk assessment works. So, as I said, first up is identifying AI systems and identifying the use cases that those AI systems support. The second step under the Act is then risk assessing them. And third, it’s identifying the controls you’ve got to put in place in order to meet some of those obligations that fall off the back of the AI Act, and that’s around ensuring that you manage and mitigate the risks. It’s worth noting that the purpose of the Act in itself is to secure and protect the rights, freedoms, and safety of individuals.
And so again, for businesses, it’s a very different lens to the way that they may have been looking at AI, if you think about more commercial approaches to risk management, so that starts to add some context as to how the Act is written. Delving into a bit more detail, the scoping of an AI system under the Act actually gets quite complex. There are several components to think about. Firstly, for an AI system to fall into scope of the AI Act, it’s got to meet the definition of an AI system under the Act, and the definition is quite prescriptive; there are multiple component parts to it.
And so businesses need to take a look at their system and ask: does this system fulfill all of the criteria under that definition? That’s almost your stage gate one for working out whether or not an AI system is in scope.
The second, then, is looking at the role that you play in the AI system supply chain. So did you build the AI, and therefore fall under the provider role? Are you implementing AI into your business, in which case you might fall under the deployer role? There are others. But it’s really key to understand what your role is as a business for that particular AI system, and that will potentially change over time as well. That’s important, because it then maps to your compliance requirements.
And then, finally, there’s the extraterritorial question: whether you fall into the extraterritorial scope of the Act, which is essentially, does the AI system that you’ve either built, deployed, imported, or distributed touch an EU citizen in any way? If the answer to that question is yes, you’re likely to be in scope. And that’s quite challenging for UK businesses in particular, understanding the touch points with the EU and whether you fall in or out. That scoping exercise all needs to happen before you then get onto the risk assessment part. And within the Act, again, it’s very prescriptive on how that risk assessment needs to take place. There are predefined high-risk use cases in the Act, and also prohibited use cases, and it’s about mapping your AI system and its use case to those lists and asking: do they fall under either of those two?
For anything under prohibited risk, that means taking it out of the supply chain immediately. Some examples of prohibited practices are things like social scoring, scraping facial images to produce training data, and anything subliminal, those subliminal techniques trying to get people to behave in certain ways or vote in certain ways, for example. Those are all prohibited.
And then you’ve got high-risk use cases as well, which have a set of quite onerous obligations off the back. So there’s quite a lot in there. It really benefits to try and take it step by step: what are the stage gates, where might you scope a system in and out, and then what’s the risk assessment? And then there’s a whole section on general purpose AI, which there’s still some clarification needed on, you know, how that works and what that means, but that’s coming.
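[Editor’s note] The stage-gate flow described above (definition, supply-chain role, extraterritorial reach, then the mapping to prohibited and high-risk lists) can be sketched in code. This is an illustrative sketch only, not legal advice: the function, parameters, and the example use case sets below are hypothetical, chosen to mirror the decision flow rather than to restate the Act.

```python
# Illustrative sketch of the EU AI Act scoping stage gates described above.
# All names and example lists are hypothetical; this is not a legal mapping.

PROHIBITED_PRACTICES = {"social scoring", "facial image scraping", "subliminal techniques"}
HIGH_RISK_USE_CASES = {"credit scoring", "recruitment screening"}  # examples only

def assess_ai_system(meets_act_definition: bool,
                     role: str,             # e.g. "provider" or "deployer"
                     touches_eu: bool,
                     use_case: str) -> str:
    """Return a rough scoping outcome for one AI use case."""
    # Stage gate 1: does the system meet the Act's definition of an AI system?
    if not meets_act_definition:
        return "out of scope: not an AI system under the Act"
    # Stage gate 2: extraterritorial reach - any touchpoint with the EU?
    if not touches_eu:
        return "out of scope: no EU touchpoint"
    # Risk assessment: map the use case to the Act's predefined lists.
    if use_case in PROHIBITED_PRACTICES:
        return "prohibited: remove from the supply chain"
    if use_case in HIGH_RISK_USE_CASES:
        return f"high risk: onerous obligations apply to the {role}"
    return f"limited/minimal risk: lighter obligations for the {role}"
```

The point of the sketch is the ordering: scoping questions come first, and only in-scope systems reach the prohibited/high-risk mapping, which is where the role (provider versus deployer) determines which obligations attach.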
Jean Hurley: That’s really helpful. Thanks, Matt. So, as you said, there’s gonna be more coming in August. I mean, what are the latest documents that are coming from the EU in relation to the Act?
Matt Worsfold: So back in February there were some really useful guides published by the European Commission. They focused largely on two things. One was around the definition of an AI system, and that’s where they broke down the definition into its component parts and explained in some more detail, with examples, what would be considered in or out under that definition and scoping.
The other was providing more clarity on prohibited AI practices, and that’s purely, I think, because the deadline for taking prohibited systems out of service was back in February, so they were kind of going, right, this is our first priority. Quite soon, I think, we’re going to be seeing some guidance and a code of practice around general purpose AI; they issued their third draft just recently. That should give us a better feel for what falls into the definition of general purpose AI and how the obligations apply to the supply chain when it comes to general purpose AI. And again, that’s going to be useful because the provisions on general purpose AI kick in in August 2025. At some point, I suspect, we’ll see some more guidance on high-risk use cases once more work’s been done down the track. But that’s really the useful guidance that’s coming out at the moment when it comes to the AI Act.
Jean Hurley: Thank you. So one of the things I know businesses have to do is to assess the level of AI literacy. Do you have any tips for them, or any advice?
Matt Worsfold: Yeah, so under Article 4 of the Act, it imposes these requirements around AI literacy. It notably requires providers, those who are building AI systems, and deployers, those who are implementing them, to ensure what it calls a sufficient level of AI literacy for staff. These requirements kicked in in February; they’re already live. And so firms who are within scope of the AI Act need to be complying with this already.
The way in which firms need to be thinking about AI literacy is almost as a program of work rather than a one-off training exercise. It’s not really sufficient in this case to think of it as an annual training program or a one-off, you know, “we’ll train people on the risks of AI.” It needs to be, number one, an ongoing, living, breathing program, and it also needs to be tailored across different stakeholder groups. That’s about breaking down the roles of those who will be interacting or interfacing with AI and then tailoring the literacy program accordingly, based on what that particular stakeholder might be doing with AI, the particular risks they need to be considering, and the legal requirements and obligations they might need to face into.
So, for example, that might mean that for boards and senior executives it’s a very different level and type of training, around their roles as a key mechanism from a governance, leadership, or oversight perspective, versus the training that you would deliver to your cyber teams or your technology teams: very different sets of content, lenses, and framing. And that’s the way that the program should be constructed and tailored accordingly.
Jean Hurley: Well, that’s a huge amount of work. So if we turn to the UK, particularly in the realm of financial services, could you tell us more about that? I know you recently attended the FCA AI sprint, so it’d be really interesting to hear about that. Thanks.
Matt Worsfold: Yeah, absolutely. So, as I mentioned, the UK Government’s indicating it’s only likely to be regulating those most powerful models, so essentially generative AI.
And then, as I said, leaving it up to the sector-specific regulators to define their approach to regulating AI. I know the FCA have been doing a lot of work, not just in the last couple of years but beyond, around the use of technology and how that plays into the financial services sector: what benefits can be gleaned, but also how do they protect, for example, customers and the rights of customers. So they’ve been thinking about this for a long time. It’s important to go back to the FCA’s core principles that they talk about a lot when they describe their approach to regulating the financial services sector. Number one is being outcomes focused.
Number two, they talk about being technology agnostic. And the last one, which I don’t think is necessarily listed specifically, is that pro-innovation approach. They want to stimulate growth; they want financial services firms to safely implement AI and glean the benefits, ultimately for those firms, for the functioning of the markets, and for customers. And so, in the world of AI and financial services, it points a little bit, and this is more guesswork from myself rather than any kind of published stance or viewpoint, to a similar approach to the one the Government is taking, in that we may not see specific regulation from the FCA on the use of AI, but instead see the FCA fall back on the existing regulatory framework they have in place.
But again, pure guesswork at this point; I don’t think anyone quite knows, but it’ll be interesting to see how that plays out. I mean, the AI sprint you referenced was a really interesting few days. It was amazing to see such a varied group of industry stakeholders come together, from different organizations, different firms, different stances and viewpoints, and really come together to discuss the role that AI has to play in FS, the types of use cases where we can see benefit, and then have a really robust discussion around what the FCA needs to do in order to drive greater adoption, but also make sure that those risks I mentioned were being balanced. So yeah, it was quite interesting to hear the various perspectives, and it’ll be interesting to see where the FCA goes next.
Jean Hurley: Great. So did you learn from the other firms that you were with? What did you see? How do you see financial services firms using AI in practice, and the implications and challenges for them? Did they share anything like that with you?
Matt Worsfold: Yeah, some really interesting insights. I think one of the big takeaways from the three days was that the vast majority of AI that’s being adopted in financial services at the moment is what could be termed almost everyday AI. It’s back office AI, and what I mean by that is AI that’s being used to automate tasks or processes and generate efficiencies. The core focus has really been around how to automate the lower value tasks that might have to be performed in some of those back office functions, and potentially in the middle office too. I think there are an increasing number of use cases in front office functions as well.
We also heard about some use cases within the field of financial crime and fraud, and again, this is AI that has been around in that field for a while, but those use cases are emerging, evolving, and maturing a little bit. So that was quite an interesting one, around the use of AI to try and manage risks, essentially. But the one thing I think we did hear was that a lot of FS firms are facing into some significant challenges in trying to, firstly, deploy, but secondly, scale AI to the point where they’re seeing those really significant benefits that we’d expect to see. And quite often that comes down to an inability to integrate AI into processes or systems.
And quite often, as we know, with a lot of these FS firms there’s a lot of legacy technology, which has been a big challenge historically anyway. When you overlay the intention to integrate AI, it becomes an even more significant challenge. But firms are also hampered by the lack of access to large volumes of really high quality data that can be used to train those models, to customize them, to make them quite specific, to produce accurate outcomes and intended behaviors, and, fundamentally, high quality data that doesn’t introduce risk or regulatory issues under the existing regulatory framework, like the consumer duty, which is obviously a big one. So that’s where the big focus is. And again, high quality data has been a big challenge for financial services firms for a long time. So not a new challenge, but a very different context. It’s another imperative for firms to start focusing on data governance, for example, and investing in data programs.
Jean Hurley: That’s really interesting, thank you. Would you say that firms are trying to develop their own products, or are they buying it in, in general?
Matt Worsfold: I think there’s a much greater trend towards buying in, particularly when you think about generative AI. The ability of firms to build their own LLMs is practically non-existent, right? It sits with those really large tech firms. So a lot of the time we’re seeing build applied to more bespoke, smaller use cases where people are building in house, but far more now around procurement, the buying of technology into the organization, to try and really accelerate the adoption of AI. That’s about trying to adopt the latest technologies and bring generative AI into firms, which then leads to more of the buy rather than the build.
Jean Hurley: Great. So you said that the FCA’s approach to regulating AI is likely to be done through existing rules. Do you think this is the right approach?
Matt Worsfold: There are probably arguments both ways. I think some firms would feel like they would benefit from having more clarity on the regulation of AI, more prescriptive rules to be able to follow. It’s a complex space; it’s a complex technology. So, as I said, I can imagine some firms are probably thinking they could actually do with a bit more prescriptive regulation on AI. There will definitely be other firms who think that actually not regulating specifically is the way to go: it’s gonna hamper innovation, it’s gonna hamper our ability to develop new use cases, and all we really need is guidance, codes of practice, for example, or more industry consultation, or consultation with other stakeholders. So I can see the arguments both ways.
I mean, as I said before, the concern around regulation of any technology is always that the technology outpaces the regulatory change, or that it ends up being counterproductive, or adding more regulatory burden. So again, I kind of go back to that other driver at the moment, which is that pro-growth, pro-innovation stance. I can’t imagine that there’s a huge appetite to add more regulatory burden onto firms in the context of growth. But, as I say, I can see the arguments both ways.
It was also quite interesting to see recently the joint letter from the ICO and FCA, essentially asking trade associations about the barriers to AI adoption: what’s hampering firms, how come they’re not harnessing the benefits they’re envisaging for financial services, and essentially what both regulators can do to help. That goes a long way, I think, in letting firms express views and let the regulators know, well, this is actually what we would benefit from as an industry, which I think is also what the AI sprint was designed to do, to keep the conversation flowing, actually.
Jean Hurley: So how about the FCA? I mean, they’ve been told to promote growth and innovation. Do you think it would be helpful for them if they utilized AI in supervision? And how could you imagine that happening if they did?
Matt Worsfold: Yeah, absolutely. I think across any of their functions there are benefits to be gained from utilizing AI. If you look at a lot of the publications that have come out, it’s quite clear to see the FCA, like many other businesses, is investing quite heavily in AI capabilities. The last AI update talked about the investments it was making in capability, in people, for example. So it wouldn’t be a surprise to see or hear of AI use cases in the way that it functions as a regulator, across the various roles that it plays. And it’s also using AI and tech to drive more innovation. As I say, we’ve got these tech sprints that the AI sprint fell under.
They’re launching things like synthetic data programs to support firms with safe and secure building of AI and new technologies as well, and they’ve got regulatory sandboxes. So it’s clear that, over and above supervision, for example, they’re also looking at how they actually enable and support firms to grow their capabilities and develop AI safely. And that’s a key role that the FCA can play moving forwards.
Jean Hurley: Great. So, looking at the firms themselves, what considerations do they have to think about when it comes to AI compliance?
Jean Hurley: And perhaps, you know, who does it sit with? Is it the senior management function? It’d be great to hear your views on this. Thank you.
Matt Worsfold: Absolutely. So, key considerations. One is just getting a good handle on AI use cases, mapping your AI inventory. Not a simple task, as you alluded to: machine learning has been around for a long time, and generative AI now means that there are more use cases than ever before, and they can be developed at speed. So getting a good handle on that AI inventory, both from a systems and a use case perspective, is absolutely critical here, because unless you understand your AI universe, you can’t then risk assess it, you can’t then understand which regulatory regimes you might fall under, and therefore what your obligations are, before you can even think about complying with them. So that’s absolutely critical. The other consideration is developing your governance frameworks. And that doesn’t mean creating brand new governance frameworks; it means leveraging existing governance frameworks, like data governance frameworks or similar.
But think about it from a top-down perspective: how does that governance framework for AI set the tone around what’s acceptable to the business, the way in which AI can be used, and the way AI should be risk managed and governed? Again, that’s a really critical part in helping your organization roll out and scale AI, and do that in a way that is fundamentally compliant. It’s just adding that wrapper around to support it.
In terms of accountabilities, again, multiple arguments. But I liken it to the way that I think about cyber risk, for example, where everyone kind of points to the CISO and goes, well, cyber is the CISO’s problem. Well, actually, no, the risk for cyber sits across the organization and should be owned from a risk management perspective by multiple stakeholders, because multiple stakeholders have an impact or an input on the management of cyber risk. And AI is identical.
It’s tempting to think of the responsibility for AI sitting with a DPO or a CISO or a CTO, or even a CDO. But the reality is that, because it’s being scaled so widely across a business, everybody needs to take a bit of ownership when it comes to the risk management of AI. AI risk in itself as a domain is just so broad; there are so many sub-risks that fall under almost the header category of AI risk. And that’s why it’s about mapping out: what are the risks, how do they impact different parts of the business, and then, for each of the senior stakeholders who may have SMCR roles and responsibilities, for example, what’s their responsibility when it comes to AI? They can’t be expected to own it all, but they’ll own a constituent part. It’s about being really clear on what that constituent part is for that particular stakeholder.
Jean Hurley: Oh, absolutely. Yeah, I can see. So it could sit under HR a little bit, and then data, finance... the full gamut?
Matt Worsfold: Yes. Yeah.
Jean Hurley: So, a broader question: if we’re going to be in a society where AI is running everything, can we really trust AI? And how do we build public confidence in these technologies?
Matt Worsfold: Hmm, it’s a really interesting question. I don’t think at the moment there is a significant level of trust, and I think we’re a long way from truly trusting AI as either a concept or a system. There’s a framework called the trifecta of trust, and it talks about the building of trust being premised on having consistency, authenticity, and connection. If you map those three things to an AI system, it feels like we’re a long, long way off from an AI system really being able to demonstrate any of them. You think about consistency, for example, and the issue around hallucinations.
Then there’s authenticity, and we see a lot of disinformation and the ability to produce deepfakes. So if you take it in the context of those three things, I think we’re a long way away from being able to fully trust AI to the point where we place really significant reliance on it. Until we can demonstrate, through testing, really sound risk management and really sound governance, I feel like it’s going to be quite a while before we get there.
That’s not to say that we won’t trust AI in very specific or particular scenarios. As I say, where all of those things around testing and risk management have been applied, the use cases are narrower, and it’s in a much more controlled environment. But broader, more kind of existential trust in AI, I think, is a long way away. People are quite skeptical in general; I think that’s human nature. So I think that plays into it.
Jean Hurley: Thank you. So any final words of advice for businesses navigating this space?
Matt Worsfold: I’d say, stay informed as best you can. It’s challenging; the technology is evolving quickly. But try and stay as informed as possible on new advancements and new regulations. Keep an eye on geopolitics and how that plays into the views on AI. What are technology firms doing? What are lawmakers doing? How is the technology changing? And then what does that mean for you as a business or an individual?
For businesses in particular, I think defining the golden rules is probably the one thing I would say. Define the rules of the road when it comes to AI, map that back to your strategy and what you’re trying to achieve, and almost set your overarching risk appetite statement, if I can call it that in risk management terms. What are your golden rules? What are you willing to do with AI? What are you not willing to do? What are the risks you’re willing to take? Which risks are non-negotiable?
And that’s the overarching set of principles that the organization can then abide by in driving the use and adoption of AI. Then finally, invest in good data governance at the outset. It’s been important anyway, but, as I say, with the drive for the adoption of AI, data governance and high-quality data become one of the most important factors for successful AI adoption. So people should really be considering how they’re investing sufficiently in that.
Jean Hurley: Great advice. Thank you. So we’ve just got a couple of minutes left for our end-of-conversation snap yes-or-no section. It’s nothing tricky, it’s just yes or no. So are you ready, Matt?
Matt Worsfold: Absolutely.
Jean Hurley: Will AI regulation become more globally harmonized in the next five years? Yes or no?
Matt Worsfold: I’m gonna say no.
Jean Hurley: Are businesses currently overestimating the risks of AI regulation?
Matt Worsfold: I’m gonna say, yes.
Jean Hurley: Should AI models be granted legal personhood in the future?
Matt Worsfold: This one for me is a definite no.
Jean Hurley: Is it possible to fully eliminate bias in AI algorithms?
Matt Worsfold: Also going to say no.
Jean Hurley: And finally, will AI ultimately create more jobs than it displaces?
Matt Worsfold: In the long term, yes.
Jean Hurley: Thank you, Matt.
Jean Hurley: Matt, it’s been great having you on the GRIP Podcast. Thank you for your time and sharing your insight.
Matt Worsfold: Thank you so much for having me. Thank you. Appreciate it.
Jean Hurley: And finally, thank you to our listeners. If you’re hearing this, you probably know about us, but please tell your friends about GRIP. You can find us at grip.globalrelay.com, and you can follow us on LinkedIn. Until our next podcast or article, I bid you farewell. Thank you.