The UK government’s approach to regulating AI has been challenged by one of the ruling Conservative Party’s own peers in the House of Lords. Lord Christopher Holmes has tabled a private member’s bill calling for an AI Authority to move regulation on from what Holmes calls the government’s “wait and see approach”.
Holmes believes a regulator does not need to take “a huge do-it-all” approach, but instead address the issue in a “horizontal rather than vertical fashion” in order to “give a better chance of alignment”.
“The role,” he says, “is one of coordination, assuring all the relevant, existing regulators address their obligations in relation to AI”.
Bletchley Park Summit
Interviewed by Computer Weekly, Holmes said that “wait and see is not an appropriate response”, adding that it was strange that there was so much discussion of the potential threat posed by AI at the Prime Minister’s AI Safety Summit in Bletchley Park last year, only for a largely voluntary approach to regulation to emerge.
While the government believes establishing a regulatory framework early risks stifling innovation, Holmes believes that “all the lessons from history demonstrate to us that if you have the right legislative framework, it’s a positive benefit, because investors and innovators know the environment they’re going into”.
Holmes wants AI regulation to be built on seven principles:
- trust;
- transparency;
- inclusion;
- innovation;
- interoperability;
- public engagement; and
- accountability.
He also advocates the use of sandboxes, saying on his blog: “We have seen the success of the fintech regulatory sandbox, replicated in well over 50 jurisdictions around the world. I believe a similar approach can be deployed in relation to AI developments and, if we get it right, it could become an export of itself.”
And he wants to see the introduction of “a general responsibility on every business developing, deploying, or using AI to have a designated AI officer”. This role would “ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business and to ensure, so far as reasonably practicable, that data used by the business in any AI technology is unbiased”.
Intellectual Property
He’s also keen that “all those who create and come up with ideas can be assured that their creations, their IP, their copyright is fully protected in this AI landscape”, and wants to see the regulator implement “a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI” and consult the general public on the most effective frameworks for that engagement.
Public engagement is key, he argues, because: “No matter how good the algorithm, the product, the solution, if no one is ‘buying it’, then, again, none of it is anything or gets us anywhere.”
The bill went through its second reading in March and, while private members’ bills rarely become law, they can influence policy and test opinion. Holmes discussed his bill in some depth on The Privacy Adviser Podcast recently.