Global debate on AI regulation gathers pace with call for international panel

Eric Schmidt and Mustafa Suleyman call for AI version of climate body IPCC as policy debates in US and UK hot up.

Deal with AI in the same way we are attempting to deal with climate change. That’s the message from former Google chief executive Eric Schmidt and DeepMind co-founder Mustafa Suleyman.

In an opinion piece for the Financial Times (£), they propose an “expert-led body empowered to objectively inform governments about the current state of AI capabilities and make evidence-based predictions about what’s coming”. And they cite the Intergovernmental Panel on Climate Change (IPCC) as the inspiration for the idea.

With an eye on the forthcoming international AI safety summit at Bletchley Park in the UK, the pair propose establishing an International Panel on AI Safety (IPAIS). Its aim would be to “regularly and impartially evaluate the state of AI” and, like the IPCC, act “as a central hub that gathers the science” rather than do its own research.

Independent, scientific consensus

Schmidt and Suleyman’s proposal majors on independence and authority. They suggest the body be staffed and led “by computer scientists and researchers rather than political appointees or diplomats”. In their view, steering clear of primary research and policymaking would allow the body to avoid conflicts of interest. It would also provide the transparency necessary to build trust that its work was not unduly influenced by the leading companies in the field.

They conclude that “establishing an independent, scientific consensus about what capabilities have been developed, and what’s coming, is essential in developing safe AI”.

Concern about the influence of big players in the field was raised earlier this year when Time published a story showing how OpenAI lobbied for major parts of the EU AI Act to be watered down to reduce the regulatory burden on the company. It argued, successfully, that its general-purpose AI systems should not be classified as “high-risk” and therefore should not be subject to tight legal requirements.

“We need to focus on things that can do us more harm today.”

Wendy Hall, Regius Professor of Computer Science, University of Southampton

Meanwhile the UK’s efforts to position itself as a leader in the field ahead of the Bletchley Park summit have come under fire. Criticism of the government’s strategy came from experts and industry figures giving evidence to the House of Lords Communications Committee, with some expressing the view that the UK was falling behind the US, China and Singapore.

Wendy Hall, Regius Professor of Computer Science at the University of Southampton, said many of the AI safety risks being examined were “hypothetical” and that other forms of technology posed more immediate risks. “We need to focus on things that can do us more harm today,” she said. “This isn’t as important as regulating facial recognition.”

She also questioned whether there was any kind of functioning UK strategy on AI, despite government claims to be leading the way. She pointed to the disbanding of the AI Council and the Centre for Data Ethics and Innovation’s advisory board, and said she no longer receives updates on what is happening on skills.

Tortoise Index

“I worry, because as a nation, I think we are slipping,” she said, referring to the country dropping from third to fourth in the Tortoise Index, which measures progress in AI development, infrastructure and training.

Hall wants to see greater focus on training people to prepare for AI automating jobs, for example by providing new career routes, but said: “I cannot tell you if there is a functioning AI strategy”.

Others who gave evidence also spoke of a “fragmented” strategy. A big problem for the UK is that much technological advance has its roots in the US, which means AI businesses turn to US regulators first. Concern was also raised that the UK was giving away public sector data – vast amounts of which are needed to train AI models – to dominant US companies.

“We need to learn how to use that data,” Hall said. “Why are we giving it away to the US? We need to learn to use it to generate good stuff ourselves.”

Google and Microsoft

Another problem is the acquisition of UK startups by US firms, something fuelled by the UK’s focus on making a quick return. And the funds that firms such as Google and Microsoft can draw on to pay for high-performance computing are difficult for the UK government to match.

Last autumn, the White House unveiled a Blueprint for an AI Bill of Rights setting out five key principles that should “guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence”:

  • safe and effective systems;
  • algorithmic discrimination protections;
  • data privacy;
  • notice and explanation;
  • human alternatives, consideration and fallback.

Last week, a group of lawmakers from the US Senate and House of Representatives, led by Senator Edward J Markey and Representative Pramila Jayapal and including Senator Elizabeth Warren, wrote to President Biden urging the implementation of the Bill of Rights in an upcoming executive order.

The call echoed a plea made in September by 60 civil, technology, labor, consumer, transparency, accountability, and human rights groups, urging the Biden-Harris administration to “make the White House Blueprint for an AI Bill of Rights (AI Bill of Rights) binding US government policy for the federal government’s use of AI systems in the forthcoming AI Executive Order (AI EO)”.

Fiduciary duty rules

But there’s unrest in the US too about the regulatory approach in some areas. Jack Inglis, chief executive of the Alternative Investment Management Association, aired some trenchant criticism in an opinion piece published in the Financial Times (£). His beef is with the SEC’s latest proposals on the use of predictive data analytics by investment advisers and broker-dealers.

Inglis argues that the new rules, designed to eliminate conflicts of interest arising from the use of technology, define technology too broadly: cutting-edge AI is classified in the same way as spreadsheets. In doing so, he says, the rules place a huge and unnecessary burden on investment companies.

And he goes further, asserting that the SEC misunderstands how the investment industry relies on technology, and in doing so “proposes to rewrite the established fiduciary duty rules between clients and their investment advisers”. He says “existing obligations already prioritise clients’ interests via the longstanding fiduciary duty that governs the client-adviser relationship” and expresses concern about “the SEC’s statutory authority that it invokes to promulgate the rule”.