Summit leaves question of AI regulation hanging

Elon Musk took center stage to discuss regulation, the role of China, and the future of work.

Last week’s AI Safety Summit brought together 150 top-level government and industry representatives. It was essentially a bold diplomatic and PR move, resulting in the Bletchley Declaration, signed by China, the US, the EU, and other major powers.

The stated aim of the gathering was collaboration and ensuring the safe development of AI. The summit came in the same week as an Executive Order from President Biden pledging to pursue bipartisan legislation to ensure responsible innovation.

Musk on risk and regulation

If the summit was a performance aimed at boosting the UK’s status in AI, Elon Musk was the star act. Musk has become something of a prophet for the AI era, and in an hour-long interview with British Prime Minister Rishi Sunak directly following the event, he talked about deepfakes, ChatGPT, and other disruptive developments.

“The potential is there for AI to have most likely a positive effect and create a future of abundance where there is no scarcity of goods and services,” Musk said.

But he said government intervention was necessary when public safety was at risk, whether in aviation, cars, aerospace, or indeed AI. “I agree with the vast majority of regulations. There are very few – less than 1% – I actually disagree with,” Musk said. “There is some concern from people in Silicon Valley who have never dealt with regulators before who think this will crush innovation and slow them down and be annoying. It will be annoying, but we’ve learned over the years that having a referee is a good thing. At times there might be too much optimism about technology.”

China was named alongside San Francisco and London as an AI employment and innovation hub. “The single biggest objection I get to AI regulations and safety controls is – China isn’t going to do it and therefore they will jump into the lead and exceed us all,” Musk added. Sunak and Musk agreed China’s inclusion in the summit was critical.

Asked by Sunak how regulatory change can keep pace with the latest AI developments, Musk kept his answers vague, saying that although AI will grow exponentially in the coming years, “even if there isn’t an enforcement capability, we will have an insight into AI’s capabilities”.

Future of work

Discussion of the future of work is intensifying across all sectors. “AI will be a leveller and an equalizer. We will essentially have access to a magic genie that will be the best and most patient tutor. Computers will happily take on the jobs that are dangerous and tedious. We won’t have a universal basic income, we’ll have a universal high income,” Musk said.

Another heavy hitter in the AI world, Sam Altman, CEO of OpenAI, said in a recent WSJ interview: “It’s not enough to just give people a universal basic income. People need to have agency and the ability to influence this. We need to be joint architects of the future.”

Leaders in banking and finance take a similar view, seeing AI as an opportunity to free up human capital and creativity.

“If we can take manual work out of banking, we can free up human capital to work on exciting things. Let machines do the ordinary and humans do the extraordinary,” Ian Stuart, CEO of HSBC, said at the Money 20/20 fintech summit in Amsterdam in June.

Generative AI and productivity

But AI skeptics warn the technology could usher in mass unemployment. A McKinsey study found that generative AI has the potential to increase US labor productivity by 0.5 to 0.9 percentage points annually through 2030 “in a midpoint adoption scenario”. At the same time, the study projects that millions of jobs in sales, admin, and production will cease to exist. “We estimate that 11.8 million workers currently in occupations with shrinking demand may need to move into different lines of work by 2030. Roughly nine million of them may wind up moving into different occupational categories altogether,” the report says.

Fears of the singularity – that hypothetical future point at which technological growth becomes uncontrollable and irreversible – date back decades, with widely varying predictions for when it could occur.

Speaking about deepfakes, Altman said: “This is speculation – maybe not the deepfakeability but the customizable persuasion is where the influence happens. Not the fake image, but the subtle ability to influence people. We think international regulation is going to be important for the most powerful models – nothing that exists today or will exist next year.

“But as we get towards a real superintelligence and a system more capable than any human, it’s reasonable to say we need to treat that with caution and a unified approach. Let’s look forward to where this might go and not be caught out.” He stressed, however, that a regulatory response right now that restricted open source would be disastrous for the US and globally.