The UK Government’s long-awaited response to last year’s consultation on AI regulation confirms what we already knew: the current administration is averse to regulating AI specifically and will devolve responsibility to existing regulators. It takes the view that it is too early to legislate and that AI use has to be looked at in context. The next major flurry of activity comes in the spring, when key regulators will outline their strategic approach to AI and updated guidance on the use of AI in HR and recruitment is due. Watch this space.
What’s the background?
Unlike the European Union, where approval of the EU AI Act is in its final stages, the UK favours a “wait and see” approach.
It believes that AI regulation should be devolved to existing sector regulators, allowing bespoke measures tailored to the needs of, and risks posed by, different parts of the economy.
The UK hosted the first global AI Safety Summit in November 2023 and is clearly positioning itself as a global leader, with ambitions to lead on safe AI and to be a science and technology superpower by the end of the decade.
UK AI regulation: Key points
- The overall approach is pro-innovation and pro-safety, combining cross-sectoral principles and a context-specific framework, international leadership and collaboration, and voluntary (for now) measures for developers.
- There’s new guidance to support regulators to implement the principles effectively. Key regulators have until April 30, 2024, to publish an update outlining their strategic approach to AI. The Government appears to accept that some regulators are much further along than others in understanding how AI may affect the sectors and areas they regulate, and that significant resource will be needed to give them the tools to manage their obligations effectively.
- This context-based approach may miss significant risks posed by highly capable general-purpose AI systems. The Government lays out the case for a set of targeted, binding requirements on developers in the future to ensure that powerful, sophisticated AI develops in a safe way. But it will only legislate when confident that’s the right thing to do.
- There’s a central function to drive coherence in the regulatory approach across government. A steering committee of government representatives and key regulators will be set up by spring 2024 to support knowledge exchange and coordination on AI governance.
- £100m/$126m is being invested to support AI innovation and regulation, including £80m/$101m to launch nine new AI research hubs, and £10m/$12.6m for regulators to develop the capabilities and tools they need.
- The Digital Regulation Cooperation Forum (DRCF) has shared the eligibility criteria for the support to be offered by the AI and Digital Hub pilot, which is launching in spring 2024.
- Updated guidance is due in spring 2024 to ensure the use of AI in HR and recruitment is safe, responsible, and fair. We urgently need more clarity from the Information Commissioner on the practicalities of reconciling the obvious conflicts between how AI operates and data protection principles.
- A working group convened by the UK Intellectual Property Office on the interaction between copyright and AI was unable to agree an effective voluntary code, so ministers will now lead a period of engagement with the AI and rights-holder sectors. This is unfortunate, as there was an opportunity to provide certainty: understanding how models can be trained on data that includes third-party IP is critical if businesses are to have the confidence to develop and use AI tools. In its recent report on AI, the House of Lords identified uncertainty around IP use as a blocker for the development of AI businesses in the UK.
- A call for views in spring 2024 will gather further input on next steps in securing AI models. This includes a potential code of practice for AI cyber security.
- Work to analyse life-cycle accountability for AI is ongoing.
- The UK is committed to establishing enduring international collaboration on AI safety, with domestic and international approaches developing in tandem. The UK’s AI Safety Institute will partner with other countries to facilitate collaboration between governments on AI safety testing and governance, and to help them develop their own capability.
- Next steps are set out in an AI regulation roadmap at section 5.4 of the white paper response (see the GRIP Fact Box after the author details).
Sally Mewies leads Walker Morris’ Technology Group and has over 30 years’ experience helping clients with the licensing and acquisition of all types of technology.
GRIP Fact Box: UK Government AI regulation roadmap highlights
- Progress action to promote AI opportunities and tackle AI risks:
  - From spring onwards: conduct targeted engagement on the UK cross-economy AI risk register, with plans to assess the regulatory framework.
  - In spring: release a call for views to obtain further input on our next steps in securing AI models, including a potential code of practice for AI cyber security based on the NCSC’s guidelines.
  - Establish a new international dialogue to defend democracy and address shared risks ahead of the next AI Safety Summit.
  - Launch a call for evidence on AI-related risks to trust in information, such as deepfakes.
  - Explore mechanisms for providing greater transparency.
  - During the course of the year: phase in the mandatory requirement for central government departments to use the Algorithmic Transparency Recording Standard (ATRS).
- Build out the central function and support regulators:
  - Launch a new £10m/$12.6m programme to support regulators to identify risks in their domains and develop their skillsets and approaches to AI.
  - In spring: establish a steering committee to support and guide the activities of a formal regulator coordination structure within government.
  - By April 30: ask key regulators to publish updates on their strategic approach to AI.
  - In summer: collaborate with regulators to iterate and expand the government’s initial cross-sectoral guidance on implementing the principles.
- Continue to develop our domestic policy position on AI regulation:
  - In summer: engage with a range of experts on interventions for highly capable AI systems.
  - By the end of the year: publish an update on our work on new responsibilities for developers of highly capable general-purpose AI systems.
  - Ongoing: collaborate across government and with regulators to analyse and review potential gaps in existing regulatory powers and remits.
  - Ongoing: work closely with the AI Safety Institute, which will provide foundational insights to our central AI risk assessment activities and inform our approach to AI regulation.
- Encourage effective AI adoption and provide support for industry, innovators, and employees:
  - In spring: launch the pilot AI and Digital Hub with the DRCF.
  - In spring: publish an Introduction to AI Assurance.
  - In spring: publish updated guidance on the use of AI within HR and recruitment.
  - In spring: publish a full AI skills framework that incorporates feedback from our consultation and supports employers, employees, and training providers in identifying upskilling routes for AI.
  - By the end of the year: launch the AI Management Essentials scheme to set a minimum good practice standard for companies selling AI products and services.
  - By the end of the year: publish an update on our emerging processes guide.
- Support international collaboration on AI governance:
  - Action our newly announced £9m/$11.4m partnership with the US on responsible AI as part of the DSIT International Science Partnerships Fund.
  - In spring: publish the first iteration of the International Report on the Science of AI Safety.
  - Ongoing: share new knowledge with international partners through the AI Safety Institute.
  - Ongoing: support the Republic of Korea and France in hosting the next AI Safety Summits, and consider the possible role of such summits beyond these.
  - Ongoing: continue bilateral and multilateral partnerships on AI, including through the G7, G20, Council of Europe, OECD, United Nations, and GPAI.