With the UK general election almost upon us, we consider the manifesto commitments made about artificial intelligence (AI) by the main political parties (the Conservatives, Labour, the Liberal Democrats, and the Green Party).
The UK’s current approach
The EU AI Act, which is due to enter into force this August, has been described as the world’s first comprehensive AI law. In contrast with the EU, the UK has not put in place AI-specific legislation and has not created a new AI regulator or authority to enforce AI-related laws. Instead, the approach to regulating AI in the UK is the “pro-innovation” approach favoured by the current UK government.
In its white paper of March 2023, the UK government proposed that AI be regulated on a sectoral basis, with existing regulators using existing laws to regulate AI in accordance with five non-statutory principles and in a manner appropriate to the way in which AI is used in the relevant sector. To ensure consistency in the approaches taken by different regulators, they were expected to apply the five cross-sectoral principles of:
- safety, security and robustness;
- appropriate transparency and explainability;
- fairness;
- accountability and governance; and
- contestability and redress.
While the principles were established on a non-statutory basis, the UK government had said it would keep this approach under review and might, in time, introduce a statutory duty requiring regulators to have due regard to the five principles. It also set up a central function to drive coherence in its regulatory approach and to address regulatory gaps.
“It is clear that some mandatory measures will ultimately be required across all jurisdictions to address potential AI-related harms [and] ensure public safety.”
UK government, February 2024
In February 2024, in response to its March 2023 consultation on the proposed ‘pro-innovation’ approach (the Consultation Response), the UK government reiterated its support for regulation on a sectoral basis. However, the UK government also acknowledged that “the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured” and stated that “it is clear that some mandatory measures will ultimately be required across all jurisdictions to address potential AI-related harms [and] ensure public safety”.
At that stage, the UK government said that it would not legislate before ensuring it properly understood the risks and appropriate mitigations, and that it would legislate only when it was confident that doing so was the right thing to do.
In addition, the Consultation Response set out the case for a set of “targeted binding requirements on the small number of organisations developing highly capable general-purpose AI systems” to ensure that those organisations were accountable for developing such technologies in a way which is sufficiently safe.
In April 2024, there were reports that the UK government was already reconsidering the need for AI legislation, with the Department for Science, Innovation and Technology said to be actively canvassing ideas. However, it was unclear whether the legislation being considered was intended to be comprehensive or merely to target those organisations developing highly capable general-purpose AI systems.
Artificial Intelligence (Regulation) Bill
On May 10, 2024, the Artificial Intelligence (Regulation) Bill received its third reading in the House of Lords (having been introduced as a private members’ bill in the House of Lords in November 2023) and was due to go to the House of Commons for consideration. Amongst other things, the bill proposed the creation of a new AI authority to ensure that relevant regulators take account of AI, a number of regulatory principles to which the AI authority should have regard, and the requirement for anyone training AI to provide the new AI authority with a record of all third-party data and IP used in that training as well as confirmation that such use was based on informed consent.
On May 22, 2024, the UK Prime Minister, Rishi Sunak, announced that a general election would take place on July 4, 2024. Before an election can be run, Parliament must be dissolved and all business in Parliament comes to an end. As a result, MPs had just two days, until the end of May 24, 2024, to vote on and pass outstanding bills. The Artificial Intelligence (Regulation) Bill was not passed during this two-day period, which means that, in order to become law, it will need to be reintroduced in the next Parliament.
The UK’s approach to regulating AI is in its relatively early stages and, as such, may be subject to change if there is a change of government after the general election.
AI and copyright
We’ve discussed previously the interplay between AI and copyright, and the approach(es) proposed by the UK government to try to strike the right balance between rights holders and AI developers.
In March 2023, the UK Intellectual Property Office had been tasked with producing a code of practice to provide guidance to support AI firms in accessing copyright works as an input to their models. The intention was that, if an AI firm committed to the code of practice, it could expect to be offered a reasonable licence by a rights holder in return. However, the UK government acknowledged that, if the code of practice was not agreed or adopted, legislation might be needed instead.
Almost a year later, the UK government confirmed in its Consultation Response that, although the IPO had formed a working group of rights holders and AI developers to examine the interplay between copyright and AI, it was clear that the working group would not be able to agree an effective voluntary code.
The UK government continued by saying that ministers from the Department for Science, Innovation and Technology and the Department for Digital, Culture, Media & Sport would “lead a period of engagement” with rights holders and AI developers, to try to ensure a workable and effective approach that would allow “the AI and creative sectors to grow together in partnership”. It stated that this would include exploring ways to provide greater transparency so that rights holders could “better understand whether content they produce is used as an input into AI models”.
Despite stating in February 2024 that further proposals on the way forward would be set out “soon”, the UK government had not indicated its proposed approach by the time the general election was announced (in May 2024).
How might the approach to regulating AI change?
We’ve looked at the 2024 manifestos of the Conservatives, Labour, the Liberal Democrats, and the Green Party to see what each political party has said about its proposed approach to AI.
Conservative Party
The Conservatives’ manifesto does not comment on the current approach to AI and, accordingly, does not indicate whether the Conservatives intend to continue with this sectoral approach or are considering the introduction of AI-specific legislation.
Instead, the manifesto says that the Conservatives would, if re-elected, take a number of actions to secure the UK’s position as a “world leader in innovation”, including continuing to invest in large-scale compute clusters in order to:
- take advantage of the potential of AI; and
- support research into the safe and responsible use of AI.
According to the manifesto, the Conservatives would also:
- double the civil service’s digital and AI expertise to improve government efficiency and transform public services;
- establish a new medtech pathway to enable the rapid adoption of “cost-effective medtech”, including AI, throughout the NHS; and
- use AI to enable doctors and nurses to spend more time on frontline patient care.
In relation to AI and copyright, the “further proposals” promised in the Consultation Response have not been included in the manifesto. Instead, the manifesto says only that the Conservatives will ensure creators are “properly protected and remunerated for their work”, whilst “making the most” of the opportunities presented by AI.
Labour Party
Despite Labour leader Keir Starmer commenting recently that a Labour government would adopt a stronger approach to AI by introducing an overarching regulatory framework, and despite the Labour manifesto stating that regulators are “ill-equipped to deal with the dramatic development of new technologies”, the manifesto does not propose a complete departure from the current sectoral approach.
Instead, it lists three specific actions, stating that Labour will:
- create a new Regulatory Innovation Office, to help regulators update regulation and co-ordinate issues that span existing boundaries. This sounds similar to the central function established under the current sectoral approach;
- introduce binding regulation on “the handful of companies developing the most powerful AI models”. This sounds similar to the proposal for binding requirements on developers of highly capable general-purpose AI models set out in the Consultation Response; and
- ban the creation of sexually explicit deepfakes.
In addition, the manifesto says that Labour will ensure its industrial strategy supports the development of the AI sector, and will remove planning barriers to new data centres.
In relation to AI and healthcare, the manifesto says that Labour will:
- transform the speed and accuracy of diagnostic services by harnessing the power of technologies like AI;
- double the number of CT and MRI scanners, to enable the NHS to use state-of-the-art scanners with embedded AI to catch cancer and other conditions at an earlier stage; and
- develop an NHS innovation and adoption strategy, including driving faster regulatory approval for new technology and medicines.
The manifesto does not indicate how Labour would resolve the tension between AI developers and rights holders in relation to the use of copyright works for training AI.
Liberal Democrats
The Liberal Democrats say in their manifesto that, if they win the election, they would make the UK a world leader in “ethical, inclusive new technology”, including AI.
According to their manifesto, the Liberal Democrats would create a “clear, workable and well-resourced” cross-sectoral regulatory framework for AI that:
- promotes innovation;
- creates certainty for AI developers, users and investors;
- in relation to AI systems in the public sector, establishes transparency and accountability; and
- ensures the use of personal data and AI is transparent, accurate, unbiased and respects privacy.
It’s not clear from the reference to the ‘creation’ of a regulatory framework if the intention is to build on the ‘pro-innovation’ approach proposed by the government’s white paper in 2023 or to go back to the drawing board.
The manifesto also says that the Liberal Democrats would negotiate the UK’s participation in the EU-US Trade and Technology Council, so that the UK can play a “leading role” in global AI regulation. In addition, the Liberal Democrats intend to work with international partners in order to reach agreement on common AI standards in relation to risk and impact assessment, testing, monitoring and audit.
If elected, the Liberal Democrats would halt the use of facial recognition surveillance and would introduce a legally binding framework to regulate all forms of biometric surveillance, with the manifesto stating that facial recognition surveillance is most likely to wrongly identify black people and women.
There is no reference to the interplay between AI and copyright, but the manifesto states that the Liberal Democrats will support the creative industries across the UK and will support “modern and flexible patent, copyright and licensing rules”. It’s not entirely clear what this is intended to address.
Green Party
While the website version of the Green Party’s manifesto does not mention AI, the (downloadable) long version discusses what the Green Party would do (or, at least, what elected Greens would aim to do) in relation to AI.
Although the Green Party recognises AI’s “enormous potential for good”, its manifesto says that elected Greens would push for a “precautionary regulatory approach” to the harms and risks associated with AI. In particular, the Green Party would align the UK’s approach with that of Europe, UNESCO and global efforts to support a co-ordinated response to future AI-related risks. Of the manifestos examined in this article, this is the only one to discuss aligning the UK’s approach with that of Europe.
According to its manifesto, the Green Party would ensure that workers’ rights and interests were respected in the event of AI leading to “significant changes” in working conditions. This is the only manifesto (out of those examined for this article) to mention respecting workers’ rights and interests in the context of AI-related changes.
In addition, elected Greens would aim to secure equitable access to the “socially and environmentally responsible benefits” of AI, while also addressing any related discrimination, bias, privacy, equality or liberty issues.
In relation to the interplay between AI and copyright, the manifesto says that the Green Party would insist on the protection of the IP of creators and would ensure that AI does not “erode the value of human creativity”.
Conclusion
It remains to be seen who will form the new UK government. What we do know, however, is that it will be expected to set out a decisive course of action for governing AI, whether that’s continuing with or refining the current sectoral approach or introducing legislation instead.
At the end of May, the House of Commons Science, Innovation and Technology Select Committee published a report on the governance of AI, calling for the UK government to broker a “fair, sustainable solution” involving a licensing framework to govern the use of copyright works for training AI models. Once formed, the new UK government will be expected to respond to this recommendation and find a workable approach for rights holders and AI developers, whether voluntary or by introducing legislation.
John Enser is a partner and the TMIC Practice Group Leader. Sam Oustayiannis is a partner and Sarah Hopton is a senior associate in the media and technology team.