On the evening of Friday, December 8, 2023, following what has been described as “marathon” talks, the Council presidency and the European Parliament’s negotiators reached a provisional agreement on the proposal for harmonized rules on artificial intelligence (AI). Until that date, there had been strong disagreement between the EU’s legislative bodies on what the Artificial Intelligence Act (EU AI Act) should look like.
If formally passed into law, the highly anticipated EU AI Act will set a global standard for the regulation of AI, a model on which other jurisdictions may seek to base their own national rules. While the legislation still needs to be formally adopted by each legislative body, businesses that operate in Europe now have a much clearer picture of what their compliance obligations will be once the EU AI Act enters into force. The EU AI Act is expected to be formally passed into law in early 2024.
How will the EU AI Act regulate business?
The EU AI Act maintains its distinctive classification of AI systems into three categories: (1) unacceptable-risk; (2) high-risk; and (3) low-risk.
1. Unacceptable risk
AI systems categorized as having unacceptable risk are prohibited outright. This will mean that no one may implement AI systems that perform the following functions:
- biometric categorization using sensitive personal data;
- untargeted scraping of facial images to create facial recognition databases;
- manipulation of human behavior to circumvent free will;
- “social scoring”; and
- exploitation of people’s vulnerabilities due to their age, disability, or social or economic situation.
However, a number of safeguards and narrow exceptions were agreed for the use of biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorization and limited to strictly defined lists of criminal offences.
2. High-risk
AI systems that fall into the “high-risk” category will need to comply with a range of obligations that span the length of the AI lifecycle. Examples of high-risk AI include:
- certain critical infrastructure, for instance in the fields of water, gas and electricity;
- medical devices;
- systems that determine access to educational institutions or are used to recruit people;
- certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes; and
- biometric identification, categorization and emotion recognition systems.
AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity.
Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Businesses will also need to prepare a “fundamental rights impact assessment” report. Most of the obligations fall upon the provider of the AI system, rather than the organisation deploying it.
3. Low-risk
AI systems that are low-risk need only comply with the duty to notify individuals when they are interacting with AI, and with the overarching principles that apply to all AI systems.
In terms of governance, national competent market surveillance authorities will supervise the implementation of the new rules at national level, while the creation of a new European AI Office within the European Commission will ensure coordination at the European level.
Perhaps most significantly, the fines for non-compliance have been increased compared to earlier versions of the AI Act, now ranging up to 35 million euro ($38.3m) or 7% of global annual turnover, whichever is higher.
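To illustrate how the “whichever is higher” cap works in practice, the sketch below computes a maximum penalty exposure. It is an illustrative calculation only, and the turnover figure used is hypothetical rather than drawn from the Act or any official guidance.

```python
# Illustrative sketch: maximum fine under the provisionally agreed EU AI Act,
# i.e. the higher of a fixed 35 million euro cap and 7% of global annual
# turnover. The turnover figure below is hypothetical.

FIXED_CAP_EUR = 35_000_000  # fixed monetary cap
TURNOVER_RATE = 0.07        # 7% of global annual turnover

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Return the maximum potential fine: whichever measure is higher."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# A hypothetical business with 1 billion euro global annual turnover:
# 7% is 70 million euro, which exceeds the 35 million euro fixed cap.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```

Under this formula, the turnover-based measure overtakes the fixed cap once global annual turnover exceeds roughly 500 million euro, so the potential exposure for larger businesses scales with revenue.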
How can businesses prepare now?
- Deal due diligence: Financial investors, including those in the private equity and venture capital sector, will need to review carefully the activities of businesses using AI during the due diligence process. Buyers will need to test whether the activities of target businesses comply with the EU AI Act and, where they do not, understand how compliance will be achieved before agreements are reached, obtaining appropriate warranties as necessary.
- AI regulatory roadmap: Businesses that use AI will need to consider whether their current or future use of AI is, or will be, compliant under the EU AI Act. For any AI that falls into the unacceptable or high-risk categories, businesses should prepare a strategy to ensure the AI and its use will comply with the new rules. For high-risk AI, for example, businesses will need to prepare for potentially burdensome compliance requirements, including carrying out mandatory fundamental rights impact assessments and putting processes in place to handle and respond to complaints from citizens.
- Ongoing compliance: Businesses that use AI will need to keep assessing whether any uses fall into the unacceptable or high-risk categories and, for any high-risk uses, how they will meet their obligations under the EU AI Act on an ongoing basis. As outlined above, failure to comply with the AI Act could result in significant financial sanctions and associated reputational risk.
UK regulatory landscape
In the UK, the direction of travel appears to remain innovation-centric, with a push to leverage existing law, regulation and guidance rather than to issue AI-specific legislation, on the basis that legislation should be tech-agnostic. Earlier in 2023, the UK government published a white paper signalling a “pro-innovation approach to AI regulation”, aligned with its ambition to remain an attractive jurisdiction in which innovative businesses can flourish.
However, UK businesses with a significant customer base in Europe will still face the challenge of accessing neighbouring EU markets, where access will depend on compliance with the EU AI Act.
In terms of existing guidance, the UK’s various regulatory bodies have published guidance and working proposals to assist companies that use AI.
- The UK’s Information Commissioner’s Office (ICO) has published a set of “best practice” guidance for data protection-compliant AI.
- The UK’s Competition and Markets Authority (CMA) has published a draft set of proposed principles to guide competitive AI markets.
- The Digital Regulation Cooperation Forum (DRCF) has launched a new advisory service to help businesses launch AI and digital innovations safely and in compliance with the existing regulatory landscape; it will continue to bring together the views of the ICO, the CMA, Ofcom and the FCA in this and other areas.
- In November 2023, a private member’s bill proposing a set of rules to regulate AI was introduced into the House of Lords. The bill is at a very early stage of the parliamentary process, and it is not yet clear whether it has, or will attract, government support.
What next?
As noted above, the EU AI Act is likely to be passed into law in early 2024; however, businesses will have some time to prepare before it begins to apply. Nevertheless, for transactions happening today that involve businesses using AI, scrutiny should be applied to any target companies that could be using unacceptable or high-risk AI, including consideration of how that AI can be made compliant with the EU AI Act (and what impact that may have on the business and its operational performance).
Preparing an AI roadmap and running an AI compliance review today will also stand businesses in good stead for when the EU AI Act comes into force.
Usman Wahid, partner, leads the Data, Digital and Technology team for KPMG Law. He is a member of the firm’s FinTech, global business services, chief information officer advisory, cloud transformation and emerging technology risk groups. Isabel Simpson, partner, is a data protection and privacy lawyer in the KPMG Law team. Annalie Grogan, senior manager, is a UK-qualified competition and foreign direct investment lawyer.