Expect more focus on AI in the coming weeks as the UK Government opens an inquiry into how large language models (LLMs) are being deployed, with leading figures in the AI sector lined up to give evidence.
The inquiry, which will hold its first oral evidence session on Tuesday 12 September, was set up to “examine large language models and what needs to happen over the next 1-3 years to ensure that the UK can respond to their opportunities and risks”. The value of the global generative AI market is projected to reach £51.8bn ($64.7bn) by 2028.
Experts scheduled to give evidence at the session are:
- Ian Hogarth, chair, Foundation Model Taskforce;
- Jean Innes, incoming CEO, The Alan Turing Institute;
- Professor Neil Lawrence, DeepMind professor of machine learning, University of Cambridge; and
- Ben Brooks, head of public policy, Stability AI.
Open and closed language models
The session has been structured to look at a number of areas, including how LLMs differ from other forms of AI and how they are likely to evolve; the differences between open and closed language models; and the role for government in responding to the opportunities and risks LLMs present.
Hogarth warned recently that cybercriminals could use AI to attack the country’s National Health Service, and last month the National Risk Register classified AI as a long-term security threat to the UK’s safety and critical systems.
The session will take place in the wake of the first progress report from the UK Government’s Frontier AI Taskforce, which has pulled together an expert advisory board and set up partnerships with technical organizations in just 11 weeks. Its ambitious aim is to “give AI researchers inside the Government the same resources to work on AI safety that they would find at leading companies”.
Then, on November 1, the UK hosts the Global AI Safety Summit. AI, and how to regulate and deploy it, is set to be a major topic over the next few months.