Artificial intelligence is seeping into our everyday lives, whether we like it or not. This evolving application of machine learning and algorithmic analysis raises questions about how far we even know when we are interfacing with something automated. A key issue is the method by which, and the parameters within which, a bot has been trained. Eric Schmidt, the former chief executive of Google, is launching a $125m philanthropic project to fund research into ‘hard problems’ in AI, notably its scientific limits and the risks of harm, misuse and bias.
And the UK’s Bank of England and Financial Conduct Authority launched the Artificial Intelligence Public-Private Forum at the end of 2020, to further the conversation about AI between the public and private sectors. It has just published its final report, which aims to “advance the collective understanding and promote further discussions amongst academics, practitioners and regulators to support the safe adoption of AI in financial services”.
Geopolitics has never been more important, and inevitably a global arms race has developed around this innovative technology. While China and Russia have declared their collaboration in developing AI to further their domestic and foreign policy objectives, the United States has the strongest hand in terms of AI’s academic and commercial application. It will seek to shape AI’s regulation and control to suit its continued dominance of the technology.
Standard-setter regulation
The EU showed the world that it could be a standard setter with its pervasive regulation of privacy and personal data control. The General Data Protection Regulation (GDPR) has become the foremost regulatory standard, serving as the basis for the creation and modernization of data regimes in many other national jurisdictions. The EU has now started a similar process to regulate AI.
This fascinating field is changing so rapidly, and will have such a significant impact on our lives, that the powers that govern us are inevitably extending their control. They seek to develop a regulatory environment around AI that protects basic human rights but, at the same time, does not stifle innovation that could offer breakthrough benefits to mankind.
We will regularly cover developments in what is a large and dynamic area, including:
- national and regional legislation, with particular focus on the proposed AI Bill of Rights in the US and the EU’s proposed Artificial Intelligence Act;
- guidance given to national securities regulators around the world last year by the International Organization of Securities Commissions to help them to regulate and supervise intermediaries’ and asset managers’ use of AI;
- early forays into direct regulation of firms supervised by the US Securities and Exchange Commission, spearheaded by its active chair, Gary Gensler, who has significant expertise in this area;
- ethics in AI, practical risk management as a route to self-regulation, and the impact of robots on humans, including the replacement/augmentation debate.
There are some vast challenges ahead across the spectrum of AI and its various applications in areas such as recruitment, trading, warfare, robotic employment, state oversight, profit/charity, diversity/bias, transparency, and disclosure.
We will weigh up the arguments and provide a sharp focus on the issues shaping the regulatory framework that affects the corporate users of artificial intelligence and their customers.