The Biden administration says it has reached a deal with seven big tech companies to put more guardrails around artificial intelligence (AI), including the development of a watermarking system to help users know they are looking at AI-generated content.
The agreement marks the White House’s latest effort to rein in what it sees as some of the risks posed by the ever-expanding technology, such as cybersecurity threats and the potential spread of dangerous misinformation.
The seven AI companies that convened at the White House and made voluntary commitments to the “safe, secure, and transparent” development of the technology were Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
The Biden Administration said that it is pursuing other avenues so America leads the way in responsible AI innovation, including a forthcoming executive order and helping Congress craft bipartisan legislation.
The AI commitments
The companies committed to internal and external security testing of their AI systems and sharing best practices for safety and attempts to circumvent safeguards with governments, civil society, and academia.
The seven also said they would invest in cybersecurity and insider-threat safeguards, especially for new and demonstration products coming online, and would support a mechanism for reporting vulnerabilities in their AI systems.
To help build the public’s trust in the technology the companies committed to publicly report on AI’s limitations, spell out appropriate and inappropriate uses, and inform the public of when content is AI-generated, such as via a watermarking system.
Finally, the companies also embraced the broader promise of the technology, committing to develop and deploy AI to help address major societal challenges, “from cancer prevention to mitigating climate change”.
International partnership
The White House said it would continue working with international partners to establish a strong global framework, and to that end, it has briefed partners in 20 other countries on the voluntary AI commitments reached on Friday.
Those countries are Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
Several of these countries have already launched international efforts of their own to rein in the risks, while embracing the promise, of AI.
Japan has spearheaded the G7 Hiroshima Process, launched at the G7 summit held in Japan in May to discuss “responsible AI” and global AI governance. The UK announced in June it will host a global AI summit bringing together key countries, leading tech companies, and researchers to agree on safety measures and how best to monitor the most significant risks from AI. And India is currently chairing the Global Partnership on AI.
President Biden convened a meeting with AI experts and researchers in San Francisco last month and hosted the CEOs of Google, Microsoft, Anthropic and OpenAI at the White House in May. After the May event, the companies requested that the White House convene a meeting focused specifically on cybersecurity threats.
The commitments outlined in last Friday’s White House fact sheet were crafted during back-and-forth communications between the AI companies, President Biden and Vice President Kamala Harris that began after that May meeting.
Last October, the Biden Administration published a Blueprint for an AI Bill of Rights designed to safeguard Americans’ rights and safety, focusing especially on preventing algorithmic bias in areas like home valuation and other unlawful discrimination.
Several US states did not wait for the federal government to act and have crafted their own legislation to help rein in the risks posed by AI.
The seven leading AI firms
On Friday, most of the companies issued statements saying they would work with the White House, while also emphasizing that the guardrails they agreed to add or enhance were voluntary.
“By moving quickly, the White House’s commitments create a foundation to help ensure the promise of AI stays ahead of its risks,” said Brad Smith, president of Microsoft, which earlier this year invested heavily in OpenAI.
OpenAI said the voluntary commitments outlined last week would “reinforce the safety, security and trustworthiness of AI technology and our services”.
Amazon said it was committed to collaborating with the White House and others on AI. “Amazon supports these voluntary commitments to foster the safe, responsible, and effective development of AI technology,” it said in a statement.
Asked by CNN’s Jake Tapper on Friday about worries he has when it comes to AI, Smith pointed to “what people, bad actors, individuals or countries will do” with the technology. “That they’ll use it to undermine our elections, that they will use it to seek to break into our computer networks. You know, that they’ll use it in ways that will undermine the security of our jobs,” he said.
But, Smith argued, “the best way to solve these problems is to focus on them, to understand them, to bring people together, and to solve them”.
Are the commitments realistic?
Common Sense Media, a child internet-safety organization, commended the White House for taking steps to establish AI guardrails, but warned that “history would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations”.
Notably, the commitments in this latest agreement are purely voluntary, and no enforcement mechanism is attached to them.
The AI tech space is a highly competitive one, and if history is any gauge the latest commitment will be a test – both of the US government’s willingness to hold the businesses accountable and private industry’s willingness to self-regulate.
“It’s a great start, but only a start,” said Gary Marcus about the voluntary commitments in an interview with the New York-based news provider Spectrum NY1. Marcus is an AI expert who testified before Congress about AI technology in June. “It’s voluntary, and we will need laws to mandate these things.”
He added: “The biggest omission is any kind of requirement on the companies to disclose their training data, which we need for many reasons, including fighting bias, understanding the models well enough to mitigate risks and compensating creators whose work is leveraged.”