Google, Meta, Microsoft, OpenAI and others agree to AI safeguards
Seven of the top American companies developing AI have agreed to work with the U.S. government, committing to a set of principles designed to ensure public trust in AI, the White House said Friday.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI all signed off on the commitments to make AI safe, secure, and trustworthy. In May, the Biden administration said it would meet with leading AI developers to ensure their products were consistent with U.S. policy.
The commitments are not binding, and there are no penalties for failing to adhere to them. Nor can the policies retroactively affect AI systems that have already been deployed: one of the provisions, for example, commits the companies to testing AI for security vulnerabilities, both internally and externally, before release.
Still, the new commitments are designed to reassure the public (and, to some extent, lawmakers) that AI can be deployed responsibly. The Biden administration had already proposed using AI within government to streamline tasks.
Perhaps the most immediate effects will be felt in AI art, as all of the parties agreed to implement digital watermarking that identifies a piece of art as AI-generated. Some services, such as Bing’s Image Creator, already do this. All of the signees also committed to using AI for the public good, such as cancer research, and to identifying areas of appropriate and inappropriate use. The commitments don’t define those areas, but they could include the existing safeguards that prevent ChatGPT, for example, from helping to plan a terrorist attack. The AI companies also pledged to preserve data privacy, a priority Microsoft has upheld with the enterprise versions of Bing Chat and Microsoft 365 Copilot.
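The agreement doesn’t specify a watermarking scheme, and production systems embed robust, imperceptible signals rather than anything this simple. Still, as a rough illustration of the idea, here is a minimal Python sketch (using the Pillow library) that hides a short tag in an image’s least-significant bits and reads it back. The "AI-GEN" tag and both function names are illustrative assumptions, not any vendor’s actual format.

```python
# Toy least-significant-bit (LSB) watermark: hides a machine-readable
# tag in pixel data. Real provenance watermarks are far more robust;
# this only sketches the concept behind the companies' pledge.
from PIL import Image  # pip install Pillow

TAG = "AI-GEN"  # hypothetical marker, not a real standard

def embed_watermark(img: Image.Image, tag: str = TAG) -> Image.Image:
    """Write the tag's bits into the red channel's least-significant bits."""
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    out = img.convert("RGB").copy()
    width, _ = out.size
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = out.getpixel((x, y))
        out.putpixel((x, y), ((r & ~1) | bit, g, b))  # overwrite red LSB
    return out

def read_watermark(img: Image.Image, length: int = len(TAG)) -> str:
    """Recover `length` bytes back out of the red channel's LSBs."""
    width, _ = img.size
    bits = []
    for i in range(length * 8):
        x, y = i % width, i // width
        r, _, _ = img.getpixel((x, y))
        bits.append(str(r & 1))
    chunks = ["".join(bits[i:i + 8]) for i in range(0, len(bits), 8)]
    return bytes(int(c, 2) for c in chunks).decode(errors="replace")

if __name__ == "__main__":
    marked = embed_watermark(Image.new("RGB", (64, 64), "white"))
    print(read_watermark(marked))  # prints "AI-GEN"
```

A watermark like this is fragile (re-encoding the image destroys it), which is exactly why the signatories would need sturdier, standardized schemes for the commitment to matter in practice.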
All of the companies have committed to internal and external security testing of their AI systems before release, and to sharing information with industry, governments, the public, and academia on managing AI risks. They also pledged to give third-party researchers access to discover and report vulnerabilities.
Microsoft president Brad Smith endorsed the new commitments, noting that Microsoft has been an advocate for establishing a national registry of high-risk AI systems. (A California congressman has called for a federal office overseeing AI.) Google also disclosed its own “red team” of hackers who try to break AI using techniques like prompt attacks, data poisoning, and more.
“As part of our mission to build safe and beneficial AGI, we will continue to pilot and refine concrete governance practices specifically tailored to highly capable foundation models like the ones that we produce,” OpenAI said in a statement. “We will also continue to invest in research in areas that can help inform regulation, such as techniques for assessing potentially dangerous capabilities in AI models.”
Author: Mark Hachman, Senior Editor
As PCWorld’s senior editor, Mark focuses on Microsoft news and chip technology, among other beats. He previously wrote for PCMag, BYTE, Slashdot, eWEEK, and ReadWrite.