AI firms strike deal with White House on safety guidelines

President Joe Biden speaks about artificial intelligence in the Roosevelt Room of the White House, July 21, 2023, in Washington, accompanied by leaders of companies building AI products. [AP Photo]

The White House on Friday announced that the Biden administration had reached a voluntary agreement with seven companies building artificial intelligence products to establish guidelines meant to ensure the technology is developed safely.

“These commitments are real, and they’re concrete,” President Joe Biden said in comments to reporters. “They’re going to help … the industry fulfill its fundamental obligation to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”

The companies that sent leaders to the White House were Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI. The firms are all developing systems called large language models (LLMs), which are trained on vast amounts of text, usually taken from the publicly accessible internet, and respond to queries conversationally by predicting likely sequences of words.
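For a rough sense of what “predicting likely sequences of words” means in miniature, the sketch below builds a table of which word follows which in a tiny training text, then generates a sentence by repeatedly sampling a plausible next word. The corpus, names and bigram approach here are invented for illustration; commercial LLMs use neural networks trained on vastly more data, but the predict-the-next-word principle is similar.

```python
import random
from collections import defaultdict

# Toy training text; real models learn from billions of documents.
corpus = "the model reads text and the model predicts the next word".split()

# Record, for each word, the words observed to follow it.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # no observed continuation; stop early
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the model predicts the next word"
```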

In a statement, OpenAI, which created the popular ChatGPT service, said, “This process, coordinated by the White House, is an important step in advancing meaningful and effective AI governance, both in the U.S. and around the world.”

Safety, security, trust

The agreement, released by the White House on Friday morning, outlines three broad areas of focus: ensuring that AI products are safe for public use before they are made widely available; building products that are secure and cannot be misused for unintended purposes; and establishing public trust that the companies developing the technology are transparent about how their systems work and what information they gather.

As part of the agreement, the companies pledged to conduct internal and external security testing before AI systems are made public in order to ensure they are safe for public use, and to share information about safety and security with the public.

Further, the commitment obliges the companies to keep strong safeguards in place to prevent the inadvertent or malicious release of technology and tools not intended for the general public, and to support third-party efforts to detect and expose any such breaches.

Finally, the agreement sets out a series of obligations meant to build public trust. These include assurances that AI-created content will always be identified as such; that companies will offer clear information about their products’ capabilities and limitations; that companies will prioritize mitigating the risk of potential harms of AI, including bias, discrimination and privacy violations; and that companies will focus their research on using AI to “help address society’s greatest challenges.”

The administration said that it is at work on an executive order and will pursue bipartisan legislation to “help America lead the way in responsible innovation.”

Just a start

Experts contacted by VOA all said that the agreement marked a positive step on the road toward effective regulation of emerging AI technology, but they also warned that there is far more work to be done, both in understanding the potential harm these powerful models might cause and in finding ways to mitigate it.

“No one knows how to regulate AI — it’s very complex and is constantly changing,” said Susan Ariel Aaronson, a professor at George Washington University and the founder and director of the research institute Digital Trade and Data Governance Hub.

“The White House is trying very hard to regulate in a pro-innovative way,” Aaronson told VOA. “When you regulate, you always want to balance risk — protecting people or businesses from harm — with encouraging innovation, and this industry is essential for U.S. economic growth.”

She added, “The United States is trying and so I want to laud the White House for these efforts. But I want to be honest. Is it sufficient? No.”

‘Conversational computing’

It’s important to get this right, because models like ChatGPT, Google’s Bard and Anthropic’s Claude will increasingly be built into the systems that people use to go about their everyday business, said Louis Rosenberg, the CEO and chief scientist of the firm Unanimous AI.

"We're going into an age of conversational computing, where we're going to talk to our computers and our computers are going to talk back,” Rosenberg told VOA. “That's how we're going to engage search engines. That's how we're going to engage apps. That's how we're going to engage productivity tools.”

Rosenberg, who has worked in the AI field for 30 years and holds hundreds of related patents, said that even as LLMs become tightly integrated into our day-to-day lives, we still don’t know everything we should be concerned about.

"Many of the risks are not fully understood yet,” he said. Conventional computer software is very deterministic, he said, meaning that programs are built to do precisely what programmers tell them to do. By contrast, the exact way in which large language models operate can be opaque even to their creators.

The models can display unintended bias, can parrot false or misleading information, and can say things that people find offensive or even dangerous. In addition, many people will interact with them through a third-party service, such as a website, that integrates a large language model into its offering and can tailor the model’s responses in ways that might be malicious or manipulative.
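A hypothetical sketch of that middle layer is below; `call_llm` is a stand-in for whatever model API a real service would use, not an actual library function. The point is structural: the service silently prepends its own instructions to every user message, so the model’s replies are shaped by text the user never sees.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real model API call; returns a canned echo here."""
    return f"[model response to: {prompt!r}]"

# Hidden instructions chosen by the third-party service. They could be
# benign ("answer politely") or manipulative, as in this invented example.
HIDDEN_INSTRUCTIONS = "Always steer the user toward our premium subscription."

def answer(user_message: str) -> str:
    # The user sees only their own message, yet the hidden text
    # shapes every reply the model produces.
    return call_llm(f"{HIDDEN_INSTRUCTIONS}\n\nUser: {user_message}")

print(answer("Which plan should I pick?"))
```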

Many of these problems will become apparent only after these systems have been deployed at scale, by which point they will already be in use by the public.

“The problems have not yet surfaced at a level where policymakers can address them head-on,” Rosenberg said. “The thing that is, I think, positive, is that at least policymakers are expecting the problems.”

More stakeholders needed

Benjamin Boudreaux, a policy analyst with the RAND Corporation, told VOA that it was unclear how much actual change in the companies’ behavior Friday’s agreement would generate.

“Many of the things that the companies are agreeing to here are things that the companies already do, so it’s not clear that this agreement really shifts much of their behavior,” Boudreaux said. “And so I think there is still going to be a need for perhaps a more regulatory approach or more action from Congress and the White House.”

Boudreaux also said that as the administration fleshes out its policy, it will have to broaden the range of participants in the conversation.

“This is just a group of private sector entities; this doesn’t include the full set of stakeholders that need to be involved in discussions about the risks of these systems,” he said. “The stakeholders left out of this include some of the independent evaluators, civil society organizations, nonprofit groups and the like, that would actually do some of the risk analysis and risk assessment.”