US pushes for global protections against threats posed by AI
The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, March 21, 2023, in Boston. [AP Photo]

U.S. Vice President Kamala Harris said Wednesday that leaders have "a moral, ethical and societal duty" to protect people from the dangers posed by artificial intelligence, as she leads the Biden administration’s push for a global AI roadmap.

Analysts, while commending the effort, say human oversight is crucial to preventing the weaponization or misuse of the technology, which has applications in everything from military intelligence to medical diagnosis to making art.

"To provide order and stability in the midst of global technological change, I firmly believe that we must be guided by a common set of understandings among nations," Harris said. "And that is why the United States will continue to work with our allies and partners to apply existing international rules and norms to AI, and work to create new rules and norms."

Harris also announced the founding of the government’s AI Safety Institute and released draft policy guidance on the government’s use of AI, along with a declaration on its responsible military applications.

Just days earlier, President Joe Biden – who described AI as "the most consequential technology of our time" – signed an executive order establishing new standards, including requiring that major AI developers report their safety test results and other critical information to the U.S. government.

AI is increasingly used for a wide range of applications. On Wednesday, for example, the Defense Intelligence Agency announced that its AI-enabled military intelligence database will soon achieve "initial operational capability."

And perhaps on the opposite end of the spectrum, some programmer decided to "train an AI model on over 1,000 human farts so it would learn to create realistic fart sounds."

Like any other tool, AI is subject to its users’ intentions and can be used to deceive, misinform or hurt people – something that billionaire tech entrepreneur Elon Musk stressed on the sidelines of the London summit, where he said he sees AI as "one of the biggest threats" to society. He called for a "third-party referee."

Earlier this year, Musk was among the more than 33,000 people to sign an open letter calling on AI labs "to immediately pause for at least six months the training of AI systems more powerful than GPT-4."

"Here we are, for the first time, really in human history, with something that's going to be far more intelligent than us," said Musk, who is looking at creating his own generative AI program. "So it's not clear to me we can actually control such a thing. But I think we can aspire to guide it in a direction that's beneficial to humanity. But I do think it's one of the existential risks that we face and it's potentially the most pressing one."


Industry leaders such as OpenAI CEO Sam Altman voiced similar concerns in testimony before congressional committees earlier this year.

"My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world. I think that could happen in a lot of different ways," he told lawmakers at a Senate Judiciary Committee hearing on May 16.

That’s because, while "AI has been used to do pretty remarkable things" – especially in the field of scientific research – it is still limited by its creators, said Jessica Brandt, policy director for the AI and Emerging Technology Initiative at the Brookings Institution.

"It's not necessarily doing something that humans don't know how to do, but it's making discoveries that humans would be unlikely to be able to make in any meaningful timeframe, because they can just perform so many calculations so quickly," she told VOA on Zoom.

And, she said, "AI is not objective, or all-knowing. There's been plenty of studies showing that AI is really only as good as the data that the model is trained on and that the data can have or reflect human bias. This is one of the major concerns."

Or, as AI Now Executive Director Amba Kak said earlier this year in a magazine interview about AI systems: "The issue is not that they’re omnipotent. It is that they’re janky now. They’re being gamed. They’re being misused. They’re inaccurate. They’re spreading disinformation."

Analysts say these government and tech officials don’t need a one-size-fits-all solution, but rather an alignment of values – and, critically, human oversight and moral use.

"It's OK to have multiple different approaches, and then also, where possible, coordinate to ensure that democratic values take root in the systems that govern technology globally," Brandt said.

Industry leaders tend to agree, with Mira Murati, OpenAI’s chief technology officer, saying: "AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values."

Analysts watching regulation say the U.S. is unlikely to come up with a single, coherent solution for the problems posed by AI.

"The most likely outcome for the United States is a bottom-up patchwork quilt of executive branch actions," said Bill Whyman, a senior adviser in the Strategic Technologies Program at the Center for Strategic and International Studies. "Unlike Europe, the United States is not likely to pass a broad national AI law over the next few years. Successful legislation is likely focused on less controversial and targeted measures like funding AI research and AI child safety."