White House Pushes Tech C.E.O.s to Limit Risks of A.I.

In the White House’s first gathering of A.I. companies, Vice President Kamala Harris told the leaders of major tech companies they had a “moral” obligation to keep products safe.

Vice President Kamala Harris and other administration officials met on Thursday with the chief executives of OpenAI, Google, Microsoft and Anthropic to discuss artificial intelligence. Credit: Doug Mills/The New York Times

David McCabe reports on tech policy from Washington.

The White House on Thursday pushed Silicon Valley chief executives to limit the risks of artificial intelligence, in the administration’s most visible effort to confront rising questions and calls to regulate the rapidly advancing technology.

For roughly two hours in the White House’s Roosevelt Room, Vice President Kamala Harris and other officials told the leaders of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to seriously consider concerns about the technology. President Biden also briefly stopped by the meeting.

“What you’re doing has enormous potential and enormous danger,” Mr. Biden told the executives.

It was the first White House gathering of major A.I. chief executives since the release of tools like ChatGPT, which have captivated the public and supercharged a race to dominate the technology.

“The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products,” Ms. Harris said in a statement. “And every company must comply with existing laws to protect the American people.”

The meeting signified how the A.I. boom has entangled the highest levels of the American government and put pressure on world leaders to get a handle on the technology. Since OpenAI released ChatGPT to the public last year, many of the world’s biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research. Venture capitalists have poured billions of dollars into A.I. start-ups.

But the A.I. explosion has also raised fears about how the technology might transform economies, shake up geopolitics and bolster criminal activity. Critics have worried that powerful A.I. systems are too opaque, with the potential to discriminate, displace people from jobs, spread disinformation and perhaps even break the law on their own.

Even some of the makers of A.I. have warned against the technology’s consequences. This week, Geoffrey Hinton, a pioneering researcher who is known as a “godfather” of A.I., resigned from Google so he could speak openly about the risks posed by the technology.

Mr. Biden recently said that it “remains to be seen” whether A.I. is dangerous, and some of his top appointees have pledged to intervene if the technology is used in a harmful way. Members of Congress, including Senator Chuck Schumer of New York, the majority leader, have also moved to draft or propose legislation to regulate A.I.

Sundar Pichai, Google’s chief executive, left, and Sam Altman, OpenAI’s chief executive, arriving at the White House to meet with the vice president. Credit: Evan Vucci/Associated Press

That pressure to regulate the technology has been felt in many places around the world. Lawmakers in the European Union are in the midst of negotiating rules for A.I., though it is unclear how their proposals will ultimately cover chatbots like ChatGPT. In China, the authorities recently demanded that A.I. systems adhere to strict censorship rules.

“Europe certainly isn’t sitting around, nor is China,” said Tom Wheeler, a former chairman of the Federal Communications Commission. “There is a first mover advantage in policy as much as there is a first mover advantage in the marketplace.”

Mr. Wheeler said all eyes were on what actions the United States might take. “We need to make sure that we are at the table as players,” he said. “Everybody’s first reaction is, ‘What’s the White House going to do?’”

Yet even as governments call for tech companies to take steps to make their products safe, A.I. companies and their representatives have pointed back at governments, saying elected officials need to take steps to set the rules for the fast-growing space.

Attendees at Thursday’s meeting included Google’s chief executive, Sundar Pichai; Microsoft’s chief executive, Satya Nadella; OpenAI’s chief executive, Sam Altman; and Anthropic’s chief executive, Dario Amodei. Some of the executives were accompanied by aides with technical expertise, while others brought public policy experts, an administration official said.

Google, Microsoft and OpenAI declined to comment after the White House meeting. Anthropic did not immediately respond to requests for comment.

“The president has been extensively briefed on ChatGPT and knows how it works,” the White House press secretary, Karine Jean-Pierre, said at Thursday’s briefing.

The White House said it had impressed on the companies that they should address the risks of new A.I. developments. In a statement after the meeting, the administration said there had been “frank and constructive discussion” about the desire for the companies to be more open about their products, the need for A.I. systems to be subjected to outside scrutiny and the importance that those products be kept away from bad actors.

“Given the role these C.E.O.s and their companies play in America’s A.I. innovation ecosystem, administration officials also emphasized the importance of their leadership, called on them to model responsible behavior, and to take action to ensure responsible innovation and appropriate safeguards, and protect people’s rights and safety,” the White House said.

Hours before the meeting, the White House announced that the National Science Foundation plans to spend $140 million on new research centers devoted to A.I. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards “the American people’s rights and safety,” adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference.

The meeting and announcements build on earlier efforts by the administration to place guardrails on A.I.

Last year, the White House released what it called a blueprint for an A.I. bill of rights, which said that automated systems should protect users’ data privacy, shield them from discriminatory outcomes and make clear why certain actions were taken. In January, the Commerce Department also released a framework for reducing risk in A.I. development, which had been in the works for years.

But concrete steps to rein in the technology in the country may be more likely to come first from law enforcement agencies in Washington. In April, a group of government agencies pledged to “monitor the development and use of automated systems and promote responsible innovation,” while punishing violations of the law committed using the technology.

In a guest essay in The Times on Wednesday, Lina Khan, the chair of the Federal Trade Commission, said the nation was at a “key decision point” with A.I. She likened the technology’s recent developments to the birth of tech giants like Google and Facebook, and she warned that, without proper regulation, the technology could entrench the power of the biggest tech companies and give scammers a potent tool.

“As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself,” she said.

Katie Rogers contributed reporting.

David McCabe covers tech policy. He joined The Times from Axios in 2019. More about David McCabe

A version of this article appears in print on  , Section B, Page 1 of the New York edition with the headline: White House Pushes Tech To Put Limits On A.I. Risks.