WASHINGTON — Tech billionaire Elon Musk warned senators in a private gathering on Capitol Hill on Wednesday that artificial intelligence poses a “civilizational risk” to governments and societies, according to a senator in the room.
His remarks came during a first-of-its-kind, closed-door summit on AI featuring a who’s who of Big Tech titans that also included Mark Zuckerberg, Bill Gates, Sundar Pichai and Sam Altman. All 100 senators were invited, though not all attended.
As he left the Capitol after several hours, Musk — the wealthiest person in the world — called the gathering “historic.” He also endorsed the idea of a new federal agency to oversee AI and repeated his warning that artificial intelligence poses a tremendous danger.
“The consequences of AI going wrong are severe, so we have to be proactive rather than reactive,” Musk told a gaggle of reporters before ducking into his waiting Tesla.
“The question is really one of civilizational risk. It’s not like … one group of humans versus another. It’s like, hey, this is something that’s potentially risky for all humans everywhere,” he added.
Asked if AI will destroy mankind, Musk paused and replied, “There is some chance that is above zero that AI will kill us all. I think it’s low. But if there’s some chance, I think we should also consider the fragility of human civilization.”
Sen. Cynthia Lummis, R-Wyo., who attended the private gathering, said she was so struck by Musk’s phrase “civilizational risk” that she wrote it down in her notebook, which she showed to two reporters.
Other panelists, she said, talked about the need for immigration reform to allow more high-tech workers in the U.S. and the need for standards reforms at the National Institute of Standards and Technology.
“You had everything from there to sort of the high-level comment about the civilizational risks associated with AI, which is a very 60,000-foot level remark, and it was everything in between, so I thought it was surprisingly interesting and helpful. And I’m glad I went,” Lummis said.
The bipartisan gathering, dubbed the AI Insight Forum, was hosted by Senate Majority Leader Chuck Schumer, D-N.Y., and Sens. Mike Rounds, R-S.D., Todd Young, R-Ind., and Martin Heinrich, D-N.M. More AI forums will be held through the end of the year, serving as brainstorming sessions about how lawmakers can regulate artificial intelligence.
“We got some consensus on some things … I asked everyone in the room, does government need to play a role in regulating AI and every single person raised their hand, even though they had diverse views,” Schumer told reporters Wednesday. “So that gives us a message here that we have to try to act, as difficult as the process might be.”
Zuckerberg, the CEO of Meta, did not answer questions as he left the summit, but his team provided his prepared remarks from inside the room, where he said the onus is on government to regulate AI.
“I agree that Congress should engage with AI to support innovation and safeguards,” Zuckerberg said. “This is an emerging technology, there are important equities to balance here, and the government is ultimately responsible for that.”
Altman, the CEO of ChatGPT parent company OpenAI, said he was surprised, given the format, by the broad agreement in the room on “the need to take this seriously and treat it with urgency.”
“I think people all agreed that this is something that we need the government’s leadership on,” Altman told reporters during a break. “Some disagreement about how it should happen, but unanimity this is important and urgent.”
Inside the cavernous Kennedy Caucus Room, the 22 panelists and hosting senators were seated in a U shape. On one side of the room was Musk, the CEO of Tesla, SpaceX and social media site X; on the other side of the room was Zuckerberg, who has clashed with Musk in the past and recently launched a rival to X called Threads.
The daylong, high-profile gathering had its share of skeptics in both parties. Some senators lamented that the so-called AI Insight Forum was closed to the public and the media (reporters were briefly allowed inside the room before the forum began to view the setup). Sen. Elizabeth Warren, D-Mass., said it would allow tech billionaires to lobby senators behind closed doors about one of the most critical issues facing the country and the economy.
“They’re sitting at a big round table all by themselves. All of the senators are to sit there and ask no questions,” said a frustrated Warren, who this week called on the Senate to investigate Musk’s alleged role in thwarting a Ukrainian drone attack on Russia’s naval fleet in the Black Sea last year.
Schumer dismissed the criticism, noting that three public hearings on AI have been held and that the forums include not just tech billionaires, but labor and civil rights leaders, national security experts and academics.
“It was a very productive meeting. At first blush, you would think that given all the tech people that were there, their voices would be overwhelming,” Randi Weingarten, the president of the American Federation of Teachers, said in an interview. “But what happened instead was there was a lot of consensus about how the safety needs are hugely important to really engage the innovation, and that those two things go hand in hand.”
In an exclusive interview Tuesday with NBC News, Schumer argued that doing nothing on AI is unacceptable.
“AI is going to be the most transformative thing affecting us in the next decades. It’s going to affect every aspect of life. It has tremendous potential to do some really good things: cure cancer, make our food supply better, deal with our national security, help our education. It has tremendous potential to do bad things: allow continuation of bias, throw many people out of work and even let some of our adversaries get ahead of us,” Schumer said.
“When it’s something this difficult and this pervasive and this changing — it’s changing rapidly — the average instinct of Congress is ‘Let’s ignore it; let someone else do it,’” he continued. “There is no one else to do it. We can’t be like ostriches and put our heads in the sand, because if government doesn’t involve itself in putting in some real guardrails, this thing could run amok.”
Two tech executives warned senators at a public hearing Tuesday that an emergency brake is needed for critical systems run by AI, like power grids or water supplies, to protect humans from potential harms caused by the emerging technology.
In addition to Musk, Zuckerberg, Gates and Altman, the CEOs of Google, IBM, Microsoft, Nvidia and Palantir were on hand at Wednesday’s forum, along with the heads of labor, human rights and entertainment groups. They include Elizabeth Shuler, the president of the AFL-CIO; Charles Rivkin, the chairman and CEO of the Motion Picture Association; Janet Murguía, the president of UnidosUS; and Maya Wiley, the president and CEO of the Leadership Conference on Civil & Human Rights.
Wednesday’s inaugural AI forum was scheduled to run seven hours, with a break for lunch. Schumer and Rounds moderated the discussion, with help from Heinrich and Young, aides said. Senators weren’t expected to get an opportunity to directly question the tech execs; the usually loquacious senators were instructed to submit written questions instead.
While organizers emphasize the bipartisan nature of the forum, Sen. Josh Hawley, R-Mo., said he chose not to attend.
“I think the idea that it is some great breakthrough to hear from the biggest monopolists in the world — and that they are going to share with us their great wisdom — I just think the whole framework is wrong,” said Hawley, who announced a bipartisan AI framework with Sen. Richard Blumenthal, D-Conn.
“You got to take it with a grain of salt. You got to realize that they’re interested parties, right? They stand to make a lot of money on this, which is fine,” he continued, “but you got to know that I just think the whole framing that ‘Oh, aren’t we so graced by their presence?’ — I mean, give me a break. These people are — they’ve done bad things for our country.”