

Behind the Curtain: What if they’re right?

During our recent interview, Anthropic CEO Dario Amodei said something arresting that we just can’t shake: Everyone assumes AI optimists and doomers are simply exaggerating. But no one asks: “Well, what if they’re right?”

We wanted to apply this question to what seems like the most outlandish AI claim: that in the coming years, large language models could exceed human intelligence, operate beyond our control, and threaten human existence. That probably strikes you as science-fiction hype. But Axios research shows at least 10 people have quit the biggest AI companies over grave concerns about the technology’s power, including its potential to wipe out humanity.

If it were one or two people, the cases would be easy to dismiss as nutty outliers. But several top executives at several top companies, all with similar warnings? It seems worth asking: Well, what if they’re right?

Many AI enthusiasts and optimists argue the same thing. They, too, see a technology starting to think like humans, and imagine models a few years from now starting to act like us, or even beyond us. Elon Musk has put the risk as high as 20% that AI could destroy the world. Well, what if he’s right?

There’s a term the critics and optimists share: p(doom), the probability that superintelligent AI destroys humanity. So Musk would put p(doom) as high as 20%. On a recent podcast with Lex Fridman, Google CEO Sundar Pichai, an AI architect and optimist, conceded: “I’m optimistic on the p(doom) scenarios, but … the underlying risk is actually pretty high.” Pichai argued, however, that the higher the perceived risk gets, the more likely humanity is to rally to prevent catastrophe. Fridman, himself a scientist and AI researcher, put his own p(doom) at about 10%.

Full commentary: What if predictions of humanity-destroying AI are right?