OpenAI safety researcher Steven Adler announced on Monday that he had left OpenAI late last year after four years at the company. In a post shared on X, Adler criticized the race toward AGI taking shape between leading AI labs and global superpowers. “An AGI race is a very risky gamble, with huge downside,” he said. “No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.” Alignment is the process of keeping AI working toward human goals and values, not against them.

Adler worked as an AI safety lead at OpenAI, leading safety-related research and programs for product launches and speculative long-term AI systems, per his LinkedIn profile. He is also listed as an author on several of OpenAI’s blog posts.

In the X post announcing his exit from the company, he called his time at OpenAI “a wild ride with lots of chapters,” adding that he would “miss many parts of it.” However, he said he was personally “pretty terrified by the pace of AI development.” “When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?” he said.
Full story: Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI.