Welcome to the next “holy cow” moment in AI, where your words transform into smooth, highly realistic, detailed video. So long, reality! Thanks for all the good times.

OpenAI won’t publicly release Sora, its new text-to-video tool, until later this year. Still, it’s already showing us how easy it could be to replace many of the people involved in video production with some well-written prompts and a lot of processing power. I sent the company a few prompts of my own, because who doesn’t want to see a mermaid reviewing a smartphone with her crab assistant? Or a bull strolling daintily through a china shop?

When OpenAI began previewing videos made with the generative-AI tool last month, the internet understandably lost its mind. Other AI video technology has produced choppy, low-resolution clips. These looked like something out of a nature documentary or a big-budget film. Sora brings new intensity to the now-familiar AI Feelings Loop: amazement at the capability, followed by fear for society.

OpenAI Chief Technology Officer Mira Murati assured me the company is taking a measured approach to releasing this powerful tool. That doesn’t mean everything’s gonna be all right.

I’d already been wowed by Sora-generated videos: drone shots of the Amalfi Coast, a corgi with a selfie stick and an animated otter on a surfboard. So I asked OpenAI for something more familiar to my life: “Two professional women, both with brown hair and in their 30s, sitting down for a news interview in a well-lit studio.” The mouth and hair movements, the details on the leather jacket: it all looks so real. Murati said the 20-second, 720p clip took a few minutes to generate. There’s no sound yet; she said they plan to add that eventually.
Full feature: CTO Mira Murati explains the company’s new Sora AI video tool and how it plans to roll it out.