OpenAI has been showcasing Sora, its artificial intelligence video-generation model, to media industry executives in recent weeks to drum up enthusiasm and ease concerns about the technology's potential to disrupt their sectors. The Financial Times wanted to put Sora to the test, alongside the systems of rival AI video-generation companies Runway and Pika. We asked executives in advertising, animation and real estate to write prompts to generate videos they might use in their work. We then asked them how such technology might transform their jobs in the future.

Sora has yet to be released to the public, so OpenAI tweaked some of the prompts before sending back the resulting clips, saying this produced better-quality videos. On Runway and Pika, both the initial and the tweaked prompts were entered using each company's most advanced model. Here are the results.

Charlotte Bunyan, co-founder of Arq, a brand advertising consultancy:

"Sora's presentation of people was consistent, while the fantastical playground itself was faithfully rendered in terms of the descriptions of the different elements, which the others failed to generate.

"It is interesting that OpenAI changed 'children' to 'people', and I would love to know why. Is it a safeguarding question? Is it harder to represent children because the models haven't been trained on as many? They opted for 'people', yet what Sora actually generated was a Caucasian man with a beard and brown hair, which raises questions about bias.

"Pika felt surreal, as if you were in a trippy film moment. The children's version is much better than the League Of Gentlemen surrealness of the adult iteration, but the rest of the environment lacks details from the prompt. I do have a certain fondness for the vibrancy of [Pika's children's] version, as it conveys a sense of joy and happiness more strongly than any of the others."
Full report: How good is OpenAI’s Sora video model — and will it transform jobs?