ChatGPT creator OpenAI showcased what its text-to-video AI generator Sora can do in new videos published on YouTube on Monday and Tuesday.
Sora is OpenAI’s AI filmmaker. It takes any written prompt, like “two golden retrievers podcasting on top of a mountain,” and creates a video that brings the words to life. When OpenAI previewed Sora in February, its potential for deepfake video set off alarm bells for some, especially in an election year.
Though OpenAI has not yet released Sora to the general public and has granted access only to a select group, it has published multiple videos showing what Sora is capable of, from dreamscapes to skateboarding bears. The reality-bending videos illustrate Sora’s potential for filmmaking and advertising.
One video, released Tuesday, is a one-minute, 32-second film prompted by artist Manuel Sainsily and entrepreneur Will Selviz. The two generated all of the visuals with Sora but edited the footage and added the sound manually. They wanted to ask, “What if our lives are the result of intricate choices crafted long before our current existence?”
The resulting video is cohesive and compelling, moving effortlessly from the physical world to a more ethereal plane.
Another Sora video, released Monday, is entirely in black and white. It places animals in different historical contexts and shows them interacting with people: a man rides a hippopotamus like a horse, and a beaver plays a banjo.
Artist Benjamin Desai created the one-minute, one-second video, and stated that he was “excited to share this imaginative look into an alternate past powered by Sora.”
The final video, released Monday, is the longest, at two minutes and nine seconds. It was created by artist Tammy Lovin, who said it felt like “a dream come true.”
“Ever since I was a kid, I kind of had these montages and surreal visuals about certain things I was seeing in reality, and I’d picture them differently,” Lovin stated. “But since I didn’t become a producer or director, they never really came to life until now.”
Lovin’s video moves from an ocean wave to a walking scene, then to surfing through clouds, and finally to jellyfish.
Companies with access to Sora have already begun using it in public-facing projects. Toys “R” Us became the first to create a brand film with Sora in late June, giving the public a glimpse of how AI could work in commercials and promotional campaigns.
OpenAI has not revealed the exact videos that went into training Sora, but there have been reports that millions of hours of YouTube videos may have played a role.