
As OpenAI Unveils Sora, Hollywood Crews Debate Its Use

Picture this: In a future not too far away, HBO is on the fence about whether to greenlight a new Game of Thrones spinoff. So instead of dumping tens of millions of dollars into shooting a pilot it might wind up passing on, it uses a generative artificial intelligence system trained on its library of shows to create a rough cut in the style of the original. It ultimately decides not to move forward with the title. That process, without AI, cost HBO a great deal of cash and time when it weighed a potential successor to Thrones in 2018. A cast headed by Naomi Watts was assembled and massive new sets were built. All in all, HBO spent roughly $35 million to shoot a pilot that never saw the light of day. The cost of doing it with AI? A fraction of that figure.

The role of AI in the entertainment industry was a sticking point in talks during last year’s dual strikes by actors and writers, with the unions eventually negotiating guardrails on its use. But the kind of tech capable of overhauling traditional production processes and outright replacing skilled workers was still thought to be years away.

Enter OpenAI’s Sora, which was unveiled Feb. 15 and marks the Sam Altman-led startup’s first major encroachment into Hollywood. The system can seemingly produce high-quality videos of complex scenes with multiple characters and an array of shot types, rendering subjects with mostly accurate detail in relation to their backgrounds. A demo touted short videos the company said were generated in minutes from text prompts of just a couple of sentences, including a movie trailer of an astronaut traversing a desert planet and an animated scene of an expressive cat-like creature kneeling beside a melting red candle. “In the current iteration, there are still a lot of weird quirks, like objects randomly appearing and disappearing and changing shapes, so I don’t think it would be suitable for high-production-value television or cinema,” says AI researcher Gary Marcus. “It’s great for quick prototypes, though.”

Concept artists and workers in VFX, among other positions seen as threatened by AI, are taking stock of potential displacement down the road if the tech keeps advancing at its current pace. But some are mostly unconcerned, having already incorporated AI tools into their workflows. “The industry will always look for the lowest-cost way of doing anything,” says David Stripinis, a VFX industry veteran who has worked on Avatar, Man of Steel and Marvel titles. “I’m going to use this tech because I have two choices: Embrace it or try to hold back the dam with hope.”

Others warn that the rise of text-to-video systems like Sora will hasten the elimination of jobs. “This is going to become a wild race to the bottom where no one in labor wins,” says Karla Ortiz, a concept artist who has worked on Marvel movies and is credited with the main character design for Doctor Strange. “It’s going to demolish our industry.” 

A study published in January that surveyed 300 leaders across the entertainment industry reported that three-fourths of respondents indicated AI tools had supported the elimination, reduction or consolidation of jobs at their companies. Over the next three years, the study estimates, nearly 204,000 positions will be adversely affected. Sound engineers, voice actors and workers in VFX and other postproduction jobs were identified as vulnerable.

Even now, art departments are electing to use AI art generators in concept design. Several artists who spoke with The Hollywood Reporter say Midjourney is being used to come up with initial renderings and that they are later asked to “clean up” those works, lowering their billed hours and the total pool of available jobs. Another application: previsualizing a VFX shot before actually sinking money into it. “It’s going to be everywhere,” Stripinis says. “The biggest problem you have in VFX is that people don’t know what they want. And when they tell you what you did isn’t right, what you’ve done is spend $15,000 to get the wrong idea. With this tech, the director will be able to feed footage onto a greenscreen and say, ‘This is what I want.’ ”

Shot planning and storyboarding are other areas where text-to-video systems can come into play. Sora appears to allow for the switching of shots, including close-ups, tracking and aerials, as well as the changing of shot compositions. Cinematographers may choose to chart their planned shots for a project using AI tools. One limitation with the tech, however, is the short duration of clips that can be generated. OpenAI’s Sora can make videos no longer than a minute. “That doesn’t cut it for feature films and TV,” says Jim Geduldick, a VFX supervisor with credits on Masters of the Air, Robert Zemeckis’ live-action/CG Pinocchio and Avatar: The Last Airbender who has adopted AI into his workflow. “We’d be sitting there forever rendering out 60-second chunks.”

Widespread adoption of AI tools in the moviemaking process will depend largely on how courts land on novel legal issues raised by the tech. Among the few considerations holding back further deployment of AI is the specter of a court ruling that the use of copyrighted materials to train AI systems constitutes copyright infringement. Another factor is that AI-generated works are not eligible for copyright protection. “No way that this doesn’t step in and eat people’s lunch,” says Reid Southen, a concept artist who is credited on The Hunger Games, Transformers: The Last Knight and The Woman King. “That’s just the way the business works.”

This story appeared in the Feb. 21 issue of The Hollywood Reporter magazine.

