‘House of David’ Creator Explains Using AI to Create Origin Sequence

The newly released episode 6 of Amazon’s “House of David” opens with a mythical origin sequence for the character Goliath, created in an inventive way by incorporating generative AI tools in the production workflow.

“The entire scene is driven with generative AI tools as the horsepower to the scene. What we found is that these tools work when combined with traditional tools,” Jon Erwin, series creator and co-showrunner, tells Variety. He relates that, originally, this roughly 90-second sequence as scripted was much smaller in scope, but then the filmmakers began to envision more ambitious visuals and incorporate AI applications. “We had gotten permission to use the technology, we had formed a team around it, and [thought] let’s really go for it.”

In all, season one incorporates 72 shots that involved the use of AI, and it was a learning process for the filmmakers. “People can make cool images, certainly, and use these tools sort of as consumers, but to really use them in professional ways, it really is how you stack tools together that matters,” Erwin asserts. In the case of “House of David,” the team assembled at his indie studio Wonder Project used AI tools including Midjourney for image augmentation; Magnific and Topaz for up-rezzing and adding details; and Runway and Kling for generating video, combined with traditional tools such as Unreal Engine, Nuke, and Adobe Photoshop and After Effects.

“They all have unique strengths,” Erwin says of the AI tools, citing Runway’s image-to-video application as an example. “Because I don’t want to generate imagery from scratch. I want to augment assets from my show that I’ve already created. So I can start with images from my show or things that we’ve created – things that we own.”

Speaking of the “House of David” origin sequence, he says the combination of AI tools “allowed us to create these photoreal visuals and tell the story, but doing it in a budget and time frame that we could afford.” He declined to share the budget, but said that without AI the sequence would have involved shooting in a desert and/or high-end VFX, both of which “would have been just far outside the budget parameters of the show.”

Speed was also a consideration for the sequence. “You can dream in real time and collaborate on the material much quicker. And so that enabled us to get through this scene in a couple weeks.” He suggests that the sequence might have otherwise taken “four or five months in a traditional process.”

The origin story was an ambitious undertaking, though it also served as an opportunity to experiment. “The sequence in episode 6 is truly a combination of all the difficult things in VFX,” Erwin says, citing photoreal digital characters and simulations such as rain, smoke and wind. “The angel wings have feathers on them. Just the asset build alone would have taken forever,” he adds. “It’s a lot of difficult things in one scene. Character consistency and environmental consistency were really the trick, and the hard part about the sequence is how do you make all of these shots match, and how do you make the sequence human and feel organic.

“I was blown away that a lot of these AI tools do physics simulations – water, rain, atmosphere, smoke, wind, far better than any other VFX tool I’ve ever used. And so I was blown away with what we were able to achieve.”
