
Ethical and Artistic AI? Independent Filmmakers Discuss

The tech companies behind AI continue to promise filmmakers the ability to do more with less. Historically, that pitch has been a draw for indie filmmakers. From the advent of sync-sound 16mm film cameras in the 1960s to digital video in the late 1990s and inexpensive DSLR cameras in the 2000s, independent and non-fiction filmmakers have been at the forefront of experimenting with new technologies to find new ways to tell stories, many of which premiered at Sundance. But when it comes to AI, many of those at the 2025 edition of the festival are highly skeptical it can be a tool for making personal films, while the ethical issues surrounding it make it a virtual non-starter for many.


This question of “How Filmmakers Can Ethically and Artistically Use AI” was the topic of a panel at the IndieWire Sundance studio, presented by Dropbox. Filmmaker and Asteria founder Bryn Mooser, Archival Producers Alliance co-director Stephanie Jenkins, journalist/director David France, and filmmaker and Promise co-founder/CCO Dave Clark covered a wide array of topics. They offered some practical advice to filmmakers, all of which you can watch in the video at the top of the page.

Having seen up close the rapid development of GenAI in the last two years, along with the greed of the corporations pushing these billion-dollar technologies, the panelists did little to calm fears of the dangers AI posed to the art form. That danger, though, is why most believed it was incumbent on independent filmmakers to experiment with AI.

“I want the filmmakers to lead this revolution,” said Clark. “Because that’s the only way that this is actually going to benefit the industry. If we let someone who doesn’t even understand storytelling all of a sudden start telling stories, then you’re going to see some problems.”

For France, a celebrated journalist turned Oscar-nominated documentarian, the ethical dangers of the technology are also what can make it a powerful creative tool for good.

“I’ve been really fascinated by the kind of the dual ethical morality of technology, and how to find ways to make it work for the good,” said France.

As an example, France pointed to his documentary “Welcome to Chechnya,” which used the same deepfake technology being weaponized for so much evil — revenge porn, fake news, identity theft — to protect the identity of his LGBTQ subjects, who were refugees escaping anti-gay purges in Russia.

“I call it ‘DeepTruths’ because what it did was by changing people’s faces, it allowed them to tell their stories, and it allowed us to embed with them and experience their journey as they were running from this horrible regime,” said France. “It didn’t impact any aspect of what they said, or how they said it, or what they felt in their micro-expressions to carry through, thanks to AI.”

A still from Free Leonard Peltier by Jesse Short Bull and David France, an official selection of the 2025 Sundance Film Festival. Courtesy of Sundance Institute.
‘Free Leonard Peltier’

In his new film, “Free Leonard Peltier,” which explores the history of the eponymous Indigenous activist and his conflict with the FBI that resulted in Peltier’s 50 years of incarceration, France returned to AI to fill holes in the historical archive, partnering with Mooser’s Asteria to use GenAI to produce re-enactments he couldn’t afford to shoot.

For Jenkins, who helped lead the creation of the Archival Producers Alliance’s guidelines for how documentarians should use GenAI, the existential threat of the new technology is that its photorealistic results can be mistaken for primary sources and enter the historical record as such. France and his team followed the guidelines by making sure the re-enactments could not visually be confused for archive, while also disclosing and openly discussing their use of GenAI.

“Trust is something that is really hard to gain, but really easy to lose, and in documentary, it plays with truth. That’s the amazing thing about our genre,” said Jenkins. “But nobody wants to be fooled, so [the APA] thinks it’s important, especially in this transition time when AI is new, any time [AI could be confused for real], just label it, let people know, talk about it in the press [gesturing to what France was doing on the panel], and that way people are going to trust you more.”

In “Free Leonard Peltier,” France also used AI sound tools to get around another constraint — the FBI wouldn’t allow Peltier to sit for any interviews. All France had was audio from poorly recorded phone conversations from inside the prison, along with Peltier’s own writing. But with Peltier’s permission, France used (and fully disclosed) AI to generate high-quality audio that sounded exactly like Peltier’s voice, drawing on the activist’s own words and writing. This discussion of AI sound in “Free Leonard Peltier” led the panel to debate the recent controversy surrounding the use of AI to fix the Hungarian accents of actors in the Oscar-nominated film “The Brutalist.”

“I think with that one, the issue was transparency,” said Jenkins. “Maybe if there had been a line, or if they had talked about it on their press tour, or maybe there was an accent coach that talked about it, I don’t think it would have necessarily been as much of a problem, but it definitely does come down to education.”

THE BRUTALIST, Adrien Brody, 2024. © A24 / Courtesy Everett Collection
‘The Brutalist’

The lack of open discussion about the use of AI during this early adoption phase of the technology was something each panelist felt was only exacerbating problems. Mooser worried that controversies like that of “The Brutalist” were doubly dangerous: not only would they motivate more filmmakers to avoid disclosing their use of AI, but a minor and rather insignificant use of AI being magnified by the Oscar race only served to distract from the real dangers AI poses to creators.

“I think that it’s always important to think about, when you’re talking about this, you pay attention to what catches fire that people are upset about,” said Mooser, who then listed off examples of major AI developments that posed huge ethical and creative concerns but went largely undebated and unchallenged. “There’s a lot of things that are really dangerous about AI … and it was a problem in the [WGA and SAG] strikes too, which was that there was not enough information about the things that we should be really mad about.”

Clark agreed with Mooser, adding that the “Brutalist” controversy was a sign of “what a dirty word” AI had become.

“From my standpoint, as long as the artist is making those decisions [of how to use AI],” said Clark. “Because if you’ve seen ‘The Brutalist,’ I mean obviously you can see that Adrien Brody is a world class performer and artist, and the fact that would take away from [his] performance, even though they admitted that it was only a couple lines of dialogue to fix his Hungarian accent, that to me shows you what we need to fix is this conversation. Because if there’s [any project to make someone anti-AI], it should not be against ‘The Brutalist,’ which is an incredible film shot on film.”

