Bourdain Deepfake Isn’t A Problem

In a 7.15 New Yorker article titled “A Haunting New Documentary About Anthony Bourdain,” Helen Rosner has revealed that director Morgan Neville resorted to a sophisticated voice-editing or voice-replicating process that some on Twitter are tut-tutting about.

It involved creating audio of passages that Bourdain wrote but never actually spoke aloud. Neville created a deepfake, an A.I. replication of Bourdain’s voice assembled from vowel and consonant splices and fragments of legit Bourdain recordings. And so we hear Bourdain “reading” the passages even though he didn’t actually say them.

Get it? The passages that we hear Bourdain reading were definitely 100% written by him, and the voice we hear reading them is definitely Bourdain’s. But he didn’t actually speak these passages and wasn’t actually recorded saying them. Neville created a highly convincing simulation.

Does someone have a problem with this? Not I, because nothing substantive was fabricated. Bourdain wrote the words and passages in question, Neville’s simulation of Bourdain’s voice reading them is “real” as far as it goes, and it’s all straight from the horse’s mouth. So what’s the problem?

If Neville had faked or invented passages that Bourdain hadn’t written, a fully justified ethical scandal would’ve erupted…but he didn’t. The words and thoughts are Bourdain’s.

If Neville had hired a Bourdain-sounding actor to read the passages and then revealed this ruse in the closing credits, nobody would’ve said boo.

But because Neville used Bourdain’s own voice instead of an actor’s, some are calling this an ethical foul. Except there was nothing wrong or even shady about what Neville did. Sophisticated, obviously, but so what? Should Neville have admitted to this in the closing credits of the film? Yes, he should have. But it’s not that big of a deal.

Canadian entertainment reporter David Friend: “The new Anthony Bourdain documentary didn’t have audio of him reading emails, so they created a fake A.I. model of his voice…and didn’t bother disclosing that in the film. We need a serious check on ethics in documentary filmmaking.”

Boston critic Sean Burns: “When I wrote my review I was not aware that the filmmakers had used an A.I. to deepfake Bourdain’s voice for portions of the narration. I feel like this tells you all you need to know about the ethics of the people behind this project.”

The Morally Corrupt Jouan Barquin: “The AI situation is unethical and deeply questionable.”

Earlier today Rosner tweeted that “if it had been a human voice double I think the reaction would be ‘huh, ok,’ but there’s something truly unsettling about the idea of it coming from a computer — which is both a logical and illogical response! I think there’s something to the idea of manipulation, enhancement, and remixing as creative tools, which — meaningfully here — is entirely consistent with how Bourdain worked.”

Passage in question from Rosner’s article: “There is a moment at the end of the film’s second act when the artist David Choe, a friend of Bourdain’s, is reading aloud an e-mail Bourdain had sent him: ‘Dude, this is a crazy thing to ask, but I’m curious,’ Choe begins reading, and then the voice fades into Bourdain’s own: ‘…and my life is sort of shit now. You are successful, and I am successful, and I’m wondering: Are you happy?’

“I asked Neville how on earth he’d found an audio recording of Bourdain reading his own e-mail. Throughout the film, Neville and his team used stitched-together clips of Bourdain’s narration pulled from TV, radio, podcasts, and audiobooks. ‘But there were three quotes there I wanted his voice for that there were no recordings of,’ Neville explained. So he got in touch with a software company, gave it about a dozen hours of recordings, and, he said, ‘I created an A.I. model of his voice.’

“In a world of computer simulations and deepfakes, a dead man’s voice speaking his own words of despair is hardly the most dystopian application of the technology. But the seamlessness of the effect is eerie.

“‘If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,’ Neville said. ‘We can have a documentary-ethics panel about it later.’”