Parts Unknown: The Most Fascinating Facets of the Anthony Bourdain Controversy


The debate over whether it was ethical to use an AI-created voice in the new Anthony Bourdain documentary rages on. (Photo courtesy of CNN/Focus Features)

In case you missed it, last week’s disclosure by film director Morgan Neville that he used artificial intelligence to recreate the voice of deceased celebrity chef Anthony Bourdain in parts of his new documentary film, “Roadrunner,” has caused a stir.

Some people believe Neville had an ethical obligation to inform the audience that he had done so. Others believe it was well within his artistic license.

Then there are those stuck in the middle, not sure where the line in the sand should be – or if there should even be a line at all.

In the film, Neville uses the AI-created voice to read a line from an email Bourdain wrote to a friend.

On one hand, the words coming out of Bourdain’s mouth weren’t made up out of thin air – which is sometimes the case when AI is used for nefarious purposes.

On the other, the director tried to give the impression that the voice was Bourdain’s. There’s also the fact that Bourdain couldn’t give consent for his voice to be re-created.

University of Virginia media studies associate professor William Little, who teaches a course centered on AI and film, said he understands the controversy. One thing to remember, he said, is that the manipulation of voice in film goes back a long way.

“The voiceover, where you can have someone who’s actually dead narrating the film, who’s speaking from beyond the story or beyond the grave and looking back on events, is a fairly typical film noir technique,” Little said. “One could argue that Bourdain’s AI-manipulated voice represents a version of a voiceover. He is speaking, after a fashion, from beyond the grave.”

“Film will also use the device of the voice that is offscreen – you can’t locate it. It might be a ghostly voice. Coincidentally, a perfect example of such a voice is the voice of the most famous AI in cinema, HAL in ‘2001: A Space Odyssey.’ … You don’t see lips moving, there’s no body attached to it. You kind of sense where the voice is coming from, but you don’t know exactly.”

Still, Little understands the feeling of being duped that some viewers of the Bourdain film say they experienced.

For a deeper dive into the debate, UVA Today caught up with Little, whose research interests focus on genre study of film, history and philosophy of film, and film and critical race studies.


Media studies associate professor William Little teaches a course centered on artificial intelligence and film. (Photo by Sanjay Suchak, University Communications)

Q. You haven’t seen the film, but from what you’ve heard about how the director used AI to mimic Anthony Bourdain’s voice at certain junctures, never disclosing that he did so until after the fact and in a very casual way, what was your reaction?

A. I find this whole phenomenon rather fascinating, as it’s bound up with a very complicated dynamic of betrayal. For me, that’s the richest aspect of this particular issue.

In one regard, there’s the idea that the director betrayed the audience’s confidence in what they are hearing. Since Bourdain didn’t actually vocalize the words at issue, the director broke a tacit contract with the audience.

Likewise, in some quarters there is concern that the director betrayed Bourdain and his family. He couldn’t get Bourdain’s consent, and there may be some question about whether he received the family’s consent. So, I understand that audiences might feel betrayed. I’m not sure I totally agree with that point of view, but I understand it.

I would suggest that such a feeling of betrayal may be linked to a persistent, dystopian narrative about AI as constituting a threat of betrayal. It’s the idea that AI – not to mention those who develop and implement AI – is an untrustworthy tool. In this narrative, AI proves unfaithful to the purpose of improving or enhancing human experience. AI proves disloyal. It is turned against human beings, or, as numerous sci-fi films have dramatized, it turns on human beings.

There is evidence in the real world to support this suspicion about AI. For instance, the use of AI in surveillance, policing and job recruitment has been marked by racial bias.

All this is related, in an interesting way, to the one line in the Bourdain documentary that the director acknowledged was manipulated by AI. Bourdain says to a friend, “You’re successful, and I’m successful and I’m wondering, ‘Are you happy?’” I would argue that this line is also bound up with betrayal. In a sense, Bourdain is saying, “Life is letting me down somehow. It’s not what it could or ought to be.” This, to me, is much more fascinating than coming down firmly on one side or the other of the controversy about the director’s use of AI.

Q. How does the fact that this was a documentary play into everything? A documentary is obviously much different from an action movie, right? You sort of have an expectation that what you are watching is “real.” Should there be different expectations based on genre?

A. Good question. I mean, no documentary is objective. It is a creative work of nonfiction. The filmmaker makes creative decisions, at every turn, from the look of the credits to editing, framing, lighting, sound and use of voice. The assumption that a documentary is real begs to be examined carefully. It depends what you mean by “real.”

Bourdain himself was extensively involved in documentary television productions (cooking shows, travel shows) that constitute a form of reality TV, a genre in which what counts as real is thoroughly subject to manipulation. To his credit, Bourdain acknowledged that his television work involved acting on his part, that he was performing a role.

So, it would be one thing if Neville had made up dialogue that Bourdain never produced anywhere, at any point, and claimed that it was Bourdain. That, I think, would have been a problem. But he did “say” those words. He delivered them through the medium of writing, not speech.

Q. When people discuss AI technology, “deep fakes” come up a lot. Can you explain to UVA Today readers who may not be familiar with the term what it means and why deep fakes can be so dangerous?

A. A deep fake is a visual recording – a photograph, a video, a piece of film – of what appears to be one or more individuals simply captured on camera. In fact, the recording depicts something that was never recorded and never took place. The document is created through elaborate alteration and synthesis of existing data (e.g., facial expression, vocal intonation) to create a piece of visual misinformation, an empirically false or “fake” recording. The thing looks seamless. The viewer cannot tell it is fake. You wouldn’t know that it’s not actually [that person] saying those words or performing those actions.

Deep fakes can have dangerous real-world consequences. Words can be placed in the mouth of a public official. More commonly, deep fake technology has been used for the misogynistic, terribly harmful purpose of generating revenge porn.

Q. The AI field has been exploding in the last few years. Do you think there need to be some basic rules, perhaps a governing body, so this doesn’t turn into the Wild West?

A. Yes. I’m not an expert on policy, but I do think policies need to be developed for phenomena such as deep fakes. But deep fakes are one thing. There are other artifacts that fall short of deep fakes but sit on the slippery slope toward them, and I’m not sure whether policies are needed for those.

Q. Can you tell us a little bit about the “Rendering AI: Cinema and Artificial Intelligence” course that you teach and how it might tie in with this Anthony Bourdain controversy?

A. As the course title suggests, I focus on film that dramatizes representations of AI to explore with students the complex social, economic, political and religious implications of our ever-accelerating entanglement with this new technology.

For example, our entanglement with AI is inseparable from questions and concerns about voice. Spike Jonze’s film “Her” (2013), in which a male figure falls in love with an operating system that is a bodiless voice, provides a rich source for discussion. Is it a stretch to say each of us is in love with our phone, a machine that involves AI-powered technology? We keep it close by. We fear losing it. We speak to it. It speaks to us.

The film also points to the problem that AI-powered voice assistants have traditionally been coded as female. There is a long history to such gender assignment. The contemporary female assistant, like Siri, bears resemblance to the switchboard operator and to the spiritualist medium, the latter of whom was thought to put clients in touch with the voice of the dead. In this context, I find it intriguing that Neville, with the help of AI, acted a bit like a latter-day medium. He brought Bourdain back from the dead, in vocal form. Is that, in part, what troubles those disturbed by Neville’s use of the technology? Do they feel he has acted like a con artist? But Neville wasn’t making the voice up out of thin air.

In the course, we also consider how AI-powered voice technology reflects a fantasy about perfect communication. In this fantasy, communication is seamless. It eliminates the static in interpersonal communication, not to mention the drags in getting from one place to another and in the purchase of goods. Ideally, the AI hears us perfectly, understands exactly what we want, gets us what we want immediately. At the same time, we know there are glitches – Siri misunderstands you and problems ensue. There is no transcending the hard truth that communication is always, at some level, scrambled.

Back to the controversy. The criticism of Neville is based, at least in part, on the idea that it is wrong to translate Bourdain’s words from writing to speech. However, this idea presumes speech is a medium in which the producer of the message is in a position to consent to what he or she says. But it’s not that simple. Consider the slip of the tongue or the utterance we wish we could take back. Consider how often we hear our own voice on a recording and say, “Gosh, that doesn’t sound like me at all.”

Q. Does your course get into some of the ethical questions about AI, and do you envision discussing the Bourdain controversy this fall?

A. Oh, yeah. Sure. Absolutely. The controversy would enable us to take a different slant on how cinema renders AI. One of the challenges I face is that most of the narrative film that focuses on AI is hamstrung in two ways. The first is that it almost always depicts AI in robotic form. It’s “Terminator” or, more recently, “Blade Runner 2049,” whereas, in reality, AI is being implemented in ways that are much more abstract – like with this Bourdain case. It’s not a robot. And the other way narrative film is hamstrung is that it is so often the case that the robot is a female figure or a kind of terrifying, demonic male figure. So cinema, in some ways, is trying to catch up to what is actually happening in the world.

An intriguing screen representation of AI in non-robotic form is, from my point of view, a documentary titled “AlphaGo,” which is about a contest between the world champion in the ancient game of Go and a computer program created by Google’s DeepMind. The contest was watched by millions of people, particularly in Asia. It’s fascinating and unsettling to see how the computer dramatized machine learning and what the human opponent learned through the process of playing against, or should I say with, the machine.

Media Contact

Whitelaw Reid

University of Virginia Licensing & Ventures Group