In case you missed it, last week’s disclosure by film director Morgan Neville that he used artificial intelligence to recreate the voice of deceased celebrity chef Anthony Bourdain in parts of his new documentary film, “Roadrunner,” has caused a stir.
Some people believe Neville had an ethical obligation to inform the audience that he had done so. Others believe it was well within his artistic license.
Then there are those stuck in the middle, not sure where the line in the sand should be – or if there should even be a line at all.
In the film, Neville uses the AI-created voice to read a line from an email Bourdain wrote to a friend.
On one hand, the words coming out of Bourdain’s mouth weren’t made up out of thin air – which is sometimes the case when AI is used for nefarious purposes.
On the other, the director tried to give the impression that the voice was Bourdain’s. There’s also the fact that Bourdain couldn’t give consent for his voice to be recreated.
University of Virginia media studies associate professor William Little, who teaches a course centered on the use of AI in film, said he understands the controversy. One thing to remember, he said, is that the manipulation of voice in film goes back a long way.
“The voiceover, where you can have someone who’s actually dead narrating the film, who’s speaking from beyond the story or beyond the grave and looking back on events, is a fairly typical film noir technique,” Little said. “One could argue that Bourdain’s AI-manipulated voice represents a version of a voiceover. He is speaking, after a fashion, from beyond the grave.”
“Film will also use the device of the voice that is offscreen – you can’t locate it. It might be a ghostly voice. Coincidentally, a perfect example of such a voice is the voice of the most famous AI in cinema, HAL in ‘2001: A Space Odyssey.’ … You don’t see lips moving, there’s no body attached to it. You kind of sense where the voice is coming from, but you don’t know exactly.”
Still, Little understands the feeling of being duped that some viewers of the Bourdain film say they experienced.
For a deeper dive into the debate, UVA Today caught up with Little, whose research interests focus on genre study of film, history and philosophy of film, and film and critical race studies.

Media studies associate professor William Little teaches a course centered on the use of artificial intelligence in film. (Photo by Sanjay Suchak, University Communications)
Q. You haven’t seen the film, but from what you’ve heard about how the director used AI to mimic Anthony Bourdain’s voice at certain junctures, disclosing that he had done so only after the fact and in a very casual way, what was your reaction?
A. I find this whole phenomenon rather fascinating, as it’s bound up with a very complicated dynamic of betrayal. For me, that’s the richest aspect of this particular issue.
In one regard, there’s the idea that the director betrayed the audience’s confidence in what they are hearing. Since Bourdain didn’t actually vocalize the words at issue, the director broke a tacit contract with the audience.
Likewise, in some quarters there is concern that the director betrayed Bourdain and his family. He couldn’t get Bourdain’s consent, and there may be some question about whether he received the family’s consent. So, I understand that audiences might feel betrayed. I’m not sure I totally agree with that point of view, but I understand it.
I would suggest that such a feeling of betrayal may be linked to a persistent, dystopian narrative about AI as constituting a threat of betrayal. It’s the idea that AI – not to mention those who develop and implement AI – is an untrustworthy tool. In this narrative, AI proves unfaithful to the purpose of improving or enhancing human experience. AI proves disloyal. It is turned against human beings, or, as numerous sci-fi films have dramatized, it turns on human beings.
There is evidence in the real world to support this suspicion about AI. For instance, the use of AI in surveillance, policing and job recruitment has been marked by racial bias.
All this is related, in an interesting way, to the one line in the Bourdain documentary that the director acknowledged was manipulated by AI. Bourdain says to a friend, “You’re successful, and I’m successful and I’m wondering, ‘Are you happy?’” I would argue that this line is also bound up with betrayal. In a sense, Bourdain is saying, “Life is letting me down somehow. It’s not what it could or ought to be.” This, to me, is much more fascinating than coming down firmly on one side or the other of the controversy about the director’s use of AI.
Q. How does the fact that this was a documentary play into everything? A documentary is obviously much different than an action movie, right? You sort of have an expectation of what you are watching being “real.” Should there be different expectations based on genres?
A. Good question. I mean, no documentary is objective. It is a creative work of nonfiction. The filmmaker makes creative decisions, at every turn, from the look of the credits to editing, framing, lighting, sound and use of voice. The assumption that a documentary is real begs to be examined carefully. It depends what you mean by “real.”
Bourdain himself was extensively involved in documentary television productions (cooking shows, travelogues) that constitute a form of reality TV, a genre in which what constitutes the real is thoroughly subject to manipulation. To his credit, Bourdain acknowledged that his television work involved acting on his part, that he was performing a role.
So, it would be one thing if Neville had made up dialogue that Bourdain never produced anywhere, at any point, and claimed that it was Bourdain. That, I think, would have been a problem. But he did “say” those words. He delivered them through the medium of writing, not speech.