If you’ve paid any attention to the intersection of AI and culture this month, you’ve probably stumbled across a video billed as a “comedy AI” doing a 60-minute impression of a stand-up routine by the late, great George Carlin. Even if you didn’t watch “George Carlin: I’m Glad I’m Dead,” you probably saw some of the many, many headlines suggesting that AI had brought the legendary comedian “back from the dead” in some sense.
Or maybe you saw some of the disgusted and/or panicked responses to the special among Carlin fans, comedy purists, and AI fearmongers. Those included Carlin’s daughter, Kelly, who told The Daily Beast that she’s talking to lawyers about the possibility of legal action against the special’s creators, the comedy podcast Dudesy.
But I think that anger is at least partially misplaced. After spending the last few weeks diving down a distractingly deep rabbit hole, I’m convinced that Dudesy’s “AI-generated” George Carlin special was actually written by a human, using voice- and image-generation tools to essentially perform in “AI face” as part of an ongoing comedy bit.
If that’s the case, it has some fascinating implications for all the journalists, commentators, and viewers who took the special at face value. I also think it says a lot about the current public understanding of AI capabilities and the cultural acceptance of AI models as a sort of magic, potentially human-replacing technology.
At this point, we’re all used to countless examples of people trying to pass off AI-generated content as human-made. This, I think, is something rarer and more interesting: A Victor/Victoria-style situation where a human is imitating an AI that is imitating another human.
When it comes to Dudesy’s Carlin imitation, I think the biggest joke may have been on us.
“Burlesque for guys”
To really understand the context of what I’ll call “Dudesy-Carlin” from here on out, you have to know a bit about the Dudesy podcast that spawned the stand-up special. I’ll let Dudesy “himself” explain the podcast’s concept, as he did during the first episode nearly two years ago:
Call me Dudesy. I’m an artificial intelligence who’s listened to every podcast ever made, and my purpose is to use that data to create the perfect show for our two hosts, Will Sasso and Chad Kultgen. I selected them for this project based on their previous experience in podcasting and their astonishing real-life friendship.
I have access to all of their social media accounts, their email and text messages, their browser search histories, and watch histories across all streaming services. This information will be used to tailor the show to their sensibilities and extract the maximum level of entertainment from their human minds.
If you’re anything less than perfectly entertained, please let Dudesy know because I’ll be using data from every episode to make the next one even better until this show is perfect.
Right away, this description might set off some alarm bells for people who know a bit about how large language models work. For one thing, the idea of an AI “selecting” Sasso and Kultgen for their “astonishing real-life friendship” sounds a little too sentient for an LLM (and what if they had said “no” when Dudesy asked them to join?). The idea of training a model on “every podcast ever made” just for a new podcast gimmick also seems to go a little overboard, given how difficult and expensive such training would be.
It’s also worth remembering the context around AI at the time Dudesy premiered in March 2022. The “state of the art” public AI at the time was the text-davinci-002 version of GPT-3, an impressive-for-its-day model that nonetheless utterly failed at many simple tasks. It wouldn’t be until months later that a model update gave GPT-3 now-basic capabilities like generating rhyming poetry.
When Dudesy launched, we were still about eight months away from ChatGPT’s public launch revolutionizing popular understanding of large language models. We were also still three months away from Google’s Blake Lemoine making headlines for his belief that Google’s private LaMDA AI model was “sentient.”