David Greene had never heard of NotebookLM, Google’s buzzy artificial intelligence tool that spins up podcasts on demand, until a former colleague emailed him to ask if he’d lent it his voice.
“So … I’m probably the 148th person to ask this, but did you license your voice to Google?” the former co-worker asked in a fall 2024 email. “It sounds very much like you!”
Mr Greene, a public radio veteran who has hosted NPR's "Morning Edition" and KCRW's political podcast "Left, Right & Center," looked up the tool and listened to the two virtual co-hosts – one male and one female – engage in light banter.
“I was, like, completely freaked out,” Mr Greene said. “It’s this eerie moment where you feel like you’re listening to yourself.”
Mr Greene felt the male voice sounded just like him – from the cadence and intonation to the occasional "uhhs" and "likes" that he had worked over the years to minimise but never eliminated. He said that when he played it for his wife, her eyes popped.
As emails and texts rolled in from friends, family members and co-workers, asking if the AI podcast voice was his, Mr Greene became convinced he’d been ripped off.
Now he's suing Google, alleging that it violated his rights by building a product that replicated his voice without payment or permission, giving users the power to make it say things he would never say.
Google told The Washington Post in a statement on Thursday that NotebookLM’s male podcast voice has nothing to do with Mr Greene.
Now a Santa Clara County, California, court may be asked to determine whether the resemblance is uncanny enough that ordinary people hearing the voice would assume it’s his – and if so, what to do about it.
The case is the latest to pit the rights of individual human creators against those of a booming AI industry that promises to transform the economy by allowing people to generate uncannily lifelike speech, prose, images and videos on demand.
Behind the artificial voices in NotebookLM and similar tools are language models trained on vast libraries of writing and speech by real humans who were never told their words and voices would be used in that way – raising profound questions of copyright and ownership.
From political “voicefakes” to OpenAI touting a female voice for ChatGPT that resembled that of actress Scarlett Johansson, to deepfake scam ads that had a virtual Taylor Swift hawking Le Creuset cookware, the issues raised by Mr Greene’s lawsuit are “going to come up a lot,” said James Grimmelmann, a professor of digital and information law at Cornell University.
A key question for the courts to decide, Mr Grimmelmann said, will be just how closely an AI voice or likeness has to resemble the genuine article in order to count as infringing.
Another will be whether Mr Greene’s voice is famous enough for ordinary people to recognise it when they listen to NotebookLM and whether he’s harmed by the resemblance.
Those can be thorny questions when it comes to AI voices. Software tools exist that can compare people's voices, but they're more commonly used to find or rule out an exact match between two real human voices than to compare a real voice with a synthetic one.
To Mr Greene, the resemblance of the AI voice to his own is uncanny – and the harm is deeper and more personal than just a missed chance to capitalise on his most recognisable asset.
“My voice is, like, the most important part of who I am,” Mr Greene said.
“These allegations are baseless,” Google spokesperson José Castañeda said. “The sound of the male voice in NotebookLM’s Audio Overviews is based on a paid professional actor Google hired.”
Mr Greene’s lawyer argues the recordings make the resemblance clear.
“We have faith in the court and encourage people to listen to the example audio themselves,” said Joshua Michelangelo Stein, a partner at the firm Boies Schiller Flexner, which is also representing book authors in a high-profile AI copyright lawsuit against Meta.
NotebookLM's "Audio Overviews" feature made a splash with AI enthusiasts on its 2024 release; users shared examples of using it to summarise long documents, replacing dozens of pages of text with a breezy podcast that highlighted the main points.
While Google hasn’t disclosed how many people use the tool, it emerged as a sleeper hit for the search giant in its race with rivals such as ChatGPT maker OpenAI to capture consumers’ imagination.
In December 2024, the streaming music leader Spotify used the tool as part of its signature “Spotify Wrapped” feature, offering each user a personalised podcast about their listening habits.
Online, users have ventured numerous guesses as to who the AI podcasters' voices most resemble. Several have named Mr Greene, but others have mentioned former tech podcaster Leo Laporte or the hosts of the comedy podcast "Armchair Expert," Dax Shepard and Monica Padman.
As a kid growing up in Pittsburgh, Mr Greene idolised Lanny Frattare, the longtime voice of the city's professional baseball team. "I would sit at Pittsburgh Pirates games and act like I was the play-by-play announcer," he recalled.
By high school, he and two friends were doing his school’s morning announcements, which they turned into a sort of radio show.
He wrote a college application essay about his dream of one day becoming a public radio host – an essay his mum dug up and sent to him when he landed his first job at NPR in 2005.
There, Mr Greene was mentored by Don Gonyea, NPR’s longtime national political correspondent. He learned tricks of the trade, like pretending he was addressing a friend in the room, rather than a distant mass audience, so that his voice would sound conversational rather than “broadcastery.”
Feedback from listeners and interview subjects told Mr Greene his warm baritone had the power to soothe and convey trust and empathy.
On "Morning Edition" – the most popular news radio show in America – his was the voice that some 13 million listeners woke up to from 2012 to 2020, according to NPR.
On “Left, Right & Center,” he plays the moderate seeking common ground between pundits from the left and right.
“I truly believe that conversations have the power to change our lives and change the world,” Mr Greene said. “One of the reasons we’re in such a polarised environment right now is because people are forgetting the power of talking to one another.”
That’s what makes the feeling that Google has appropriated his voice and turned it into a robot so galling to Mr Greene.
“I read an article in the Guardian about how this podcast tool can be used to spread conspiracy theories and lend credibility to the nastier stuff in our society,” he said. “For something that sounds like me to be used in service of that was really troubling.”
Mr Greene’s lawsuit, filed last month in Santa Clara County Superior Court, alleges but does not offer proof that Google trained NotebookLM on his voice.
The complaint cites an unnamed AI forensic firm that used its software to compare the artificial voice to Mr Greene’s.
The tool gave a confidence rating of 53 per cent to 60 per cent that Mr Greene’s voice was used to train the model, which it considers “relatively high” confidence for a comparison between a real person’s voice and an artificial one. (A confidence score above zero means the voices are similar, while one below zero indicates they’re probably different.)
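The forensic firm and its software are unnamed in the complaint, but the signed-score convention it describes – above zero suggests similar voices, below zero suggests different ones – resembles how speaker-comparison tools commonly work: each recording is reduced to a numeric "voiceprint" embedding, and a similarity measure between the two embeddings is compared against a decision threshold. The sketch below is purely illustrative; the embeddings, threshold, and scoring are hypothetical stand-ins, not the method used in the case.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def similarity_score(embedding_a, embedding_b, threshold=0.5):
    """Signed score mirroring the convention in the article:
    positive when the voices look similar, negative when they
    look different. The 0.5 threshold is an arbitrary example."""
    return cosine_similarity(embedding_a, embedding_b) - threshold

# Toy vectors standing in for voiceprints extracted from audio.
voice_a = [0.9, 0.1, 0.3]
voice_b = [0.8, 0.2, 0.35]   # close to voice_a
voice_c = [-0.7, 0.9, -0.2]  # very different

print(similarity_score(voice_a, voice_b) > 0)  # similar -> positive score
print(similarity_score(voice_a, voice_c) < 0)  # dissimilar -> negative score
```

In real systems the embeddings come from a neural speaker-verification model rather than hand-picked numbers, and – as the article notes – such tools are calibrated for matching real human voices, which is why a real-versus-synthetic comparison yields only moderate confidence.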
Mr Grimmelmann said Mr Greene doesn’t necessarily have to show definitively that Google trained NotebookLM on his voice to have a case, or even that the voice is 100 per cent identical to his.
He cited a 1988 case in which the singer and actress Bette Midler successfully sued Ford Motor Company over a commercial that used a voice actor to mimic her distinctive mezzo-soprano.
But Mr Greene would then have to show that enough listeners assume it's his voice for it to affect either his reputation or his own opportunities to capitalise on it.
Mike Pesca, host of “The Gist” podcast and a former colleague of Mr Greene’s at NPR, said he has an ear for voices and a hobby of trying to identify the actors and celebrities behind voice-overs in TV commercials.
The first time he heard NotebookLM, Mr Pesca said, “I was immediately like, ‘That’s David Greene.’”
Mr Pesca said he first assumed that Google had intentionally trained the tool on Mr Greene’s voice and that Mr Greene had been compensated.
“If I was David Greene I would be upset, not just because they stole my voice,” Mr Pesca said, but because they used it to make the podcasting equivalent of AI “slop,” a term for spammy, commodified content.
“They have banter, but it’s very surface-level, un-insightful banter, and they’re always saying, ‘Yeah, that’s so interesting.’ It’s really bad, because what do we as show hosts have except our taste in commentary and pointing our audience to that which is interesting?”
Mr Greene is not the first audio professional to complain that his voice was stolen. Numerous voice actors have been dismayed to hear voices that sound like them in various AI tools.
But they face uphill battles in court, in part because they are generally not famous figures, even if their voices are familiar, and because many voice actor contracts license their voices for a wide range of uses.
Bills introduced in several states and in Congress have sought to regulate the use of people’s voices in AI tools. Mr Greene, however, is relying on long-standing state laws that give public figures certain rights to control how their own likenesses are monetised.
Adam Eisgrau, who directs AI copyright policy for the centre-left tech trade group Chamber of Progress, said he thinks those laws are sufficient to address cases like Mr Greene’s without passing new AI laws at the national level.
“If a California jury finds that the voice of NotebookLM is fully Mr Greene’s, he may win,” Mr Eisgrau said via email.
“If they find that it’s got attributes he also possesses, but is fundamentally an archetypal anchorperson’s tone and delivery it learned from a large dataset, he may not.”
Mr Greene said he isn’t lobbying for new laws that would risk chilling innovation. He just thinks Google should have asked his permission before releasing a product based on a voice that he believes is essentially his.
“I’m not some crazy anti-AI activist,” he said. “It’s just been a very weird experience.”
© 2026 The Washington Post