
The intersection of artificial intelligence and personal identity has reached another legal flashpoint as David Greene, the former host of NPR’s Morning Edition, has filed a lawsuit against Google. The complaint, filed in Santa Clara County Superior Court, alleges that the tech giant’s AI-powered research tool, NotebookLM, uses a male voice in its "Audio Overviews" feature that illegally replicates Greene’s distinct vocal persona without his permission or compensation.
This high-profile legal action marks a significant moment in the ongoing debate over "synthetic media" and the rights of creators. It echoes the recent controversy involving Scarlett Johansson and OpenAI, putting further pressure on the tech industry to define the ethical and legal boundaries of voice synthesis. For the AI community, Greene’s lawsuit is not just about one man’s voice; it is a litmus test for how specific vocal "styles" and "cadences"—rather than just raw audio recordings—are protected under the law.
According to the lawsuit, Greene was unaware of NotebookLM until the fall of 2024, when a former colleague reached out to ask if he had licensed his voice to Google. The colleague noted that the male host in the tool’s viral "Audio Overviews" feature—which generates conversational podcasts between two AI agents based on user-uploaded documents—sounded "very much like" Greene.
Upon listening to the generated audio, Greene describes being "completely freaked out." The complaint details that the AI voice did not merely sound like a generic male broadcaster but captured the specific nuances of Greene’s delivery honed over nearly 13 years at NPR. These nuances allegedly include his distinctive sentence pacing, intonation, and even specific verbal tics such as the "uhhs" and "likes" that Greene claims are part of his signature broadcasting style.
"It’s this eerie moment where you feel like you’re listening to yourself," Greene stated in an interview following the filing. "My voice is, like, the most important part of who I am." The lawsuit asserts that the resemblance was strong enough to fool close friends and even Greene’s wife, suggesting that the AI model may have been trained on the vast amount of public audio data available from his tenure at NPR and his current role at KCRW.
The lawsuit accuses Google of violating Greene’s right of publicity under California law. Unlike copyright, which protects creative works, the right of publicity protects an individual from having their name, image, or voice used for commercial purposes without consent. Greene’s legal team, led by Joshua Michelangelo Stein of Boies Schiller Flexner, argues that Google has effectively "stolen" Greene’s professional identity to humanize its AI product.
Google has swiftly dismissed the allegations as "baseless." In a statement, Google spokesperson José Castañeda clarified that the voice in question was not a clone of any specific individual but was based on recordings from a paid professional actor.
"The sound of the male voice in NotebookLM's Audio Overviews is based on a paid professional actor Google hired," Castañeda stated. Google’s defense relies on the argument that while the voice may have a "podcast-style" cadence—which Greene helped popularize—it is not a digital replica of Greene himself. This defense resembles the one OpenAI used when it claimed its "Sky" voice was not Scarlett Johansson's, but rather that of a different actress with a naturally similar timbre.
However, legal experts note that California’s right of publicity laws can be broad. If a jury finds that the voice is "sound-alike" enough to cause confusion or implies an endorsement, Google could still be liable, regardless of whether a different actor was used as the base. The famous 1988 case Midler v. Ford Motor Co. established that a voice is as distinctive as a face, and imitating it for commercial gain can be actionable.
To understand the specific points of contention, we have broken down the opposing narratives below.
Comparison of Claims in Greene v. Google
| Feature/Aspect | David Greene's Allegation | Google's Defense |
|---|---|---|
| Voice Origin | Likely trained on years of NPR archives without consent. | Derived from a specific, paid professional voice actor. |
| Vocal Traits | Matches unique cadence, pitch, and specific "tics" (e.g., "uhhs"). | Generic "podcast host" style; similarities are coincidental. |
| Public Perception | Friends, family, and colleagues identified the voice as Greene. | No intent to mimic; no confusion intended. |
| Legal Basis | Violation of Right of Publicity and Misappropriation of Identity. | Baseless claims; the voice actor is a distinct individual. |
| Desired Outcome | Damages and an injunction to stop using the voice. | Dismissal of the suit; continued operation of the feature. |
This lawsuit arrives less than two years after the high-profile dispute between Scarlett Johansson and OpenAI. In that instance, Johansson refused to license her voice for ChatGPT, only for the company to release a voice named "Sky" that sounded remarkably similar. OpenAI ultimately paused the use of the voice following the backlash, though it maintained the voice was not an imitation.
The Greene lawsuit differs in that Greene is a journalist whose voice is his primary professional asset, rather than a Hollywood actor known for visual roles. This distinction is crucial; for a broadcaster, a synthetic clone is a direct competitor. If an AI can generate a "David Greene-style" narration for any article or document, the market demand for the actual David Greene could theoretically diminish.
Industry analysts at Creati.ai suggest this case could set a vital precedent for the "style" of delivery. While copyright does not typically protect a "style" (you cannot copyright a genre of music, for instance), the Right of Publicity creates a shield for personal identity. The question for the Santa Clara court will be: Does a "public radio voice" belong to the genre, or does it belong to the man?
NotebookLM has been one of Google’s surprising success stories in the AI space. Powered by the Gemini 1.5 Pro model, it allows users to upload PDFs, text files, and other sources, which the AI then "reads" and synthesizes. The "Audio Overview" feature takes this a step further by generating a scripted dialogue between two AI hosts—one male, one female—who discuss the material in a casual, banter-filled format.
The success of the feature lies in its hyper-realistic prosody. The AI hosts interrupt each other, use filler words, change pitch to express skepticism or excitement, and "breathe" between sentences. It is precisely this high-fidelity realism that has triggered the lawsuit. Greene argues that the male host’s specific method of expressing curiosity—a rising inflection at the end of sentences combined with a warm, lower-register timbre—is a unique attribute of his "Morning Edition" persona.
As we cover at Creati.ai, the outcome of Greene v. Google could reshape the development of synthetic voice agents. If the courts rule in Greene's favor, AI companies may need to implement stricter "negative checks" to ensure their voices do not accidentally resemble famous figures.
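To make the idea of a "negative check" concrete, here is a minimal, hypothetical sketch: a candidate synthetic voice is reduced to a speaker embedding (a fixed-length vector produced by a speaker-verification model) and compared against a registry of protected voices by cosine similarity, with near-matches flagged for review. The function names, the 192-dimension embedding size, and the 0.85 threshold are illustrative assumptions, not any vendor's actual pipeline; the random vectors below stand in for real model output.

```python
# Hypothetical "negative check" sketch: flag a candidate synthetic voice
# whose speaker embedding is too close to any protected voice on file.
# Embeddings here are random placeholders; in a real system they would
# come from a speaker-verification model.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def negative_check(candidate: np.ndarray,
                   registry: dict,
                   threshold: float = 0.85) -> list:
    """Return names of protected voices the candidate resembles."""
    return [name for name, emb in registry.items()
            if cosine_similarity(candidate, emb) >= threshold]


# Placeholder 192-dimensional embeddings standing in for model output.
rng = np.random.default_rng(0)
registry = {
    "broadcaster_a": rng.normal(size=192),
    "broadcaster_b": rng.normal(size=192),
}

# A candidate voice deliberately constructed to sit close to
# broadcaster_a's embedding (small additive noise).
candidate = registry["broadcaster_a"] + rng.normal(scale=0.1, size=192)
flags = negative_check(candidate, registry)
```

In this sketch the candidate is flagged against `broadcaster_a` but not `broadcaster_b`, since two independent high-dimensional random vectors have near-zero cosine similarity. The hard part in practice is not the comparison but the threshold: as the Midler line of cases suggests, legal "sound-alike" confusion may arise well below any similarity score a model would call a match.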
For now, the NotebookLM voice remains active, and Google shows no sign of retracting the feature. As the case moves toward discovery, the tech world will be watching to see if the "Audio Overview" hosts will be silenced—or if the definition of who owns a "voice" will be rewritten for the algorithmic age.