A new lawsuit against the startup Perplexity alleges that the company not only infringes copyright but also violates trademark law by fabricating nonexistent sections of news articles and falsely attributing those words to publishers.
Dow Jones, the publisher of The Wall Street Journal, and the New York Post, both part of Rupert Murdoch's News Corp, filed the copyright infringement lawsuit against Perplexity today in the US District Court for the Southern District of New York.
This isn't Perplexity's first clash with news publishers. Earlier this month, The New York Times sent the company a cease-and-desist letter, claiming that it had used the newspaper's content without authorization. This past summer, both Forbes and WIRED flagged instances of Perplexity allegedly plagiarizing their articles; Forbes and WIRED's parent company, Condé Nast, subsequently sent cease-and-desist letters of their own.
A WIRED investigation from earlier this summer, cited in the new lawsuit, found that Perplexity misrepresented summaries of WIRED articles, including one instance in which it falsely claimed WIRED had reported that a California police officer had committed a crime he did not commit. Separately, the WSJ reported today that Perplexity is in talks to raise $500 million in its next funding round, at a valuation of $8 billion.
Dow Jones and the New York Post cite examples of Perplexity allegedly "hallucinating" fictional sections of news articles. In AI, hallucination refers to generative models producing false or entirely fabricated content and presenting it as fact.
In one example cited in the complaint, Perplexity Pro repeated, verbatim, two paragraphs from a New York Post article about US senator Jim Jordan's exchange with European Union commissioner Thierry Breton over Elon Musk and X, then appended five fabricated paragraphs about free speech and online regulation that appeared nowhere in the actual article.
The lawsuit alleges that this blending of fictitious paragraphs with reputable reporting and attributing it to the Post constitutes trademark dilution that may mislead readers. “Perplexity’s hallucinations, presented as genuine news and news-related content from trusted sources (using Plaintiffs’ trademarks), undermine the value of Plaintiffs’ trademarks by introducing uncertainty and mistrust into the news-gathering and publishing process, while simultaneously inflicting harm on the news-consuming public,” states the complaint.
Perplexity did not respond to requests for comment.
In an emailed statement to WIRED, News Corp chief executive Robert Thomson compared Perplexity unfavorably with OpenAI. "We commend principled companies like OpenAI, which recognizes that integrity and creativity are vital for harnessing the potential of Artificial Intelligence," the statement reads. "Perplexity is not the only AI company exploiting intellectual property and it certainly won't be the last one that we will rigorously pursue. We have made it clear that our preference is to negotiate rather than litigate, but to protect our journalists, our writers, and our company, it is essential that we confront this content kleptocracy."
OpenAI is facing its own trademark dilution allegations. In New York Times v. OpenAI, the Times claims that ChatGPT and Bing Chat attribute fabricated quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. One example cited in that lawsuit describes how Bing Chat claimed the Times had called red wine (in moderation) a "heart-healthy" food, when it had not; the Times argues that its actual reporting has debunked claims about the health benefits of moderate drinking.
“Using news articles to create substitute, commercial generative AI products is illegal, as we have made clear in our correspondence with Perplexity and through our litigation against Microsoft and OpenAI,” remarked NYT director of external communications Charlie Stadtlander. “We support this lawsuit initiated by Dow Jones and the New York Post, which is a significant move toward safeguarding publisher content from this type of misuse.”
If publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face "immense difficulties," according to Matthew Sag, a professor of law and artificial intelligence at Emory University.
"It is absolutely impossible to guarantee that a language model will not hallucinate," Sag says. In his view, the way language models operate, by predicting words that sound plausible in response to prompts, is always a type of hallucination; sometimes the output just happens to sound more plausible than at other times.
“We refer to it as a hallucination only when it fails to align with our reality, but the mechanism is identical regardless of whether we favor the output or not.”