Perplexity, the fast-growing AI-powered search engine, is facing fresh legal trouble as Encyclopedia Britannica and Merriam-Webster filed a lawsuit against the company in a New York federal court on September 10.
According to media reports, the publishers allege that Perplexity has been illegally scraping their websites, copying definitions and articles verbatim, and attaching their brand names to misleading or incomplete content.
According to the complaint, Britannica and its subsidiary Merriam-Webster claim that Perplexity’s “answer engine” not only plagiarizes copyrighted definitions but also misuses their trademarks by presenting results under their names, potentially confusing users and damaging brand credibility. One example cited in the lawsuit is the word “plagiarize,” for which Perplexity’s result was nearly identical to Merriam-Webster’s definition.
The suit underscores the growing tension between traditional knowledge publishers and AI-driven platforms that rely on existing databases and reference materials to train or generate responses.
This is not the first time Perplexity has been accused of copyright misuse. Earlier this year, media outlets including Forbes and Condé Nast alleged that the AI search startup was reproducing their journalism without authorization, raising wider questions about how AI companies source and present information.
The Britannica-Merriam lawsuit adds to the mounting legal challenges facing AI platforms as regulators, publishers, and courts grapple with the balance between innovation, fair use, and intellectual property rights. Earlier this year, the British Broadcasting Corporation (BBC) also accused Perplexity of unlawfully using its content to train artificial intelligence models and threatened legal action.
In a legal letter sent to Perplexity CEO Aravind Srinivas, the BBC demanded that the company stop scraping its material, erase any data already used to train its "default AI model," and submit a proposal for financial compensation to address what it described as a violation of its intellectual property rights.