
A scathing report claims that AI startup Perplexity runs a chatbot that “directly plagiarizes” articles written by news outlets like CNBC and Forbes without providing proper credit or attribution.
The issue arose with a feature called “Perplexity Pages,” which displays articles that the company “curates” by compiling details from third-party news outlets that have covered a variety of topics, Forbes reported on Friday.
Although the wording of the articles closely matches that of the sources, the curated articles do not include the names of the news publishers within the text. Instead, Perplexity includes what Forbes described as a “small logo that’s easy to miss” that links back to the original article.
In one case, Perplexity’s chatbot spit out a version of a paywalled Forbes exclusive report. Perplexity’s “curated” version of the article about former Google CEO Eric Schmidt’s military drone project has been viewed nearly 30,000 times and is a near-verbatim copy of the original Forbes story, even including what appear to be internal company graphics.
“Our report on Eric Schmidt’s stealth drone project was posted this morning by @perplexity_ai,” Forbes Editor-in-Chief John Paczkowski wrote on X. “It rips off most of our reporting, citing us, and a few outlets that reblogged our article, as sources in the most easily ignored way possible.”
Forbes identified two other instances where Perplexity Pages scraped news articles without giving proper credit: CNBC’s original report on Elon Musk’s decision to divert shipments of advanced computer chips to his xAI startup instead of Tesla, and a Bloomberg article about Apple’s plans to develop a home robot product.
In both cases, Perplexity used nearly verbatim text from the original articles without naming the publications in the body of the copy.
Forbes, CNBC and Bloomberg did not immediately respond to requests for comment.
Perplexity AI is valued at more than $1 billion and counts top investors including Amazon founder Jeff Bezos, chipmaker Nvidia and billionaire Stanley Druckenmiller, Bloomberg reported in April.
Perplexity AI CEO Aravind Srinivas acknowledged the problem in a post on X, but claimed the chatbot cites third-party outlets more frequently than competing services such as Google’s Gemini, OpenAI’s ChatGPT and Microsoft’s Copilot.
Srinivas shared a screenshot of a Perplexity page about Eric Schmidt’s AI-powered drone project, which featured a small hyperlink to the Forbes article near the top of the page. He also sought to differentiate Perplexity Pages from the company’s core product, which is essentially an AI-powered chatbot.
“There are still rough edges and we are improving with more feedback,” Srinivas wrote on X. “Perplexity’s core product has featured prominent source attribution from day one, unlike other chatbots on the market like ChatGPT, Gemini and Copilot.”
“Pages and discovery will be improved, and we agree with your feedback that sources should be easier to find and highlighted more prominently,” Srinivas added.
Forbes’ Paczkowski fired back, calling Perplexity’s actions “nothing more than plagiarism.”
“Without clear attribution, just a tiny logo, our work is being treated the same as a reblog. That’s not ‘rough edges,’ it’s theft,” he said.
Reached for comment Monday morning, a Perplexity AI spokesperson said the company had “updated how sources are presented on our page” in response to the Forbes report.
“Going forward, all sources will be visible at the top when users land on a page, as well as in the footnotes of each section,” the spokesperson said in a statement, adding that the sources are already live on the web version of Pages and will be rolled out to the mobile version this week.
“We have always been mindful of content attribution and have designed our core product (the answers engine) from the beginning to credit source material, but most chatbots today still fail to do this reliably and prominently,” the spokesperson added.
News outlets have frequently accused AI companies in recent months of “training” chatbots on their content without proper credit or compensation, and then using those chatbots to siphon off their audiences.
As reported by The Washington Post, critics warn that increasing AI copycats could devastate news publishers unless federal regulators step in.
Last November, the News Media Alliance, a nonprofit that represents more than 2,200 publishers, including The Washington Post, warned that chatbots were stealing text to create a “plagiarized patchwork” that could violate copyright law.
More recently, Google came under fire for adding automatically generated text summaries, called “AI Overviews,” to its search results, pushing links to other outlets lower down the page.
Google’s AI-powered search quickly began churning out strange answers, like telling users to eat a rock or add glue to their pizza.
Users have since determined that the “pizza glue” reply was lifted directly from a tongue-in-cheek Reddit post made roughly a decade ago.