The New York Times has ended its association with an independent book reviewer over “serious violations,” including the use of AI tools.
In a review published on January 6th, journalist and author Alex Preston was accused of producing content that closely mirrored reviews from far-left publications, particularly one from The Guardian last August.
The Times conducted an investigation and found that Preston admitted to using an AI tool on his manuscript and conceded he didn’t fully grasp the content he was working with. The admission came in a conversation with the publication before the review was released, according to a report from The Wrap.
Preston told The Times he had not used AI for other articles. Nevertheless, an editor’s note was added on Monday addressing the striking similarities and flagging the AI issue in Preston’s review.
The editor’s note, dated March 30th, said a reader had pointed out the overlap between Preston’s review and The Guardian’s assessment of the same book. When asked, Preston acknowledged he had incorporated elements of the Guardian review and failed to edit them out. Both the reliance on AI and the inclusion of uncredited work violated the paper’s standards. The Times said a check of his previous reviews turned up no similar issues.
In a statement to The Wrap, Preston admitted he “improperly used an AI editing tool” and had overlooked the repeated phrasing in his draft. He took full responsibility and expressed regret.
This is a writer who has authored six books and countless reviews, and this incident will now linger permanently in his online record. That seems overly harsh for what could be viewed as a genuine mistake; given Preston’s experience, it’s hard to believe he set out to deceive anyone. The Times’ editors share some responsibility here, too. Aren’t there ways to detect potential plagiarism before publication?
As of now, AI has mostly been used in basic applications like transcribing videos and podcasts. The notion that this rudimentary technology might evolve into something much more advanced feels a bit far-fetched to me.
It’s interesting to see The Times act so decisively on AI-related issues. It makes one wonder: if a reporter publishes false information, will they be held to the same rigorous standard? The books section may look stable after this, but the news side still struggles with misinformation.