Science and Research Content

Study reveals risks to peer review integrity from AI-generated feedback

A new study has found that large language models (LLMs) such as ChatGPT and Claude can be used to generate biased peer reviews that are difficult to distinguish from those written by humans, posing a significant threat to the credibility of scholarly publishing. Peer review remains essential for validating the quality and accuracy of scientific research, and the potential for its manipulation through generative AI raises a pressing concern for academic integrity.

Conducted by researchers in China, the study examined how LLMs could be deployed to simulate peer review across 20 real cancer research manuscripts. The manuscripts were obtained from the journal eLife, whose transparent peer review model gave the researchers access to the original, pre-publication versions submitted for review. The AI was instructed to carry out tasks typically assigned to human reviewers: drafting standard peer review reports, recommending manuscript rejection, and suggesting citations, including references that were irrelevant to the subject matter.

The findings indicated that current AI detection tools were largely inadequate. One such detector misidentified over 80% of the AI-generated reviews as human-written. While the AI's standard evaluations lacked the nuanced insight of expert reviewers, it produced highly convincing rejection commentary and rationales for including unrelated citations. These capabilities raise concerns about potential misuse by individuals seeking to manipulate the peer review process to disadvantage certain submissions or artificially inflate citation counts.

Despite these risks, the study also identified a constructive application of the technology. The same LLM demonstrated proficiency in generating rebuttals to unfounded citation requests, potentially offering authors a mechanism to counteract biased or manipulative feedback.

The study’s authors stress the urgent need for the research community to implement explicit guidelines and oversight frameworks that govern the ethical use of generative AI. They underscore that such technologies must be directed toward reinforcing, rather than compromising, the trust and accountability essential to academic publishing.
