Science and Research Content

Science funding agencies reject AI for peer review

Science funding agencies, including the US National Institutes of Health (NIH) and the Australian Research Council (ARC), have prohibited the use of artificial intelligence (AI) tools in peer review. The decision follows concerns about the risks of relying on AI-generated critiques to evaluate research proposals.

The AI tool in question, ChatGPT, gained popularity soon after its release in November 2022. Some reviewers found that it could expedite the peer-review process: by pasting segments of a proposal, such as the abstract, aims, and research strategy, into the tool, they could quickly generate a draft critique, saving valuable time and effort.

However, the enthusiasm for AI-assisted peer review was met with skepticism from science funding agencies. On June 23, the NIH banned the use of generative AI tools, such as ChatGPT, for analyzing and formulating peer-review critiques. The decision was partly influenced by a letter from neuroscientist Greg Siegle and his colleagues at the University of Pittsburgh, who warned the agency that relying on AI-generated reviews would set a risky precedent.

Following suit, the ARC imposed a similar ban on July 7 after discovering reviews that appeared to have been written by ChatGPT. Confidentiality emerged as a primary concern for both agencies: pasting parts of a proposal into an AI tool means sharing sensitive, unpublished information that could become part of the tool's training data.

Critics of AI-driven peer review point to several other issues. AI-written reviews may contain errors, as language models are known to fabricate information. A model's reliance on existing data may also bias it against unconventional ideas, hindering innovative scientific perspectives. And because AI-generated reviews draw on previously published text rather than original thought, some worry that they lack creativity and could even amount to plagiarism.

For scientific publishers, AI-generated reviews raise a further problem: reviewer accountability. When an AI drafts the critique, it is difficult to ensure that the named reviewer understands and stands behind the content they submit.

However, some researchers argue that AI has the potential to enhance the peer-review process. Psychiatric geneticist Jake Michaelson from the University of Iowa believes that AI tools could help reviewers check for overlooked aspects in proposals, assess work outside their field of expertise, and improve the clarity of their critiques. He envisions a future where AI serves as the first line of peer review, with human experts complementing and validating its assessments.

While the current stance of funding agencies is clear, the landscape may evolve. Some scientists have pointed out that certain generative AI models can run offline on a reviewer's own machine, which would alleviate concerns about confidentiality violations. The NIH has indicated that it plans to provide additional guidance in this rapidly evolving area.

The decision by science funding agencies to disallow AI in peer review reflects a cautious approach. While AI holds promise for improving the review process, concerns about confidentiality, bias, and lack of creativity warrant careful evaluation before AI is embraced as a mainstay of scientific peer review.
