In scholarly publishing, many policies, including community guidance and publisher policies, allow authors to use generative AI when preparing submissions, subject to specified caveats. By contrast, peer-review policies commonly restrict AI use: editors and reviewers are often barred from uploading confidential manuscripts to genAI tools; some venues prohibit genAI in peer review altogether, while others permit limited uses such as translating or editing one’s own review comments.
Restrictions aim to mitigate confidentiality risks, reduced rigor, misrepresentation of genAI outputs as the work of peer review contributors, and potential manipulation of peer review workflows. Some publishers are exploring in-house AI tools deployed in controlled environments to protect data security and apply policies consistently; such tools can check for retracted references, problematic statistical analyses, and non-adherence to data availability and preregistration requirements. Because human attention to these issues varies, automated tools can standardize screening.
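As a rough illustration of what one such automated screen might look like, the sketch below checks a manuscript’s cited DOIs against a locally held list of retracted DOIs (for example, an export of the Retraction Watch dataset). The file name, column layout, and function names are assumptions for illustration only, not any publisher’s actual tooling.

```python
# Minimal sketch: flag cited DOIs that appear in a local list of retracted DOIs.
# Assumptions (not any publisher's real pipeline): a CSV export with a
# "retracted_doi" column (e.g., derived from the Retraction Watch dataset) and
# a plain list of DOIs extracted from the manuscript's reference section.
import csv


def load_retracted_dois(path: str) -> set[str]:
    """Read retracted DOIs from a CSV file into a normalized (lowercased) set."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["retracted_doi"].strip().lower() for row in csv.DictReader(f)}


def flag_retracted_references(cited_dois: list[str], retracted: set[str]) -> list[str]:
    """Return the cited DOIs that match a known retraction."""
    return [doi for doi in cited_dois if doi.strip().lower() in retracted]


if __name__ == "__main__":
    retracted = load_retracted_dois("retracted_dois.csv")  # hypothetical export file
    manuscript_refs = ["10.1234/example.2021.001", "10.5678/demo.2019.042"]  # placeholder DOIs
    for doi in flag_retracted_references(manuscript_refs, retracted):
        print(f"Reference cites a retracted work: {doi}")
```

A check like this is only a screening aid: flagged citations still require editorial judgment, since citing a retracted work can be legitimate (for example, to discuss the retraction itself).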
Human assessment remains central for content quality, synthesis, interpretation, and cross-domain judgment. Hybrid human-plus-AI approaches are being explored to address reviewer burden and slow turnaround times. A talk at the 2025 Peer Review Congress described an NEJM AI “Fast Track” workflow that issues decisions within one week based on an editor’s evaluation of the manuscript and two AI-generated reviews. Even with faster timelines, involving at least two human experts helps cover subject-matter and methodological expertise, surface major scientific or integrity issues, and safeguard against bias, conflicts of interest, weak assessments, and other risks.
Publishers and researchers continue to test uses of AI in peer review with attention to confidentiality and integrity. PLOS emphasizes that AI can support specific checks but does not replace human expertise.