Cambridge University Press has launched an AI ethics policy to help researchers use generative AI tools, such as ChatGPT, while upholding academic standards of transparency, plagiarism, accuracy, and originality. The guidelines include a ban on AI being credited as an author of academic papers and books published by Cambridge.
This move provides clarity to academics amid concerns about flawed or misleading use of powerful large language models in research, alongside excitement about their potential.
The Cambridge principles for generative AI in research publishing state that any use of AI must be declared and clearly explained in publications, that AI does not meet the Cambridge requirements for authorship, that use of AI must not breach Cambridge's plagiarism policy, and that authors remain accountable for the accuracy, integrity, and originality of their research papers.
Cambridge publishes tens of thousands of research papers in more than 400 peer-reviewed journals and 1,500 research monographs, reference works, and higher education textbooks each year. With the launch of this AI ethics policy, Cambridge University Press hopes to help its academic community navigate the potential biases, flaws, and compelling opportunities of AI.