Discussion around the use of ChatGPT and other AI-based large language models has grown quickly over the past few months, and it seems every field is investigating how this technology can be used to accelerate or facilitate the way they work. Scholarly communications have not been immune to the impact of these AI tools, and the publishing industry is currently working to understand and implement policies for its fair use.
With that in mind, AIP Publishing has updated its Author Policies and Ethics page to reflect its current guidance for the use of ChatGPT and similar tools in its publications. Under the updated guidance, ChatGPT and other AI-based large language models should not be listed as an author.
Just as with other instrumentation or software, the use of these language models should be disclosed to editors and reviewers — particularly if they are used to generate significant amounts of text in the manuscript. Authors should provide this information in the appropriate section of their manuscript and to the editor with their submission.
As these AI-based models can be prone to factual error and learn primarily from the work of others, all co-authors on a given paper are responsible for verifying the content of the manuscript and ensuring that it adheres to ethical and plagiarism guidelines.
The continuous development of this technology, while exciting and transformational, necessitates careful consideration of potential ethical and policy challenges. As behaviors and the tools themselves evolve, the guidelines will be revised as necessary.