The release of ChatGPT by the San Francisco-based company OpenAI in late 2022 has stimulated a tidal wave of engagement across society, with a spectrum of emerging use cases being put forward, both positive and potentially nefarious.
A range of AI applications is already in use within the research and publishing sector, including, amongst others, language editing, screening for potential research integrity issues, simulating student essays, summarising research papers, and serving as de facto research assistants.
There is understandable concern amongst academic institutions, researchers, academic editors, and publishers that individuals or groups may try to pass off AI-generated work as their own original research, and that such work may be incomplete or unreliable. In light of these developments, the publishing sector is stepping forward with policies and guidance on the use of AI tools within the authoring and publication process. While industry standards will likely coalesce in the coming months, Emerald is adopting the following approach, as part of a watching brief on developments in the rapidly evolving AI space.
Firstly, AI tools and large language models cannot be credited with authorship of any Emerald publication, because such tools cannot take accountability for the work. Secondly, any use of AI tools in the development of an Emerald publication must be declared by the author(s) within the paper, chapter, or case study. Emerald is updating its author and editor guidance accordingly, and these policies take effect immediately.