Some publishers are also banning use of the bot in the preparation of submissions, but others see its adoption as inevitable
The publishers of thousands of scientific journals have banned or restricted contributors’ use of an advanced AI-driven chatbot amid concerns that it could pepper academic literature with flawed and even fabricated research.
AHRECS agrees with this move by academic publishers. As we understand it, the artificial intelligence systems currently available, such as ChatGPT, are not artificial general intelligence (AGI). As we have observed recently, the current systems do not genuinely understand their interactions with humans, or even the text they produce. As such, they cannot take responsibility for what they produce, nor can they be held accountable when they breach responsible research standards (such as reusing text written by others without attribution). Institutions must also provide useful guidance in this space. AHRECS has published a foundation guidance document relating to ChatGPT and research outputs for our subscribers (ahrecs.vip). This document is licensed under Creative Commons, Attribution. A subscription costs an institution $350 per year.
But while the chatbot has proved a huge source of fun – its take on how to free a peanut butter sandwich from a VCR, in the style of the King James Bible, is one notable hit – the program can also produce fake scientific abstracts that are convincing enough to fool human reviewers.
ChatGPT’s more legitimate uses in article preparation have already led to it being credited as a co-author on a handful of papers.
The sudden arrival of ChatGPT has prompted a scramble among publishers to respond. On Thursday, Holden Thorp, the editor-in-chief of the leading US journal Science, announced an updated editorial policy, banning the use of text from ChatGPT and clarifying that the program could not be listed as an author.