Many ask authors to disclose use of ChatGPT and other generative artificial intelligence
“It’s all we’ve been talking about since November,” says Patrick Franzen, publishing director for SPIE, the international society for optics and photonics. He’s referring to ChatGPT, the artificial intelligence (AI)-powered chatbot unveiled that month. In response to a prompt, ChatGPT can spin out fluent and seemingly well-informed reports, essays—and scientific manuscripts. Worried about the ethics and accuracy of such content, Franzen and managers at other journals are scrambling to protect the scholarly literature from a potential flood of manuscripts written in whole or part by computer programs.
We have explained before why we believe large language tools like ChatGPT cannot be listed as authors of research output. We have also explained why researchers need to be cautious about using such tools and should be prepared to substantially redraft the text they produce. We have likewise commented on the need for researchers to acknowledge when they have used the tool, and for research institutions to treat the submission of unedited ChatGPT text without acknowledgement as a serious form of research misconduct. We have produced the foundation for an institution's guidance material on this subject and uploaded it to our patrons' area here. Institutions can gain access to this area, this resource and our growing library of resources for AUD350 per year.
When the online tool ChatGPT was made available for free public use, scientists were among those who flocked to try it out. (ChatGPT’s creator, the U.S.-based company OpenAI, has since limited access to subscribers.) Many reported its unprecedented and uncanny ability to create plausible-sounding text, dense with seemingly factual detail. ChatGPT and its brethren—including Google’s Bard, unveiled earlier this month for select users, and Meta’s Galactica, which was briefly available for public use in November 2022—are AI algorithms called large language models, trained on vast numbers of text samples pulled from the internet. The software identifies patterns and relationships among words, which allows the models to generate relevant responses to questions and prompts.
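To make that "patterns among words" idea concrete, here is a minimal sketch of next-token generation, assuming Python with the Hugging Face `transformers` library and the small, openly available GPT-2 model. This is our illustration, not anything from the article or from OpenAI: GPT-2 is a far smaller relative of the models behind ChatGPT, Bard and Galactica, but the core loop (predict a plausible next token, append it, repeat) is the same.

```python
# Minimal sketch of large-language-model text generation.
# Assumes: pip install transformers torch
from transformers import pipeline

# Load a small, freely available language model (GPT-2).
generator = pipeline("text-generation", model="gpt2")

# Given a prompt, the model repeatedly predicts a plausible next token
# from statistical patterns learned over its training text.
prompt = "In this study, we investigated the effect of"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```

The output reads fluently because the model has internalised which words tend to follow which, which is exactly why, as the article notes, such text can be hard to distinguish from human writing even when its factual content is unreliable.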
In some cases, the resulting text is indistinguishable from what people would write. For example, researchers who read medical journal abstracts generated by ChatGPT failed to identify one-third of them as written by machine, according to a December 2022 preprint. AI developers are expected to create even more powerful versions, including ones trained specifically on scientific literature—a prospect that has sent a shock wave through the scholarly publishing industry.