Last week, an environmental journal published a paper on the use of renewable energy in cleaning up contaminated land. To read it, you would have to pay 40 euros. But you still wouldn’t know for sure who wrote it.
The use of artificial intelligence systems such as ChatGPT in the writing of research outputs without disclosure is a significant concern, not least because such systems do not genuinely understand their instructions, the topic, or the text they produce. There is also a good chance that the material will be plagiarised, even if only as compression plagiarism. Institutions, research funding bodies, publishers and learned societies need to provide researchers with clear guidance on the use of AI systems such as large language models (LLMs) in research outputs.
“Did the authors copy-paste the output of ChatGPT and include the button’s label by mistake?” wondered Guillaume Cabanac, a professor of computer science at the University of Toulouse in France, in a comment on PubPeer.
And, he added, “How come this meaningless wording survived proofreading by the coauthors, editors, referees, copy editors, and typesetters?”
The case is the latest example of a growing trend of sloppy, undeclared use of ChatGPT in research. So far, Cabanac, whose work was covered in Nature last month, has flagged more than 30 papers on PubPeer that contain the two telltale, free-floating words “Regenerate response”, the label of a button in ChatGPT’s interface. And that’s not including articles that appear in predatory journals, the scientific sleuth told Retraction Watch.