Ethics watchdogs are looking out for potentially undisclosed use of generative AI in scientific writing. But there’s no foolproof way to catch it all yet.
IN ITS AUGUST edition, Resources Policy, an academic journal under the Elsevier publishing umbrella, featured a peer-reviewed study about how ecommerce has affected fossil fuel efficiency in developing nations. But buried in the report was a curious sentence: “Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual results should be included in the table.”
This story, published in WIRED, suggests that undisclosed use of ChatGPT and other LLMs is seeping into scientific publishing, and that we may not yet have an adequate defence against it. The case also raises an obvious question: how did the peer reviewers and editors miss such a glaring red flag?
The wording may sound familiar: The generative AI chatbot often prefaces its statements with this caveat, noting its limitations in delivering certain information. After a screenshot of the sentence was posted to X, formerly Twitter, by another researcher, Elsevier began investigating. The publisher is looking into the use of AI in this article and “any other possible instances,” Andrew Davis, vice president of global communications at Elsevier, told WIRED in a statement.
Elsevier’s AI policies do not prohibit the use of AI tools to help with writing, but they do require disclosure. The publishing company uses its own in-house AI tools to check for plagiarism and completeness, though it does not allow editors to use outside AI tools to review papers.