The language in Nature was pretty mild as far as freakouts go. ChatGPT and other similar A.I. tools, the editors wrote, threaten “the transparency and trustworthiness that the process of generating knowledge relies on … ultimately, research must have transparency in methods, and integrity and truth from authors.” The editor of Nature’s chief rival, Science, similarly blew his stack in a most genteel manner: “An AI program cannot be an author. A violation of these policies will constitute scientific misconduct no different from altered images or plagiarism of existing works,” he wrote.
All this pearl-clutching isn’t the result of a hypothetical problem. Just a few days earlier, Nature’s journalists had covered how ChatGPT-written abstracts were science-y enough to fool fellow scientists, and, worse, that A.I.-co-authored articles were already working their way into the peer-reviewed literature. Just as university teachers have already started finding ChatGPT-written essays in the wild and journalists have been discovering A.I.-written news articles, scientific journals suddenly realized that the abstract threat of machine-written “research” articles was already becoming very, very concrete. It seems like we’re just weeks away from a tsunami of fake papers hitting the peer-reviewed literature, swamping editors and drowning the scientific process in an ocean of garbage.
The journals are absolutely right to worry; ChatGPT and, presumably, its A.I. successors yet to come represent a potential existential threat to the peer review process—a fundamental mechanism that governs how modern science is done. But the nature of that challenge isn’t fundamentally about the recent, rapid improvement in A.I. mimicry as much as it is about a much slower, more insidious disease at the heart of our scientific process—the same problem that makes A.I. such a threat to university teaching and to journalism.