Researchers cannot always differentiate between AI-generated and original abstracts.
An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December [1]. Researchers are divided over the implications for science.
Continuing our recent discussion of ChatGPT, natural language processing (NLP) and artificial intelligence in research outputs, this piece looks at the degree to which machine-written abstracts are fooling academics into believing they were written by humans. The pace at which this technology is developing is startling. Research institutions, publishers, funding bodies and learned societies need to establish policies and guidance material, and to conduct professional development in this area. AHRECS is currently working on a guidance document that we will post to the subscribers’ area in the next few days.
The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use.
Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint [2] and an editorial [3] written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.