You know that text autocomplete function that makes your smartphone so convenient — and occasionally frustrating — to use? Well, now tools based on the same idea have progressed to the point that they are helping researchers to analyse and write scientific papers, generate code and brainstorm ideas.
As natural language processing has improved, artificial-intelligence tools have come to promise real assistance with the production of research outputs. They are, however, still imperfect, and researchers should be wary about how they use them. Research institutions should provide guidance on the use of AI in the conduct and reporting of research. A related issue is the degree to which a researcher is responsible if an AI tool breaches an institutional, national or international standard. This piece, published in Nature, takes a dive into these issues.
LLMs are neural networks that have been trained on massive bodies of text to process and, in particular, generate language. OpenAI, a research laboratory in San Francisco, California, created the best-known LLM, GPT-3, in 2020, by training a network to predict the next piece of text based on what came before. On Twitter and elsewhere, researchers have expressed amazement at its spookily human-like writing. And anyone can now use it, through the OpenAI programming interface, to generate text based on a prompt. (Prices start at about US$0.0004 per 750 words processed — a measure that combines reading the prompt and writing the response.)
“I think I use GPT-3 almost every day,” says computer scientist Hafsteinn Einarsson at the University of Iceland, Reykjavik. He uses it to generate feedback on the abstracts of his papers. In one example that Einarsson shared at a conference in June, some of the algorithm’s suggestions were useless, advising him to add information that was already included in his text. But others were more helpful, such as “make the research question more explicit at the beginning of the abstract”. It can be hard to see the flaws in your own manuscript, Einarsson says. “Either you have to sleep on it for two weeks, or you can have somebody else look at it. And that ‘somebody else’ can be GPT-3.”
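A workflow like Einarsson's can be sketched in a few lines of Python. This is a minimal, illustrative sketch only, assuming the `openai` Python package and an API key in the `OPENAI_API_KEY` environment variable; the model name, prompt wording and helper names here are assumptions, not anything the article specifies.

```python
def build_feedback_prompt(abstract: str) -> str:
    """Wrap a paper abstract in a reviewing prompt for the model."""
    return (
        "Suggest concrete improvements to the following paper abstract:\n\n"
        f"{abstract}\n"
    )


def get_feedback(abstract: str, model: str = "text-davinci-002") -> str:
    """Send the prompt to the OpenAI completions endpoint.

    Requires network access and the `openai` package; the import lives
    here so that build_feedback_prompt() works without it installed.
    """
    import openai

    response = openai.Completion.create(
        model=model,
        prompt=build_feedback_prompt(abstract),
        max_tokens=200,
    )
    return response["choices"][0]["text"].strip()
```

In practice one would paste in a draft abstract and read the returned suggestions critically, since (as Einarsson found) some will be useless or redundant while others are genuinely helpful.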