As the rate and volume of academic publications have risen, so too has the pressure on journal editors to quickly find reviewers to assess the quality of academic work. In this context, the potential of Artificial Intelligence (AI) to boost productivity and reduce workload has received significant attention. Drawing on evidence from an experiment using AI to learn and assess peer review outcomes, Alessandro Checco, Lorenzo Bracciale, Pierpaolo Loreti, Stephen Pinfield, and Giuseppe Bianchi discuss the prospects of AI for assisting peer review and the ethical dilemmas its application might produce.
Fig. 1: Stages of the peer review process.
Rather than pursuing grandiose visions of replacing human decision-making entirely, we are interested in understanding the extent to which AI might assist reviewers and authors in dealing with this burden. This gives rise to the question: can we use AI as a rudimentary tool to model human reviewer decision-making?
Experimenting with AI peer review
To test this proposition, we trained a neural network on a collection of manuscripts submitted to engineering conferences, together with their associated peer review decisions.
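The article does not spell out the model architecture or data pipeline, but the general setup is a supervised text-classification task: manuscript text in, accept/reject decision out. The sketch below is a minimal illustration of that kind of setup, assuming a TF-IDF representation and a small feed-forward network; the example texts, labels, and model choice are placeholders rather than the authors' actual data or model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

# Placeholder corpus: in practice this would be the full text of each submitted
# manuscript, paired with its recorded review outcome (1 = accept, 0 = reject).
texts = [
    "rigorous experimental methodology with clear baseline comparisons",
    "novel routing protocol evaluated on large scale network traces",
    "incremental contribution with limited evaluation and unclear claims",
    "well written survey but lacks original technical contribution",
    "strong theoretical analysis supported by extensive simulations",
    "poorly structured manuscript with missing related work discussion",
]
labels = [1, 1, 0, 0, 1, 0]

# Represent each manuscript as a TF-IDF bag-of-words vector.
vectorizer = TfidfVectorizer(max_features=5000)
X = vectorizer.fit_transform(texts)

# Hold out part of the corpus to check how well the model reproduces reviewer decisions.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.33, stratify=labels, random_state=0
)

# A small feed-forward neural network stands in for whatever architecture was actually used.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```

Whatever the specific model, the key point is that it learns only from past decisions: any systematic bias present in the historical reviews is inherited by the trained classifier.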