Recent discussions about peer review brought me back to thinking about Cathy O’Neil’s book, Weapons of Math Destruction, reviewed on this site in 2016. One of the complaints about peer review is that it is not objective — in fact, much of the reasoning behind the megajournal approach to peer review is meant to eliminate the subjectivity in deciding how significant a piece of research may be.
As algorithms play an increasing role in the design, conduct (including the collection and analysis of data), reporting, and evaluation of research, it is essential to recognize that they can be built upon values and beliefs that distort the body of knowledge. These tools are often treated as more objective than entirely human-based techniques, yet sometimes not even their original coders understand how they work or the degree to which they echo very subjective attitudes.
Discussions along these lines inevitably lead to suggestions that with improved artificial intelligence (AI), we'll reduce subjectivity through machine reading of papers and create a fairer system of peer review. O'Neil, in the TED Talk below, would argue that this is unlikely to happen. Algorithms, she tells us, are not objective, true, or scientific, and they do not make things fair. "That's a marketing trick."