ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


Algorithms Are Opinions Embedded in Code – Scholarly Kitchen (David Crotty | January 2018)

Published/Released on June 19, 2018 | Posted by Admin on April 6, 2018

Recent discussions about peer review brought me back to thinking about Cathy O’Neil’s book, Weapons of Math Destruction, reviewed on this site in 2016. One of the complaints about peer review is that it is not objective — in fact, much of the reasoning behind the megajournal approach to peer review is meant to eliminate the subjectivity in deciding how significant a piece of research may be.

As algorithms play an increasing role in the design, conduct (including the collection and analysis of data), reporting and evaluation of research, it is essential to recognise that they can be built upon values and beliefs that distort the body of knowledge. These tools are often treated as more objective than entirely human-based techniques, but sometimes not even their original coders understand how they work or the degree to which they echo very subjective attitudes.

I’m not convinced that judging a work’s “soundness” is any less subjective than judging its “importance”. Both are opinions, and how one rates a particular manuscript will vary from person to person. I often see papers in megajournals that are clearly missing important controls, but despite this, the reviewers and editor involved judged them to be sound. I’m not sure this is all that different from asking why some reviewer thought a paper was significant enough to be in Nature. Peer reviews, like letters of recommendation, are opinions.

Discussions along these lines inevitably lead to suggestions that with improved artificial intelligence (AI), we’ll reduce subjectivity through machine reading of papers and create a fairer system of peer review. O’Neil, in the TED Talk below, would argue that this is not likely to happen. Algorithms, she tells us, are not objective, true, or scientific, and they do not make things fair. “That’s a marketing trick.”

Read the rest of this discussion piece
