ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


Career advice: how to peer review a paper – THE (Sophie Inge | February 2018)

Posted by Admin on April 15, 2018

Detail, clarity and a constructive approach: all these are key to a helpful review, writes Sophie Inge

Congratulations, you’ve been invited by the editors of a prestigious journal to submit a peer review.

Like any good academic, you’ve done your homework: you’ve read the journal’s guidelines for reviewers and understand – more or less – what’s expected of you.

Now comes the hard part. In your hands, you hold the result of months – sometimes years – of hard work. Whether you think the paper is riddled with errors or a work of genius, your response needs to be careful and appropriate.

Read the rest of this discussion piece

Make reviews public, says peer review expert – Retraction Watch (Alison McCook | November 2017)

Posted by Admin on April 9, 2018

After more than 30 years working with scholarly journals, Irene Hames has some thoughts on how to improve peer review. She even wrote a book about it. As the first recipient of the Publons Sentinel Award, Hames spoke to us about the most pressing issues she believes are facing the peer review system — and what should be done about them.

Retraction Watch: At a recent event held as part of this year’s Peer Review Week, you suggested that journals publish their reviews, along with the final paper. Why?

Irene Hames: I don’t think that saying something is ‘peer reviewed’ can any longer be considered a badge of quality or rigour. The quality of peer review varies enormously, ranging from excellent through poor/inadequate to non-existent. But if reviewers’ reports were routinely published alongside articles – ideally with the authors’ responses and editorial decision correspondence – this would provide not only information on the standards of peer review and editorial handling, but also insight into why the decision to publish has been made, the strengths and weaknesses of the work, whether readers should bear reservations in mind, and so on. As I’ve said before, I can’t understand why this can’t become the norm. I haven’t heard any reasons why it shouldn’t, and I’d love the Retraction Watch audience to make suggestions in the comments here. I’m not advocating that the reviewers’ names should appear – I think that’s a decision that should be left to journals and their communities.

Read the rest of this discussion piece

French National Charter for Research Integrity (Codes | 2015)

Posted by Admin on April 7, 2018

In a knowledge and innovation society marked by the accelerating construction and transmission of knowledge and by international competition, public higher education and research institutions and universities are in a privileged position to address current and future challenges. They are responsible for the production, transmission and use of knowledge, and they contribute qualified expertise to public decision-making processes. Meeting this major responsibility, however, requires consolidating the relationship of trust between research and society.

The French National Charter for Research Integrity clarifies the professional responsibilities ensuring a rigorous and trustworthy scientific approach, and will apply in the context of all national and international partnerships.

This Charter is well aligned with the main international texts in this field: the European Charter for Researchers (2005), the Singapore Statement on Research Integrity (2010) and the European Code of Conduct for Research Integrity (ESF-ALLEA, 2011). The Charter falls within the reference framework put forward in the European research and innovation programme, HORIZON 2020.

Access the Charter

Algorithms Are Opinions Embedded in Code – Scholarly Kitchen (David Crotty | January 2018)

Posted by Admin on April 6, 2018

Recent discussions about peer review brought me back to thinking about Cathy O’Neil’s book, Weapons of Math Destruction, reviewed on this site in 2016. One of the complaints about peer review is that it is not objective — in fact, much of the reasoning behind the megajournal approach to peer review is meant to eliminate the subjectivity in deciding how significant a piece of research may be.

As algorithms play an increasing role in the design, conduct (including the collection and analysis of data), reporting and evaluation of research, it is essential to recognise that they can be built on values and beliefs that distort the body of knowledge. These tools are often treated as more objective than entirely human-based techniques, yet sometimes not even the original coders understand how they work or the degree to which they echo very subjective attitudes.

I’m not convinced that judging a work’s “soundness” is any less subjective than judging its “importance”. Both are opinions, and how one rates a particular manuscript will vary from person to person. I often see papers in megajournals that are clearly missing important controls, but despite this, the reviewers and editor involved judged them to be sound. I’m not sure this is all that different from asking why some reviewer thought a paper was significant enough to be in Nature. Peer reviews, like letters of recommendation, are opinions.

Discussions along these lines inevitably lead to suggestions that with improved artificial intelligence (AI), we’ll reduce subjectivity through machine reading of papers and create a fairer system of peer review. O’Neil, in the TED Talk below, would argue that this is not likely to happen. Algorithms, she tells us, are not objective, true, or scientific and they do not make things fair. “That’s a marketing trick.”

Read the rest of this discussion piece