Resource Library

China strengthens its campaign against scientific misconduct – C&EN (Hepeng Jia | September 2019)

Posted by Admin on September 21, 2019

New publishing standards aim for clarity on plagiarism, fabrication, and authorship

Amid increasing attention to scientific research integrity in China, the country has adopted a new set of standards to more clearly define misconduct in publishing journal articles. Experts hope the new clarity will make it easier to discipline researchers who violate the standards.

The State Administration of Press and Publication, the agency in charge of China’s publishing sector, released and adopted in July the Academic Publishing Specification—Definition of Academic Misconduct for Journals. Other standards developed by the agency cover citation and translation practices and the use of ancient Chinese.

The publishing specification defines and distinguishes plagiarism, fabrication, and falsification. It also addresses inappropriate authorship, duplicate or multiple submissions, and overlapping publications.

What’s next for Registered Reports? – Nature (Chris Chambers | September 2019)

Posted by Admin on September 19, 2019

Reviewing and accepting study plans before results are known can counter perverse incentives. Chris Chambers sets out three ways to improve the approach.

What part of a research study — hypotheses, methods, results, or discussion — should remain beyond a scientist’s control? The answer, of course, is the results: the part that matters most for publishing in prestigious journals and advancing careers. This paradox means that the careful scepticism required to avoid massaging data or skewing analysis is pitted against the drive to identify eye-catching outcomes. Unbiased, negative and complicated findings lose out to cherry-picked highlights that can bring prominent articles, grant funding, promotion and esteem.

The ‘results paradox’ is a chief cause of unreliable science. Negative, or null, results go unpublished, leading other researchers into unwittingly redundant studies. Ambiguous or otherwise ‘unattractive’ results are airbrushed (consciously or not) into publishable false positives, spurring follow-up research and theories that are bound to collapse.

Clearly, we need to change how we evaluate and publish research. For the past six years, I have championed Registered Reports (RRs), a type of research article that is radically different from conventional papers. The 30 or so journals that were early adopters have together published some 200 RRs, and more than 200 journals are now accepting submissions in this format (see ‘Rapid rise’). When it launched in 2017, Nature Human Behaviour became the first of the Nature journals to join this group. In July, it published its first two such reports. With RRs on the rise, now is a good time to take stock of their potential and limitations.

Read the rest of this discussion piece

Elsevier investigates hundreds of peer reviewers for manipulating citations – Nature (Dalmeet Singh Chawla | September 2019)

Posted by Admin on September 17, 2019

The publisher is scrutinizing researchers who might be inappropriately using the review process to promote their own work.

This week is Peer Review Week, which is a good time to reflect on the professional development your institution provides on peer review. Hopefully, it includes a warning against reviewers directing the authors they review to cite the reviewers’ own work. This case is a good example of why.

The Dutch publisher Elsevier is investigating hundreds of researchers whom it suspects of deliberately manipulating the peer-review process to boost their own citation numbers.

The publisher is looking into the possibility that some peer reviewers are encouraging the authors of work under review to cite the reviewers’ own research in exchange for positive reviews — a frowned-on practice broadly termed coercive citation.

Elsevier’s probe has also revealed that several of these reviewers seem to be engaging in other questionable publishing practices in studies that they have themselves authored. The Elsevier analysts who uncovered the activity told Nature that they “discovered clear evidence of peer-review manipulation” and of academics publishing the same studies more than once. Elsevier said that its investigations would lead to some of these studies being retracted.

Read the rest of this discussion piece

Why we shouldn’t take peer review as the ‘gold standard’ – The Washington Post (Paul D. Thacker and Jon Tennant | August 2019)

Posted by Admin on September 10, 2019

It’s too easy for bad actors to exploit the process and mislead the public

In July, India’s government dismissed a research paper finding that the country’s economic growth had been overestimated, saying the paper had not been “peer reviewed.” At a conference for plastics engineers, an economist from an industry group dismissed environmental concerns about plastics by claiming that some of the underlying research was “not peer reviewed.” And the Trump administration — not exactly known for its fealty to science — attempted to reject a climate change report by stating, incorrectly, that it lacked peer review.

Researchers commonly refer to peer review as the “gold standard,” which makes it seem as if a peer-reviewed paper — one sent by journal editors to experts in the field who assess and critique it before publication — must be legitimate, and one that’s not reviewed must be untrustworthy. But peer review, a practice dating to the 17th century, is neither golden nor standardized. Studies have shown that journal editors prefer reviewers of the same gender, that women are underrepresented in the peer review process, and that reviewers tend to be influenced by demographic factors like the author’s gender or institutional affiliation. Shoddy work often makes it past peer reviewers, while excellent research has been shot down. Peer reviewers often fail to detect bad research, conflicts of interest and corporate ghostwriting.

Meanwhile, bad actors exploit the process for professional or financial gain, leveraging peer review to mislead decision-makers. For instance, the National Football League used the words “peer review” to fend off criticism of studies by the Mild Traumatic Brain Injury Committee, a task force the league founded in 1994, which found little long-term harm from sport-induced brain injuries in players. But the New York Times later discovered that the scientists involved had omitted more than 100 diagnosed concussions from their studies. What’s more, the NFL’s claim that the research had been rigorously vetted ignored that the process was incredibly contentious: Some reviewers were adamant that the papers should not have been published at all.

Read the rest of this discussion piece
