Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Resource Library

What leads to bias in the scientific literature? New study tries to answer – Retraction Watch (Alison McCook | March 2017)

Posted by Admin on June 2, 2017
 

By now, most of our readers are aware that some fields of science have a reproducibility problem. Part of the problem, some argue, is the publishing community’s bias toward dramatic findings — namely, studies that show something has an effect on something else are more likely to be published than studies that don’t.

A thought-provoking Retraction Watch reflection on what is really fuelling research misconduct and scientific bias, including whether the ‘pressure to publish’ is truly at fault.

Many have argued that scientists publish such data because that’s what is rewarded — by journals and, indirectly, by funders and employers, who judge a scientist based on his or her publication record. But a new meta-analysis in PNAS is saying it’s a bit more complicated than that.
In a paper released today, researchers led by Daniele Fanelli and John Ioannidis — both at Stanford University — suggest that the so-called “pressure-to-publish” does not appear to bias studies toward larger so-called “effect sizes.” Instead, the researchers argue that other factors were a bigger source of bias than the pressure-to-publish, namely the use of small sample sizes (which could contain a skewed sample that shows stronger effects), and relegating studies with smaller effects to the “gray literature,” such as conference proceedings, PhD theses, and other less publicized formats.
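The point about small samples is worth a quick illustration: effect-size estimates from small studies are noisy, so if only the ‘significant’ ones reach journals, the published average drifts upwards. Below is a minimal simulation sketching that mechanism, assuming Python with NumPy; the sample sizes, the true effect of 0.2 and the significance cut-off are illustrative choices, not figures from the PNAS paper.

```python
# Illustrative sketch only (not code from the PNAS study): why small samples,
# combined with selective publication, can inflate reported effect sizes.
import numpy as np

rng = np.random.default_rng(0)
true_d = 0.2  # assumed true standardised effect (hypothetical)

for n in (10, 50, 200):  # per-group sample sizes (hypothetical)
    estimates = []
    for _ in range(20_000):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(true_d, 1.0, n)
        pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
        estimates.append((treatment.mean() - control.mean()) / pooled_sd)  # Cohen's d
    estimates = np.array(estimates)
    # Crude stand-in for "p < .05": |d| beyond an approximate critical value.
    significant = estimates[np.abs(estimates) > 1.96 * np.sqrt(2 / n)]
    print(f"n={n:3d}  mean d (all) = {estimates.mean():.2f}  "
          f"mean d ('significant' only) = {significant.mean():.2f}")
```

In this sketch the average over all simulated studies stays near the true effect at every sample size, while the average of the ‘significant’ subset is inflated most at the smallest n, which is one way small samples can end up associated with stronger reported effects.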

Read the rest of this discussion piece
Other items of Daniele Fanelli’s work appear in this library

Redefine misconduct as distorted reporting – Nature: World View Column (Daniele Fanelli | 2013)

Posted by Admin on June 2, 2017
 

To make misconduct more difficult, the scientific community should ensure that it is impossible to lie by omission, argues Daniele Fanelli.

Against an epidemic of false, biased and falsified findings, the scientific community’s defences are weak. Only the most egregious cases of misconduct are discovered and punished. Subtler forms slip through the net, and there is no protection from publication bias.

Delegates from around the world will discuss solutions to these problems at the 3rd World Conference on Research Integrity (wcri2013.org) in Montreal, Canada, on 5–8 May. Common proposals, debated in Nature and elsewhere, include improving mentorship and training, publishing negative results, reducing the pressure to publish, pre-registering studies, teaching ethics and ensuring harsh punishments.

Read the rest of this discussion piece

Nightmare scenario: Text stolen from manuscript during review – Retraction Watch (Victoria Stern | March 2017)

Posted by Admin on May 27, 2017
 

A food science journal has retracted a paper over “a breach of reviewer confidentiality,” after editors learned it contained text from an unpublished manuscript — which one of the authors appears to have reviewed for another journal.

This awful case highlights the importance of professional development on the conduct of peer review, and underlines the need for good communication between collaborating researchers.

The publisher and editors-in-chief of the Journal of Food Process Engineering became aware of the breach when the author of the unpublished manuscript lodged a complaint that his paper, under review at another journal, had been plagiarized by the now retracted paper.
We’re hazy on a few details in this case. Although the journal editor told us the “main author” of the retracted paper reviewed the original manuscript for another journal, the corresponding author of the retracted paper said he was not to blame. (More on that below.)

Read the rest of this news story

Some Social Scientists Are Tired of Asking for Permission – The New York Times (Kate Murphy | May 2017)

Posted by Admin on May 25, 2017
 

Sometimes a change to national policy isn’t enough to alter institutional practice – especially when that practice has been entrenched for a few decades and is wrapped in institutional risk. This New York Times story highlights why there’s so much chatter around the change to the US ‘Common Rule’.

If you took Psychology 101 in college, you probably had to enroll in an experiment to fulfill a course requirement or to get extra credit. Students are the usual subjects in social science research — made to play games, fill out questionnaires, look at pictures and otherwise provide data points for their professors’ investigations into human behavior, cognition and perception.
But who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical? That question has been roiling the academic community since the Department of Health and Human Services’ Office for Human Research Protections revised its rules in January.
The revision exempts from oversight studies involving “benign behavioral interventions.” This was welcome news to economists, psychologists and sociologists who have long complained that they need not receive as much scrutiny as, say, a medical researcher.

Read the rest of this discussion piece
