Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Resource Library

For problematic papers, don’t retract or correct, say publishing experts: Amend – Retraction Watch (Alison McCook | April 2017)

Posted by Admin on June 9, 2017

A group of publishing experts has proposed a somewhat radical idea: instead of retracting papers, or issuing corrections that address problems, authors should amend published articles. Here’s how it would work: any post-publication changes would be added as amendments labeled “insubstantial,” “substantial,” or “complete” (equivalent to a retraction). Is this a better way? We spoke with the authors of a preprint on bioRxiv — Virginia Barbour, chair of the Committee on Publication Ethics (COPE); Theodora Bloom, executive editor of The BMJ; Jennifer Lin, director of product management at Crossref; and Elizabeth Moylan, senior editor of research integrity at BioMed Central.

Retraction Watch: Why do you think it’s a good idea to amend articles, rather than issue formal retractions or corrections?

Authors: We think there are two main issues that mean the current types of correction and retraction don’t serve the scientific community well.

Read the rest of this interview

Why do researchers commit misconduct? A new preprint offers some clues – Retraction Watch (Ivan Oransky | April 2017)

Posted by Admin on June 7, 2017

“Why Do Scientists Fabricate And Falsify Data?” That’s the start of the title of a new preprint posted on bioRxiv this week by researchers whose names Retraction Watch readers will likely find familiar. Daniele Fanelli, Rodrigo Costas, Ferric Fang (a member of the board of directors of our parent non-profit organization), Arturo Casadevall, and Elisabeth Bik have all studied misconduct, retractions, and bias. In the new preprint, they used a set of PLOS ONE papers that earlier research had shown to contain manipulated images to test which factors were linked to such misconduct. The results confirmed some earlier work, but also provided some evidence contradicting previous findings. We spoke to Fanelli by email.

Retraction Watch (RW): This paper builds on a previous study by three of your co-authors, on the rate of inappropriate image manipulation in the literature. Can you explain how it took advantage of those findings, and why that was an important data set?

Daniele Fanelli (DF): The data set in question is unique in offering a virtually unbiased proxy of the rate of scientific misconduct. Most data that we have about misconduct comes either from anonymous surveys or from retracted publications. Both of these sources have important limitations. Surveys are by definition reports of what people think or admit to having done, and usually come from a self-selected group of voluntary respondents. Retractions result from complex sociological processes, and therefore their occurrence is determined by multiple uncontrollable factors, such as the policies of retracting journals, the policies of the country in which authors are working, the level of scrutiny that a journal or a field is subject to, the willingness of research institutions to cooperate in investigations, etc.

Read the rest of this interview

What leads to bias in the scientific literature? New study tries to answer – Retraction Watch (Alison McCook | March 2017)

Posted by Admin on June 2, 2017

By now, most of our readers are aware that some fields of science have a reproducibility problem. Part of the problem, some argue, is the publishing community’s bias toward dramatic findings — namely, studies that show something has an effect on something else are more likely to be published than studies that don’t.

A thought-provoking Retraction Watch reflection on what is really fuelling research misconduct and scientific bias, including whether the ‘pressure to publish’ is really at fault.

Many have argued that scientists publish such data because that’s what is rewarded — by journals and, indirectly, by funders and employers, who judge a scientist based on his or her publication record. But a new meta-analysis in PNAS is saying it’s a bit more complicated than that.
In a paper released today, researchers led by Daniele Fanelli and John Ioannidis — both at Stanford University — suggest that the so-called “pressure to publish” does not appear to bias studies toward larger effect sizes. Instead, the researchers argue that other factors were a bigger source of bias than the pressure to publish, namely the use of small sample sizes (which can yield a skewed sample that shows stronger effects) and the relegation of studies with smaller effects to the “gray literature,” such as conference proceedings, PhD theses, and other less publicized formats.

Read the rest of this discussion piece
Other examples of Daniele Fanelli’s work appear in this library.

Redefine misconduct as distorted reporting – Nature: World View Column (Daniele Fanelli | 2013)

Posted by Admin on June 2, 2017

To make misconduct more difficult, the scientific community should ensure that it is impossible to lie by omission, argues Daniele Fanelli.

Against an epidemic of false, biased and falsified findings, the scientific community’s defences are weak. Only the most egregious cases of misconduct are discovered and punished. Subtler forms slip through the net, and there is no protection from publication bias.

Delegates from around the world will discuss solutions to these problems at the 3rd World Conference on Research Integrity (wcri2013.org) in Montreal, Canada, on 5–8 May. Common proposals, debated in Nature and elsewhere, include improving mentorship and training, publishing negative results, reducing the pressure to publish, pre-registering studies, teaching ethics and ensuring harsh punishments.

Read the rest of this discussion piece
