ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Understanding the complexities of retractions (Amy Riegelman and Caitlin Bakker | January 2018)

Posted by Admin on February 19, 2018

Recommended resources

Reasons for retracted publications range from honest errors made by authors or publishers to research misconduct (e.g., falsified data, fraudulent peer review). A retraction represents a status change of a publication in the scholarly literature. Other examples of status changes include correction or erratum. A retraction could be initiated by many parties, including authors, institutions, or journal editors. The U.S. National Library of Medicine annually reports on the number of retracted publications indexed within PubMed. While the overall rate of retractions is still very small, retractions have increased considerably in the last decade from 97 retracted articles in 2006 to 664 in 2016.1

Quite simply an excellent resource that we urge institutions to include in your research integrity resource library, and all ECRs to read and keep for ongoing reference.

As librarians help users navigate research platforms and maintain awareness of publication status changes, it is important to understand both the publishing and discovery landscape. Guidelines exist to help publishers and platforms identify retractions, but a recent study found inconsistent representations of retractions across various platforms.2 Another consideration is when scholars export citations or full-text articles out of various discovery platforms to personal libraries (e.g., Mendeley, DropBox).
Philip Davis studied retracted articles residing in personal libraries and nonpublisher websites. Among the findings, Mendeley libraries contained many retracted articles, and Davis concluded that this decentralized access without automated status updates “may come with the cost of promoting incorrect, invalid, or untrustworthy science.”3

RIEGELMAN, Amy; BAKKER, Caitlin. Understanding the complexities of retractions: Recommended resources. College & Research Libraries News, [S.l.], v. 79, n. 1, p. 38. ISSN 2150-6698.
Publisher (Open Access):

The Rush to Publication: An Editorial and Scientific Mistake – JAMA Editorial (Howard Bauchner | September 2017)

Posted by Admin on February 3, 2018

The world moves at a far faster pace than even a decade ago. Instantaneous access to electronic communication via email and social media is available 24 hours a day, virtually anywhere in the world, on the ground and in the air, with video and audio on demand. Thus, no one ever needs to be—or ever is—disconnected from the world.

The speed of communication has clearly affected clinical and laboratory research. There appears to be an increasing rush to publish, or at least to make the results of studies immediately publicly available. It is unclear if flawed science is more common than in the past, but the number of accounts of serious problems with scientific reports appears to be increasing, with more high-profile retractions and increasing numbers of retractions with replacements (major inadvertent errors with a change in the findings and conclusions).1 However, because more research is being published, it is difficult to obtain precise numerator (retractions) and denominator data (all research conducted, published and unpublished).2

Nonetheless, concerns about the reproducibility of laboratory-based experiments3 and the need to reanalyze clinical data4 certainly suggest increasing challenges regarding the quality and transparency of research. High-visibility examples leave an impression of questionable science that is likely contributing to the public discourse over the meaning and definition of facts.

Read the rest of this editorial piece

What types of researchers are most likely to recycle text? The answers might surprise you – Retraction Watch (Alison McCook | October 2017)

Posted by Admin on January 27, 2018

Historians, economists, biochemists, psychologists: Who reuses their own material most often? Does the rate depend on how many authors a paper has, and how far along a researcher is in his or her career? Serge Horbach and Willem Halffman at Radboud University Nijmegen in the Netherlands tried to answer these questions by reviewing more than 900 papers published by researchers based in the Netherlands. And they were surprised by their findings, published last month in Research Policy.

Too much recycled text, without it being referenced as such, can be a publication ethics problem. It can also be a copyright issue for the publisher of the original work. Sometimes even the recycling of a single sentence can be significant. Caution is recommended for authors: rephrase or cite text you want to recycle, because a forced retraction can have a devastating impact on your career that lasts for decades.

Retraction Watch: How does the amount of text recycling you identified among researchers at Dutch universities (6.1%) compare to what other studies have shown among other groups of researchers?

Willem Halffman and Serge Horbach: Previous studies found varying degrees of text recycling, ranging from 3% to as much as 60%. Ours is, as far as we know, the biggest study on text recycling so far (N=922). Hence we think our figure of 6% is more realistic. However, the precise degree of text recycling depends very much on the threshold used. We used 10% of the text as our detection limit and then stuck to the Dutch guidelines for acceptable text recycling in our manual check. Judging the acceptability of text reuse depends on conventions that may be implicit, or vary between publication cultures of different research fields or even countries. Hence, we think our main finding is not so much the number of 6%, but the variation behind that number. We had expected to find more recycled text in biochemistry, where you could expect formulaic descriptions of highly standardised methods, but there was hardly any. Among Dutch historians we found virtually no text recycling. However, among Dutch psychologists text recycling is more elevated, and among economists it occurs in as many as one in seven publications. Text recycling also occurs more often among productive authors, in papers with fewer co-authors, and in journals that do not specify clear rules. This variation is more informative about the origins of text recycling than the overall degree.

Read the rest of this interview

Also see

Horbach, S.P.J.M. & Halffman, W. (2017) The extent and causes of academic text recycling or ‘self-plagiarism’. Research Policy. ISSN 0048-7333.
Publisher (open access):…

Politics Moves Fast. Peer Review Moves Slow. What’s A Political Scientist To Do? – FiveThirtyEight (Maggie Koerth-Baker | December 2017)

Posted by Admin on January 23, 2018

Politics has a funny way of turning arcane academic debates into something much messier. We’re living in a time when so much in the news cycle feels absurdly urgent and partisan forces are likely to pounce on any piece of empirical data they can find, either to champion it or tear it apart, depending on whether they like the result. That has major implications for many of the ways knowledge enters the public sphere — including how academics publicize their research.

A conundrum for political scientists. An excellent discussion piece that shows the impact dodgy research can have on community opinion in politics.

That process has long been dominated by peer review, which is when academic journals put their submissions in front of a panel of researchers to vet the work before publication. But the flaws and limitations of peer review have become more apparent over the past decade or so, and researchers are increasingly publishing their work before other scientists have had a chance to critique it. That’s a shift that matters a lot to scientists, and the public stakes of the debate go way up when the research subject is the 2016 election. There’s a risk, scientists told me, that preliminary research results could end up shaping the very things that research is trying to understand.
Take, for instance, two studies that hit the press in late September. One was a survey of nonvoters in Wisconsin that seemed to show that the election could have swung President Trump’s way because of voter ID laws that kept people from the polls. The other was an analysis of junk news shared on Twitter that offered evidence of misinformation being targeted at people living in swing states in a way that implied a strategic effort. Neither had gone through peer review before receiving largely uncritical write-ups in major publications like The New York Times and The Washington Post. Both contained the sort of everyday flaws that the peer review process is designed to catch — flaws that undermined the reliability of the results.

Read the rest of this discussion piece