Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS), ACN 101321555
Working with research integrity – guidance for research performing organisations: The Bonn PRINTEGER Statement (Resource | February 2018)

Posted by Admin on January 28, 2020

About the document

Research integrity is inherently linked to the quality and excellence of research and science for policy. To further this agenda, the European PRINTEGER project (Promoting Integrity as an Integral Dimension of Excellence in Research) has conducted comprehensive studies on research integrity and misconduct.[i] The research shows that there is a need for increased focus and guidance on how organisations may address such issues. In order to develop guidance that is anchored beyond the PRINTEGER project consortium, a consensus panel was established with a broader range of members representing wide practical and theoretical understandings of how to strengthen integrity in research organisations. The panel consists of members from different European countries and organisations, with diversity in terms of gender, geography, functions, seniority and disciplinary background. The members discussed recommendations in two rounds by email (a Delphi process) and at a final one-day meeting during the PRINTEGER Conference on Research Integrity in Bonn, Germany, on 7 February 2018. This document presents the outcome of the consensus process.

The authors of this contribution are the signatories of the statement. While drawing on their professional backgrounds, the panel members are signatories of the statement in their private capacity. The statement represents the agreement of all members.


Research—and thus research misconduct—mostly takes place in a professional and organisational setting, and the organisations are normally held to be co-responsible for the conduct of their staff. There are therefore clear expectations (in some countries, legally mandated) for organisations to systematically work to promote responsible conduct in research, strengthen research integrity and reduce the risk of research misconduct. This document emphasises that responsibility for ethical research lies with everyone who is active in research, but especially with leaders in research performing organisations. Researchers’ morals alone cannot ensure research integrity; good conditions for exercising integrity must also be created at the level of the organisation and the research system.

Read the rest of this discussion piece

Research intelligence: how to sniff out errors and fraud – Times Higher Education (Jack Grove | January 2020)

Posted by Admin on January 27, 2020

A growing number of data detectives are on the hunt for sloppy science and dodgy statistics. Jack Grove examines the methods they use

These days it is not just co-authors or peer reviewers who are checking journal papers for errors: a growing number of self-appointed fraud busters are scanning scientific literature for flaws.

This unpaid and mostly anonymous endeavour has led to the retractions of hundreds of papers and even disciplinary action where wrongdoing is exposed.

So how can scholars catch errors when reviewing others’ papers, or when double-checking their own work or that of collaborators?

Read the rest of this discussion piece

Quality of reports of investigations of research integrity by academic institutions (Papers: Andrew Grey, et al | February 2019)

Posted by Admin on January 27, 2020

Academic institutions play important roles in protecting and preserving research integrity. Concerns have been expressed about the objectivity, adequacy and transparency of institutional investigations of potentially compromised research integrity. We assessed the reports provided to us of investigations by three academic institutions of a large body of overlapping research with potentially compromised integrity.

In 2017, we raised concerns with four academic institutions about the integrity of > 200 publications co-authored by an overlapping set of researchers. Each institution initiated an investigation. By November 2018, three had reported to us the results of their investigations, but only one report was publicly available. Two investigators independently assessed each available report using a published 26-item checklist designed to determine the quality and adequacy of institutional investigations of research integrity. Each assessor recorded additional comments ad hoc.

Concerns raised with the institutions were overlapping and wide-ranging, and included both general and publication-specific concerns. The number of potentially affected publications at individual institutions ranged from 34 to 200. The duration of investigation by the three institutions which provided reports was 8–17 months. These investigations covered 14%, 15% and 77%, respectively, of potentially affected publications. Between-assessor agreement using the quality checklist was 0.68, 0.72 and 0.65 for each report. Only 4/78 individual checklist items were addressed adequately; a further 14 could not be assessed. Each report was graded inadequate overall. Reports failed to address publication-specific concerns and focussed more strongly on determining research misconduct than evaluating the integrity of publications.

Our analyses identify important deficiencies in the quality and reporting of institutional investigation of concerns about the integrity of a large body of research reported by an overlapping set of researchers. They reinforce disquiet about the ability of institutions to rigorously and objectively oversee integrity of research conducted by their own employees.


Keywords: Research Integrity, Institution, Misconduct, Investigation

Grey, A., Bolland, M., Gamble, G. & Avenell, A. (2019) Quality of reports of investigations of research integrity by academic institutions. Research Integrity and Peer Review 4(3).
Publisher (Open Access):

Meta-analysis study indicates we publish more positive results – ARS Technica (John Timmer | December 2019)

Posted by Admin on January 13, 2020

Meta-analyses will only produce more reliable results if the studies are good.

While science as a whole has produced remarkably reliable answers to a lot of questions, it does so despite the fact that any individual study may not be reliable. Issues like small errors on the part of researchers, unidentified problems with materials or equipment, or the tendency to publish positive answers can alter the results of a single paper. But collectively, through multiple studies, science as a whole inches towards an understanding of the underlying reality.

Similar findings have been reported before, but it is important to rearticulate the value of negative results to science and practice. This speaks to poor research culture and training. University education, and even secondary and primary schooling, rarely acknowledge that failure is part of discovery. The rewards for 'success' are high, and the resulting temptation for students can lead to research misconduct.

A meta-analysis is a way to formalize that process. It takes the results of multiple studies and combines them, increasing the statistical power of the analysis. This may cause exciting results seen in a few small studies to vanish into statistical noise, or it can tease out a weak effect that’s completely lost in more limited studies.
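The combining step described above can be sketched as inverse-variance (fixed-effect) pooling, one common way meta-analyses weight studies. This is a generic illustration, not the method of the study discussed in the article, and the study numbers are invented:

```python
import math

def fixed_effect_meta(effects, std_errors):
    # Inverse-variance pooling: weight each study by 1/SE^2, so
    # precise studies dominate and the pooled standard error shrinks.
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical small studies of the same effect:
effects = [0.30, 0.10, 0.22]
std_errors = [0.15, 0.12, 0.18]
est, se = fixed_effect_meta(effects, std_errors)
# The pooled standard error is smaller than any individual study's,
# which is how combining studies increases statistical power.
```

A weak effect that no single study can distinguish from noise may become detectable in the pooled estimate; conversely, an exciting result from one small study can shrink once it is averaged against the rest.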

But a meta-analysis only works its magic if the underlying data is solid. And a new study that looks at multiple meta-analyses (a meta-meta-analysis?) suggests that one of those factors—our tendency to publish results that support hypotheses—is making the underlying data less solid than we'd like.

Publication bias

It's possible for publication bias to be a form of research misconduct. If researchers are convinced of their hypothesis, they might actively avoid publishing any results that would undercut their own ideas. But there are plenty of other ways for publication bias to set in. Researchers who find a weak effect might hold off on publishing in the hope that further research would be more convincing. Journals also have a tendency to favour publishing positive results (those where a hypothesis is confirmed) and to avoid publishing studies that don't see any effect at all. Researchers, being aware of this, might adjust the publications they submit accordingly.
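A toy simulation makes the mechanism concrete: even when every study is honestly reported, filtering the literature to statistically significant "positive" results inflates the apparent effect. All numbers here are illustrative assumptions, not data from the article:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # the real (weak) effect, chosen arbitrarily
SE = 0.2            # each study's standard error
N_STUDIES = 2000

all_results = []
published = []
for _ in range(N_STUDIES):
    estimate = random.gauss(TRUE_EFFECT, SE)
    all_results.append(estimate)
    # Publication filter: only "positive" studies survive, i.e. those
    # whose estimate is significantly above zero (z > 1.96).
    if estimate / SE > 1.96:
        published.append(estimate)

full_mean = statistics.mean(all_results)  # close to TRUE_EFFECT
biased_mean = statistics.mean(published)  # substantially larger
```

A meta-analysis that can only see the published studies pools the biased sample, so the literature as a whole overstates the effect even though no individual researcher fabricated anything.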


Read the rest of this discussion piece