ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Commission welcomes new European Code of Conduct for Research Integrity – News alert from EU Commission (March 2017)

Posted by Admin on April 2, 2017

The European Commission today received the new European Code of Conduct for Research Integrity, which aims to promote the responsible conduct of research and help improve its quality and reliability.

This new Code was developed by national academies of sciences and humanities through their umbrella organisation, the All European Academies (ALLEA) federation, in close cooperation with the European Commission. Professor Günter Stock, the President of ALLEA, presented the Code to Carlos Moedas, Commissioner for Research, Science and Innovation.

Commissioner Moedas said: “The Commission’s recent White Paper on the Future of Europe shows that we need knowledge and innovation to respond to global challenges and to address the needs of people in the European Union. The public needs full trust in science, and this can only be achieved if the highest level of research ethics and integrity are guaranteed. This goes hand in hand with our Open Science agenda to ensure open access to scientific publications and data. I warmly thank ALLEA and its member academies for producing this new Code of Conduct for Research Integrity. I am sure it will serve as a model for organisations and researchers across Europe.”

Read the rest of this news release
Factsheet about the Code
PDF copy of the Code

Interventions to prevent misconduct and promote integrity in research and publication (Papers: Ana Marusic, et al | 2016)

Posted by Admin on March 31, 2017


Improper practices and unprofessional conduct in clinical research have been shown to waste a significant portion of healthcare funds and harm public health.

Our objective was to evaluate the effectiveness of educational or policy interventions in research integrity or responsible conduct of research on the behaviour and attitudes of researchers in health and other research areas.

This isn’t the first paper to conclude that there isn’t good data available to determine whether training is effective, but it is certainly a detailed analysis of what the authors did collect. Also see the Retraction Watch link at the end of this entry.

Search methods:
We searched the CENTRAL, MEDLINE, LILACS and CINAHL health research bibliographical databases, as well as the Academic Search Complete, AGRICOLA, GeoRef, PsycINFO, ERIC, SCOPUS and Web of Science databases. We performed the last search on 15 April 2015 and the search was limited to articles published between 1990 and 2014, inclusive. We also searched conference proceedings and abstracts from research integrity conferences and specialized websites. We handsearched 14 journals that regularly publish research integrity research.
Selection criteria:
We included studies that measured the effects of one or more interventions, i.e. any direct or indirect procedure that may have an impact on research integrity and responsible conduct of research in its broadest sense, where participants were any stakeholders in research and publication processes, from students to policy makers. We included randomized and non-randomized controlled trials, such as controlled before-and-after studies, with comparisons of outcomes in the intervention versus non-intervention group or before versus after the intervention. Studies without a control group were not included in the review.
Data collection and analysis:
We used the standard methodological procedures expected by Cochrane. To assess the risk of bias in non-randomized studies, we used a modified Cochrane tool, in which we used four out of six original domains (blinding, incomplete outcome data, selective outcome reporting, other sources of bias) and two additional domains (comparability of groups and confounding factors). We categorized our primary outcome into the following levels: 1) organizational change attributable to intervention, 2) behavioural change, 3) acquisition of knowledge/skills and 4) modification of attitudes/perceptions. The secondary outcome was participants’ reaction to the intervention.
Main results:
Thirty-one studies involving 9571 participants, described in 33 articles, met the inclusion criteria. All were published in English. Fifteen studies were randomized controlled trials, nine were controlled before-and-after studies, four were non-equivalent controlled studies with a historical control, one was a non-equivalent controlled study with a post-test only, and two were non-equivalent controlled studies with pre- and post-test findings for the intervention group and post-test for the control group. Twenty-one studies assessed the effects of interventions related to plagiarism and 10 studies assessed interventions in research integrity/ethics. Participants included undergraduates, postgraduates and academics from a range of research disciplines and countries, and the studies assessed different types of outcomes.

We judged most of the included randomized controlled trials to have a high risk of bias in at least one of the assessed domains, and in the case of non-randomized trials there were no attempts to alleviate the potential biases inherent in the non-randomized designs.

We identified a range of interventions aimed at reducing research misconduct. Most interventions involved some kind of training, but methods and content varied greatly and included face-to-face and online lectures, interactive online modules, discussion groups, homework and practical exercises. Most studies did not use standardized or validated outcome measures, and it was impossible to synthesize findings from studies with such diverse interventions, outcomes and participants. Overall, there is very low quality evidence that various methods of training in research integrity had some effects on participants’ attitudes to ethical issues but minimal (or short-lived) effects on their knowledge.
Training about plagiarism and paraphrasing had varying effects on participants’ attitudes towards plagiarism and their confidence in avoiding it, but training that included practical exercises appeared to be more effective. Training on plagiarism had inconsistent effects on participants’ knowledge about and ability to recognize plagiarism. Active training, particularly if it involved practical exercises or use of text-matching software, generally decreased the occurrence of plagiarism although results were not consistent. The design of a journal’s author contribution form affected the truthfulness of information supplied about individuals’ contributions and the proportion of listed contributors who met authorship criteria. We identified no studies testing interventions for outcomes at the organizational level. The numbers of events and the magnitude of intervention effects were generally small, so the evidence is likely to be imprecise. No adverse effects were reported.
Authors’ conclusions:
The evidence base relating to interventions to improve research integrity is incomplete, and the studies that have been done are heterogeneous, inappropriate for meta-analyses, and of uncertain applicability to other settings and populations. Many studies had a high risk of bias because of the choice of study design, and interventions were often inadequately reported. Even when randomized designs were used, findings were difficult to generalize. Due to the very low quality of evidence, the effects of training in responsible conduct of research on reducing research misconduct are uncertain. Low quality evidence indicates that training about plagiarism, especially if it involves practical exercises and use of text-matching software, may reduce the occurrence of plagiarism.

Marusic A, Wager E, Utrobicic A, Rothstein HR and Sambunjak D (2016) Interventions to prevent misconduct and promote integrity in research and publication. Cochrane Database of Systematic Reviews. Issue 4. Art. No.: MR000038. DOI: 10.1002/14651858.MR000038.pub2.
Research Gate:… [accessed Jan 29, 2017] Publisher:…

Also see:

Stopping the slide to research fraud – CMAJ News (Miriam Shuchman | January 2017)

Posted by Admin on March 29, 2017

Young researchers may feel unspoken pressure to ensure their data fit a hypothesis

During a 2016 department research retreat in Ontario, a medical school professor described cases of research fraud that had received international attention. Several students came up afterward to say they connected personally to his topic. During discussions with their research supervisors, they “felt an unspoken expectation to ensure their data fit with the hypothesis,” said the professor. “Like, if there are any outliers, get rid of it, that kind of thing.”

The professor, who declined to be named to protect the students’ identities, was surprised that he’d struck a chord. But surveys over the past several years show it’s not rare for scientists to cut corners in their work. In a 2009 meta-analysis of 18 large surveys, Daniele Fanelli of Stanford University found that up to 34% of scientists — including medical researchers — admitted “dropping data points based on a gut feeling” or other questionable research practices, and as many as 72% had seen questionable behaviour by a colleague.

Read the rest of this discussion piece

Research Ethics Timeline (1932-Present)

Posted by Admin on March 26, 2017

A comprehensive (albeit US-focused) human research ethics timeline, from Tuskegee (1932-72) to the Final Rule revisions to the Common Rule (2017).

Resnik, D.B. (2017) Research Ethics Timeline (1932-Present). National Institute of Environmental Health Sciences.