ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)
Why all randomised controlled trials produce biased results (Papers: Alexander Krauss | March 2018)

Posted by Admin on June 7, 2018
 

Abstract

Background: Randomised controlled trials (RCTs) are commonly viewed as the best research method to inform public health and social policy. Usually they are thought of as providing the most rigorous evidence of a treatment’s effectiveness without strong assumptions, biases and limitations.

Objective: This is the first study to examine that hypothesis by assessing the 10 most cited RCT studies worldwide.

Data sources: These 10 RCT studies with the highest number of citations in any journal (up to June 2016) were identified by searching Scopus (the largest database of peer-reviewed journals).

Results: This study shows that these world-leading RCTs that have influenced policy produce biased results by illustrating that participants’ background traits that affect outcomes are often poorly distributed between trial groups, that the trials often neglect alternative factors contributing to their main reported outcome and, among many other issues, that the trials are often only partially blinded or unblinded. The study here also identifies a number of novel and important assumptions, biases and limitations not yet thoroughly discussed in existing studies that arise when designing, implementing and analysing trials.

Conclusions: Researchers and policymakers need to become better aware of the broader set of assumptions, biases and limitations in trials. Journals need to also begin requiring researchers to outline them in their studies. We need to furthermore better use RCTs together with other research methods.

Key messages

  • RCTs face a range of strong assumptions, biases and limitations that have not yet all been thoroughly discussed in the literature.
  • This study assesses the 10 most cited RCTs worldwide and shows that trials inevitably produce bias.
  • Trials involve complex processes – from randomising, blinding and controlling, to implementing treatments, monitoring participants etc. – that require many decisions and steps at different levels that bring their own assumptions and degree of bias to results.

Keywords: Randomised controlled trial, RCT, reproducibility crisis, replication crisis, bias, statistical bias, evidence-based medicine, evidence-based practice, reproducibility of results, clinical medicine, research design

Krauss, A. (2018) Why all randomised controlled trials produce biased results. Annals of Medicine, 50(4), 312-322. DOI: 10.1080/07853890.2018.1453233
Publisher (Open Access): https://www.tandfonline.com/doi/full/10.1080/07853890.2018.1453233

Australian Academy of Science – Code of Conduct

Posted by Admin on June 3, 2018
 

“This Code of Conduct and associated implementation plan, guidelines, policy and procedures have been developed to provide context and guidance to Academy Fellows, employees and others representing or otherwise involved with the Academy in its efforts to achieve its mission.

It’s great to see an Australian code of conduct specifically refer to bullying and harassment.

The document covers the Academy’s values, expectations and requirements regarding conduct, the policy principles on which the code and its implementation are based, and guidelines and procedures for responding to breaches of the code.
The Academy does not tolerate bullying and harassment and has a commitment to investigating and where warranted acting on reported or alleged instances of bullying and harassment in a prompt and decisive manner…”

Access the Code of Conduct and related materials

Nine pitfalls of research misconduct – Science (Aaron D. Robinson | May 2018)

Posted by Admin on May 28, 2018
 

Academic leaders must audit departments for flaws and strengths, then tailor practices to build good behaviour, say C. K. Gunsalus and Aaron D. Robinson.

One of us (C.K.G.) teaches leadership skills and works with troubled departments. At almost every session, someone will sidle up, curious about a case study: they want to know how what happened at their university came to be known externally. Of course, it didn’t.

A recommended read for research centre directors and anyone who facilitates research integrity professional development activities. The key point: small missteps can lead to awful outcomes.

From what we’ve observed as a former university administrator and consultant (C.K.G.) and as a graduate student and working professional (A.D.R.), toxic research environments share a handful of operational flaws and cognitive biases. Researchers and institutional leaders must learn how these infiltrate their teams, and tailor solutions to keep them in check.
People who enter research generally share several values. Honesty, openness and accountability come up again and again when C.K.G. asks researchers to list what makes a good scientist. The US National Academies of Sciences, Engineering, and Medicine says that these values give rise to responsibilities that “make the system cohere and make scientific knowledge reliable”1. Yet every aspect of science, from the framing of a research question through to publication of the manuscript, is susceptible to influences that can counter good intentions.

Read the rest of this discussion piece

What factors do scientists perceive as promoting or hindering scientific data reuse? – LSE Impact Blog (Renata Gonçalves Curty, et al | March 2018)

Posted by Admin on May 17, 2018
 

Increased calls for data sharing have formed part of many governments’ agendas to boost innovation and scientific development. Data openness for reuse also resonates with the recognised need for more transparent, reproducible science. But what are scientists’ perceptions about data reuse? Renata Gonçalves Curty, Kevin Crowston, Alison Specht, Bruce W. Grant and Elizabeth D. Dalton make use of existing survey data to analyse the attitudes and norms affecting scientists’ data reuse. Perceived efficiency, efficacy, and trustworthiness are key; as is whether scientists believe data reuse is beneficial for scientific development, or perceive certain pressures contrary to the reuse of data. Looking ahead, synthesis centres can be important for supporting data-driven interdisciplinary collaborations, and leveraging new scientific discoveries based on pre-existing data.

There can be real societal benefits from data sharing, which is among the reasons why many research funding bodies require (or at least encourage) funded researchers to share their data. But it is not without its research ethics and research integrity challenges; the idea of sharing can be a source of disquiet for some researchers. Understanding why, and supporting practice in this area, would increase the amount of data that is shared. We have gathered here a list of resource items about data sharing.

“If I have seen further, it was by standing upon the shoulders of giants.” This quote, attributed to Sir Isaac Newton, expresses the cumulative and synergistic nature of the growth of science. Intellectual progress and major scientific achievements are built upon the contributions of previous thinkers and discoveries. Thus the scientific enterprise thrives upon openness and collaboration.
The unrestricted sharing of research outputs is increasingly seen as critical for scientific progress. The calls for data sharing in particular, aligned with investment in infrastructures for housing research data, have been part of many governments’ agendas to boost innovation and scientific development, while optimising resources. The ability of researchers to access and build upon previous knowledge has thus evolved from elementary access to final published manuscripts and research reports, to the capability of accessing different outputs produced throughout the research lifecycle, including digital data files.
There have been a number of promising developments in funding bodies’ policies promoting and requesting compliance with data sharing requirements to ensure preservation of and access to scientific data for further reuse. In the US, the Data Observation Network for Earth (DataONE), supported by the National Science Foundation (NSF), is committed to broadening education on data-related issues (e.g. data documentation, data citation), as well as to providing standards/guidelines and sustainable cyberinfrastructure to secure openness, persistence, robustness, findability, and accessibility for environmental science data…

Read the rest of this discussion piece
