Resource Library

Why all randomised controlled trials produce biased results (Papers: Alexander Krauss | March 2018)

Posted by Admin on June 7, 2018
 

Abstract

Background: Randomised controlled trials (RCTs) are commonly viewed as the best research method to inform public health and social policy. Usually they are thought of as providing the most rigorous evidence of a treatment’s effectiveness without strong assumptions, biases and limitations.

Objective: This is the first study to examine that hypothesis by assessing the 10 most cited RCT studies worldwide.

Data sources: These 10 RCT studies with the highest number of citations in any journal (up to June 2016) were identified by searching Scopus (the largest database of peer-reviewed journals).

Results: This study shows that these world-leading RCTs that have influenced policy produce biased results by illustrating that participants’ background traits that affect outcomes are often poorly distributed between trial groups, that the trials often neglect alternative factors contributing to their main reported outcome and, among many other issues, that the trials are often only partially blinded or unblinded. The study here also identifies a number of novel and important assumptions, biases and limitations not yet thoroughly discussed in existing studies that arise when designing, implementing and analysing trials.

Conclusions: Researchers and policymakers need to become more aware of the broader set of assumptions, biases and limitations in trials. Journals also need to begin requiring researchers to outline them in their studies. We furthermore need to make better use of RCTs together with other research methods.

Key messages

  • RCTs face a range of strong assumptions, biases and limitations that have not yet all been thoroughly discussed in the literature.
  • This study assesses the 10 most cited RCTs worldwide and shows that trials inevitably produce bias.
  • Trials involve complex processes – from randomising, blinding and controlling, to implementing treatments, monitoring participants etc. – that require many decisions and steps at different levels that bring their own assumptions and degree of bias to results.

Keywords: Randomised controlled trial, RCT, reproducibility crisis, replication crisis, bias, statistical bias, evidence-based medicine, evidence-based practice, reproducibility of results, clinical medicine, research design

Krauss, A. (2018) Why all randomised controlled trials produce biased results. Annals of Medicine, 50(4), 312-322. DOI: 10.1080/07853890.2018.1453233
Publisher (Open Access): https://www.tandfonline.com/doi/full/10.1080/07853890.2018.1453233
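
A brief editorial aside (ours, not the paper's): to make concrete what "poorly distributed between trial groups" can look like, the minimal Python sketch below simulates a single background trait, randomises 50 participants into two arms and computes the standardised mean difference (SMD) between the arms. The simulated data, variable names and the 0.1 rule of thumb are illustrative assumptions only.

  # Illustrative sketch (not from Krauss 2018): checking covariate balance
  # between two randomised trial arms with the standardised mean difference (SMD).
  import random
  import statistics

  random.seed(42)

  # Simulate one background trait (e.g. age) for 50 participants, then randomise them.
  ages = [random.gauss(55, 12) for _ in range(50)]
  arms = [random.choice(["treatment", "control"]) for _ in ages]
  treatment = [a for a, arm in zip(ages, arms) if arm == "treatment"]
  control = [a for a, arm in zip(ages, arms) if arm == "control"]

  def standardised_mean_difference(x, y):
      # SMD = difference in group means divided by the pooled standard deviation.
      pooled_sd = ((statistics.variance(x) + statistics.variance(y)) / 2) ** 0.5
      return (statistics.mean(x) - statistics.mean(y)) / pooled_sd

  smd = standardised_mean_difference(treatment, control)
  # An assumed, commonly cited rule of thumb flags |SMD| > 0.1 as imbalance.
  print(f"SMD for age: {smd:.3f}", "imbalanced" if abs(smd) > 0.1 else "balanced")

Even under genuine randomisation, a small trial will often show noticeable imbalance on some background traits, which is the kind of issue the paper documents across the most cited RCTs.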

Australian Academy of Science – Code of Conduct

Posted by Admin on June 3, 2018
 

“This Code of Conduct and associated implementation plan, guidelines, policy and procedures have been developed to provide context and guidance to Academy Fellows, employees and others representing or otherwise involved with the Academy in its efforts to achieve its mission.

It’s great to see an Australian code of conduct specifically refer to bullying and harassment.

The document covers the Academy’s values, expectations and requirements regarding conduct, the policy principles on which the code and its implementation are based, and guidelines and procedures for responding to breaches of the code.
The Academy does not tolerate bullying and harassment and has a commitment to investigating and where warranted acting on reported or alleged instances of bullying and harassment in a prompt and decisive manner…”

Access the Code of Conduct and related materials

Disgraced surgeon is still publishing on stem cell therapies – Science (Matt Warren | April 2018)

Posted by Admin on May 19, 2018
 

Paolo Macchiarini, an Italian surgeon, has been fired from two institutions and faces the retraction of many of his papers after findings of scientific misconduct and ethical lapses in his research—yet this hasn’t prevented him from publishing again in a peer-reviewed journal. Despite his circumstances, Macchiarini appears as senior author on a paper published last month investigating the viability of artificial esophagi “seeded” with stem cells, work that appears strikingly similar to the plastic trachea transplants that ultimately left most of his patients dead. The journal’s editor says he was unaware of Macchiarini’s history before publishing the study.

“I’m really surprised,” says cardiothoracic surgeon Karl-Henrik Grinnemo, one of the whistle-blowers who exposed Macchiarini’s misconduct at the Karolinska Institute (KI) in Stockholm. “I can’t understand how a serious editorial board can accept manuscripts from this guy.”

Macchiarini was once heralded as a pioneer of regenerative medicine because of his experimental transplants of artificial tracheas that supposedly developed into functional organs when seeded with a patient’s stem cells. But his career came crashing down after the Swedish documentary Experimenten showed the poor outcomes of his patients, all but one of whom have now died. (The lone survivor was able to have his implant removed.) Macchiarini was subsequently fired from KI, both the university and a national ethics board found him guilty of scientific misconduct in several papers, and Swedish authorities are now considering whether to reopen a criminal case against him.

Read the rest of this discussion piece

Science isn’t broken, but we can do better: here’s how – The Conversation (Alan Finkel | April 2018)

Posted by Admin on May 16, 2018
 

Every time a scandal breaks in one of the thousands of places where research is conducted across the world, we see headlines to the effect that “science is broken”.

But if it’s “broken” today, then when do we suggest it was better?

Point me to the period in human history where we had more brilliant people or better technologies for doing science than we do today. Explain to me how something “broken” so spectacularly delivers the goods. Convince me I ought to downplay the stunning achievement of – say – the detection of gravitational waves.

Read the rest of this discussion piece
