Unnatural justice: Public allegations could cause significant harm to vital clinical trial activity


Nik Zeps 

Summary

A recent allegation by Public Citizen, featured on the AHRECS latest news webpage, implies that a clinical trial was unethical in both design and conduct. In this blog I examine the nature of the allegation in the context of the type of trial being criticised. Whilst it is important that any research be open to scrutiny and, where appropriate, criticism, how this is done matters. I outline why I believe that in this instance the allegation is problematic, not only because it denies natural justice to the researchers but because it makes assertions about an entire methodology that is essential for avoiding wasteful and potentially harmful medical practices. At a time when there is almost a war between science and some sectors of the community, such allegations have the potential to create very real harm.

Why do we need to do clinical trials?

Clinical trials are the cornerstone of how we determine whether a particular clinical intervention is safe and effective. Although the term "clinical trial" is often applied as a collective noun to a group of activities, it is better understood as a methodology. That is, a clinical research project is defined as a clinical trial if it employs one of several key methodologies that eliminate bias, such as randomisation or blinding of observers and/or participants. Most people think of clinical trials as being primarily for novel interventions such as new drugs and devices. However, some of the most important trials are comparisons of existing treatments that are already in widespread use. These so-called Comparative Effectiveness Trials (CETs) are particularly important in addressing the lack of evidence for routine care discussed below.

Clinical trials are incredibly powerful in ensuring that what we observe experimentally is real and not due to some confounding factor. If we relied on observation alone, we might miss important non-random factors that influence outcomes. A good example is whether Vitamin E supplements prevent heart disease. Although there was a biologically plausible rationale for an effect of Vitamin E, and observational studies had suggested a benefit, when nearly 40,000 women were randomised between Vitamin E and placebo in the Women's Health Study over a 12-year period, no beneficial effect was found.1 Similar findings emerged from a related study of men and women with existing vascular disease or diabetes, which showed a small additional risk of heart failure in the supplement group.2
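To make the point about confounding concrete, here is a minimal sketch, not drawn from the studies cited above, in which a hypothetical supplement has no real effect on outcomes yet appears protective in an observational comparison because a hidden "health-consciousness" factor drives both supplement use and lower baseline risk. Randomised allocation, by contrast, is independent of that hidden factor. All names and numbers in the code are invented for illustration only.

```python
# Toy simulation: why randomisation protects against confounding.
# All numbers are invented; the supplement has NO true effect here.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

health_conscious = rng.random(n) < 0.5                 # hidden confounder
baseline_risk = np.where(health_conscious, 0.05, 0.10)  # lower risk if health-conscious

# Observational "study": supplement use correlates with the confounder.
takes_supplement = rng.random(n) < np.where(health_conscious, 0.7, 0.3)
event_obs = rng.random(n) < baseline_risk                # outcome ignores the supplement
obs_diff = event_obs[takes_supplement].mean() - event_obs[~takes_supplement].mean()

# Randomised trial: allocation is independent of the confounder.
randomised = rng.random(n) < 0.5
event_rct = rng.random(n) < baseline_risk
rct_diff = event_rct[randomised].mean() - event_rct[~randomised].mean()

print(f"Observational risk difference: {obs_diff:+.3f}  (biased by confounding)")
print(f"Randomised risk difference:    {rct_diff:+.3f}  (close to the true effect of zero)")
```

Running the sketch, the observational comparison reports an apparent risk reduction of roughly two percentage points for a supplement that does nothing, while the randomised comparison correctly shows essentially no difference.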

One might think, then, that most clinical practice should be based on clinical trial evidence and, ideally, on evidence synthesised from several studies that have examined a particular intervention through highly reproducible and comparable methodologies. Sadly, this is not the case. One estimate of the proportion of common medical interventions with such 'level 1' evidence indicates that as few as 18% qualify as 'evidence based'.3 As CETs involve interventions that are already in widespread use, there has been some debate as to whether they should be treated in the same way as studies involving untried and potentially riskier interventions. Where there is no evidence that one intervention is better than another, a legitimate question arises as to whether such trials can be considered no more risky than routine care, since they involve exposure to treatments that a person may well have received anyway. The only difference is that rather than being assigned a treatment based upon the opinion of the doctor, which may well rest on nothing sounder than traditional practice, the treatment will be given 'randomly'.

Comparative Effectiveness Trials and Risk

Although CETs do not involve experimental unapproved agents, there is something about treatments being given at 'random' that evokes a visceral response in some people, who equate it with being riskier. However, this can objectively be regarded as mere prejudice once one realises that the evidence for two or more treatments in widespread use is equivocal. It can then be argued that there is genuine equipoise in terms of the actual risk of being formally randomised to one or other treatment, as opposed to the 'natural' randomisation that occurs when a person happens to be treated by one practitioner or another, either of whom might prefer one of the treatments purely on the basis of their training or opinion. Such a situation seems inherently unpalatable because it implies that doctors may not actually know what is best for the patient, but that does not make it untrue.

Given that such clinical trials pose no more risk than routine care, is it then possible to manage them with a lighter touch than we might for interventions with unknown risk? What level of disclosure is required for a person to give their consent to participate in such a trial? In Australia, the Australian Clinical Trials Alliance (ACTA) has been exploring these issues as part of an approach to enhance the 'embedding' of research into routine clinical care models. A comprehensive review of modified approaches to consent from the ACTA group has been published in the journal Anesthesiology4 with an accompanying editorial.5 Using several examples from real-world CETs, Symons et al. outline that in some instances a waiver of consent to participate in certain types of studies can be argued to be ethically defensible. In a related article, Welch et al.6 argue that traditional forms of consent, if applied to CETs, could result in the unethical situation of certain populations being excluded from participating in, and therefore contributing important data about, certain types of clinical care.

It was therefore somewhat worrying to see a recent AHRECS newsfeed item in which very serious allegations of misconduct were made against exactly the kind of trial we have been discussing. The allegations were made by Michael Carome, a medical doctor, through his organisation Public Citizen. In the allegation he states that the clinical trial in question was 'reckless' and that there were serious ethical and regulatory lapses. These are very serious allegations to make about a group of researchers with high standing in the medical and research community. The trial in question was published in 2019 in the New England Journal of Medicine7 and in 2020 in the Lancet,8 both widely recognised as among the leading medical journals in the world. The allegations were submitted to the National Institutes of Health, which funded the study, as well as to the Office for Human Research Protections and the Food and Drug Administration.

The trial in question examined the safety and efficacy of three commonly used intravenous anticonvulsants delivered in an emergency setting to people with convulsive status epilepticus that was unresponsive to first-line treatment with benzodiazepines. The researchers clearly outlined in their paper that there was limited evidence for the use of the three intravenous second-line treatments, only one of which was FDA approved, and none for use in children. The Established Status Epilepticus Treatment Trial (ESETT) was conducted under the exception from informed-consent requirements for emergency research (FDA regulation 21 CFR 50.24). A total of 57 hospital emergency departments across the United States were involved, meaning that the protocol went through numerous Institutional Review Boards (IRBs), all of which had the opportunity to review it and determine whether the trial was ethical. The results were published and, as far as we are aware, the findings have been used to inform practice, the major conclusion being that all three approaches were equally safe and had similar outcomes. Importantly, the findings of the trial differed from those of observational studies, which had favoured valproate. That this was not supported in a randomised trial is important, as it indicates that the apparently more favourable outcome may reflect a bias in prescribing. The trial was stopped early because it met prespecified futility rules, itself an indicator of a sound ethical design.
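For readers unfamiliar with futility rules, the following is a toy sketch of the general idea only. It is not the ESETT statistical design (a response-adaptive design described in the Lancet paper), and the drug names are used purely as labels; the interim counts, margin and threshold are all invented. The point is simply that a prespecified rule asks, at a planned interim look, how plausible it still is that any arm is meaningfully better than the others, and stops randomisation when that probability becomes very low.

```python
# Toy sketch of a prespecified futility rule (assumed, NOT the ESETT design).
# Beta-Binomial posteriors per arm; stop if the probability that some arm
# beats the rest by a clinically meaningful margin is implausibly low.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interim data: (responders, patients) per arm - invented numbers.
interim = {"levetiracetam": (47, 100), "fosphenytoin": (45, 100), "valproate": (49, 100)}

MARGIN = 0.15          # clinically meaningful absolute difference (assumption)
FUTILITY_PROB = 0.05   # stop if P(any arm superior by MARGIN) falls below this
N_DRAWS = 200_000

# Posterior draws for each arm's response rate: Beta(1 + successes, 1 + failures).
draws = {arm: rng.beta(1 + r, 1 + n - r, N_DRAWS) for arm, (r, n) in interim.items()}
rates = np.column_stack(list(draws.values()))

# Probability that the best arm exceeds the second best by at least MARGIN.
sorted_rates = np.sort(rates, axis=1)
p_superior = np.mean(sorted_rates[:, -1] - sorted_rates[:, -2] >= MARGIN)

print(f"P(some arm superior by >= {MARGIN:.2f}): {p_superior:.3f}")
if p_superior < FUTILITY_PROB:
    print("Prespecified futility rule met: stop enrolment.")
else:
    print("Continue the trial.")
```

Whatever the particular rule, the design choice it illustrates is the same: deciding before the trial starts when continued randomisation would serve no scientific purpose, so that participants are not exposed once a question is effectively answered.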

However, notwithstanding these findings, Dr Carome presents his opinion that there was an inherent lack of equipoise between the three arms of the trial, because patient-specific factors are usually used to determine treatment, and that randomisation therefore created increased risk for the participants. That is, patient-specific factors were not considered, and people who were enrolled could be randomised to receive potentially sub-optimal treatment. He proposes instead that one or more arms should have been included reflecting the prevailing practices based on the opinions of neurologists who treat this condition. This is a fundamental misunderstanding of the methodology and purpose of CETs. Tradition and opinion have been shown time after time to be poor surrogates for a well-powered randomised clinical trial as a basis for clinical policy.

Thompson and Schoenfeld concluded in a review of such studies that "Randomized intervention trials can be safely conducted and monitored using two treatments that lie within the range of usual-care practices if both approaches are considered prudent and good care for the target population".9 In an earlier paper, Liza Dawson and her colleagues outlined the pitfalls of establishing what constitutes 'usual care', noting that allowing doctors to decide what a 'usual care' arm looks like can mean the intervention is not compared evenly against a well-controlled standard.10 That is, a better study design is one that standardises the arms as much as possible. In short, Dr Carome's allegation rests on establishing that experts in the field would not agree that the three arms used were in equipoise, but he does not provide evidence that this is the case.

Another major allegation is that, because enrolment was without consent, the trial fundamentally breached the rights of participants, notwithstanding that a person whose condition is refractory to first-line treatment can never, under any circumstances, give consent to a trial examining treatments for it. This argument alone would rule out research in most of the patients treated in emergency department (ED) and intensive care (ICU) settings. Whilst Dr Carome has provided a lengthy and carefully referenced submission to the authorities and funders, there is nothing in his allegation that would not already have been considered by the IRBs. Indeed, he clearly implies that those who conducted the reviews at the 57 centres must not have done so properly, or were not qualified to do so.

The right to ask questions

It is important that questions are raised about the conduct of studies, and robust, independent assessment will indeed help ensure that trials are conducted safely. However, the public nature of the allegations does not permit due process for those accused. This is a concern, as such public accusations create a significant risk of eroding public trust in medicine at a time when it is at its lowest ebb for many years. In a recent feature article in Science,11 Charles Piller provides several examples of clinical trials similar to the one discussed here, and also discusses this case. One such example is a controversial trial of supportive care for neonates, the SUPPORT study, in which it was alleged that premature babies were placed at increased risk of death or blindness by being included in the study. A rebuttal of the allegations that the trial was reckless and unethical was published on the Hastings Center website back in 2013, for those who are interested. However, the camps supporting or criticising this trial are, and will remain, split about whether it should have been conducted. Piller's article focuses on whether the treatments being studied can be regarded as 'usual care'. He quotes Dr Charles Natanson, a physician and expert on trial design at the NIH Clinical Center, who said: "It's just common sense. Why study two things inside of a trial that nobody does outside of the trial?" Dr Carome is quoted as saying that the IRB system in the US is broken. This is a depressing view of the system.

Criticism that leads to reform and improves how clinical trials are done is welcome. However, there is a real risk that public debates or accusations of wrongdoing lead to the cessation of all such research rather than to reforms that will improve it. Investment in trial design, and greater training and resources for both those conducting trials and those regulating them, is needed. Clinical trials are not easy to conduct, and it would be far easier for doctors simply not to do them and just adopt existing practices, however little evidence supports them. The risk of the current debate is that these types of trials stop altogether. That is something no one should wish for.

Acknowledgements

Thank you to Tanya Symons and Nicola Straiton for your inspiration, support and helpful suggestions in writing this.

References

  1. Lee IM, Cook NR, Gaziano JM, Gordon D, Ridker PM, Manson JE, Hennekens CH, Buring JE: Vitamin E in the primary prevention of cardiovascular disease and cancer: the Women's Health Study: a randomized controlled trial. JAMA 2005; 294: 56-65
  2. Lonn E, Bosch J, Yusuf S, Sheridan P, Pogue J, Arnold JM, Ross C, Arnold A, Sleight P, Probstfield J, Dagenais GR: Effects of long-term vitamin E supplementation on cardiovascular events and cancer: a randomized controlled trial. JAMA 2005; 293: 1338-47
  3. Ebell MH, Sokol R, Lee A, Simons C, Early J: How good is the evidence to support primary care practice? Evidence Based Medicine 2017; 22: 88
  4. Symons TJ, Zeps N, Myles PS, Morris JM, Sessler DI: International Policy Frameworks for Consent in Minimal-risk Pragmatic Trials. Anesthesiology 2020; 132: 44-54
  5. Kharasch ED: Innovation in Clinical Research Regulation. Anesthesiology 2020; 132: 1-4
  6. Welch MJ, Lally R, Miller JE, Pittman S, Brodsky L, Caplan AL, Uhlenbrauck G, Louzao DM, Fischer JH, Wilfond B: The ethics and regulatory landscape of including vulnerable populations in pragmatic clinical trials. Clin Trials 2015; 12: 503-10
  7. Kapur J, Elm J, Chamberlain JM, Barsan W, Cloyd J, Lowenstein D, Shinnar S, Conwit R, Meinzer C, Cock H, Fountain N, Connor JT, Silbergleit R: Randomized Trial of Three Anticonvulsant Medications for Status Epilepticus. New England Journal of Medicine 2019; 381: 2103-2113
  8. Chamberlain JM, Kapur J, Shinnar S, Elm J, Holsti M, Babcock L, Rogers A, Barsan W, Cloyd J, Lowenstein D, Bleck TP, Conwit R, Meinzer C, Cock H, Fountain NB, Underwood E, Connor JT, Silbergleit R: Efficacy of levetiracetam, fosphenytoin, and valproate for established status epilepticus by age group (ESETT): a double-blind, responsive-adaptive, randomised controlled trial. Lancet 2020; 395: 1217-1224
  9. Thompson BT, Schoenfeld D: Usual care as the control group in clinical trials of nonpharmacologic interventions. Proc Am Thorac Soc 2007; 4: 577-82
  10. Dawson L, Zarin DA, Emanuel EJ, Friedman LM, Chaudhari B, Goodman SN: Considering usual medical care in clinical trial design. PLoS Med 2009; 6: e1000111
  11. Piller C: Failure to protect? Science 2021; 373: 729-733

This post may be cited as:
Zeps, N. (25 October 2021) Unnatural justice: Public allegations could cause significant harm to vital clinical trial activity. Research Ethics Monthly. Retrieved from: https://ahrecs.com/unnatural-justice-public-allegations-could-cause-significant-harm-to-vital-clinical-trial-activity/
