Health services are often operated by people who strive to improve the way they deliver care. In the public imagination, improvements arise from ‘breakthroughs’ such as the discovery of new disease mechanisms and of drugs or devices to address them. However, it is not just novel treatments that lead to better outcomes. Sadly, it is not widely recognised that eliminating sub-optimal or unwarranted variations in healthcare practice plays a major role in improving clinical outcomes. Indeed, I don’t recall a headline announcing an increase in operational efficiency in any health service; that is hardly exciting news, regardless of its value. Funders of healthcare are interested, though, and in a report entitled Exploring Healthcare Variation in Australia: Analyses Resulting from an OECD Study, published by the Australian Commission on Safety and Quality in Health Care in 2014, the authors stated that:
Unwarranted variation may also mean that scarce health resources are not being put to best use. As countries face increasing pressure on health budgets, there is growing interest in reducing unwarranted variation in order to improve equity of access to appropriate services, the health outcomes of populations, and the value derived from investment in health care.
All consumers of health care should therefore be interested in this and support those working toward improving health services. Unfortunately, doing this work is difficult and often unrewarding. The ethical imperative to do it is also often thwarted by the ‘ethics’ and governance framework that too often encumbers those doing it (Clay-Williams, 2018). It is also largely left to the NHMRC to fund the Health Services Research (HSR) and Comparative Effectiveness Research (CER) studies that generate evidence to reduce wasteful practice. In contrast, very little funding from health services themselves goes to these activities, despite their being the direct beneficiaries of the research.[1] Importantly, those engaged in HSR and CER constitute an increasingly large proportion of the total medical research endeavour in Australia, and by classification accounted for almost one third of NHMRC competitive funding in 2019.[2] This is despite the fact that these studies often take several years to complete, so the number of publications is smaller than in the life sciences. HSR is rarely published in the ‘higher impact’ journals, whereas some CER trials have outcomes so profound that they are of international significance and are published in widely read international journals. Pleasingly, this suggests that the criteria for assessment do not necessarily disadvantage such research in terms of competitiveness for funding, but it also reflects the fact that clinical trial funding from the NHMRC supports a great deal of CER.
Those doing HSR and CER often also work in health services as clinicians, which reduces the time they can devote to academic research and to building a competitive personal research portfolio. The NHMRC has implemented a “relative to opportunity” process in an attempt to address the almost impossible task of taking personal circumstances into account, but I doubt anyone is truly comfortable applying it across the breadth of candidates and disciplines. Indeed, it could be argued that it is a surprisingly unscientific and subjective approach to use in schemes that reward the quality of scientific approaches to major societal issues. In 2019, only 7% of Investigator Grants went to applicants identified as HSR researchers.
It is difficult to think of what could replace this system across all areas of scientific endeavour, but there is a possibility of rethinking how we fund and manage HSR and CER clinical trials. In both types of activity the end points are focussed on providing evidence to inform changes to clinical practice and health service delivery. The end users are therefore health care providers and their funders, and it would seem much more appropriate that they play a far greater role in judging what kind of research should be done, as well as the value of the outcomes of existing projects. The problem is that health services do not have the internal infrastructure and capability to manage research, and have no incentive or means to acquire them.
What is particularly important to reflect on here is that publication metrics and university-based career milestones are largely irrelevant to health services, and arguably should not be the drivers of why the work is done. It would be more appropriate to have a regular employment relationship between health services researchers and the health services, in a manner no different from clinical safety and quality activities. Sadly, health services have not seen the need to invest in this out of their operating budgets, and one can see why they would not if universities will do so. The problem, though, is that health service managers inevitably regard such activities as academic exercises with no direct relevance to routine health practice, and when budgets are tight any support rapidly evaporates.
Like other industries that are reliant on R&D, it could be argued that a defined proportion of all health funding should go to HSR and CER conducted and run within the health services themselves. In the UK the National Institute for Health Research (NIHR) was established in 2006 with £1 billion to do just this, representing just under 1% of the total National Health Service budget (£126 billion[3]). Current public expenditure on health in Australia is AUD $81.8 billion; the total health spend, including private and personal (out-of-pocket) expenditure, was $185 billion in 2017–18.[4] The combined NHMRC and MRFF expenditure on all HSR and clinical research was estimated at around $800 million in 2019, less than 0.5% of total health expenditure. The difference is that the Australian funding is largely administered through universities, not by the health services as in the UK.
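The proportions quoted above follow from simple arithmetic; a quick back-of-envelope check, using the figures as cited in the text (not independently verified), can be sketched as:

```python
# Funding shares as percentages of total health spend, using the
# figures quoted in the text above (illustrative only).
nihr_budget_gbp_bn = 1.0          # NIHR allocation, GBP billions
nhs_budget_gbp_bn = 126.0         # total NHS budget, GBP billions

aus_research_aud_bn = 0.8         # combined NHMRC + MRFF HSR/clinical spend, AUD billions
aus_total_health_aud_bn = 185.0   # total Australian health spend 2017-18, AUD billions

uk_share = 100 * nihr_budget_gbp_bn / nhs_budget_gbp_bn
aus_share = 100 * aus_research_aud_bn / aus_total_health_aud_bn

print(f"UK: {uk_share:.2f}% of health spend")         # just under 1%
print(f"Australia: {aus_share:.2f}% of health spend") # under 0.5%
```

This bears out the comparison in the text: roughly 0.8% of UK health spend flows through the NIHR, against roughly 0.4% in Australia, before considering who administers it.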
I would propose that researchers engaged in HSR and CER be employed by the health services themselves and regarded as intrinsic to their operations. In addition, I propose that these staff not publish papers under their own names but instead under the health service banners, either singly or as collaborations of organisations. In this way individual career progression would be based on demonstrated outcomes, in the same way that other activities in the health service are evaluated. Staff who do their job well would be eligible for ongoing employment, and career progression would come through building a demonstrated portfolio of achievement, attested to by employers in much the way professional references are provided. Success for individuals would therefore rest entirely on being able to show how they had contributed to productive activity within the organisation.
If this major change to how this type of research is operated were made, it might also remove one of the other major barriers that currently exist: the disproportionately burdensome ethical and governance requirements for such ‘research’, which is mostly treated as having the same risk profile as novel interventional studies. The aims of research are stated to differ from those of providing clinical care to patients, and this distinction is at the heart of the Declaration of Helsinki’s ethical principles as well as those of The Belmont Report, published in 1979. In the Australian National Statement on Ethical Conduct in Human Research, the distinction between clinical care and research is no longer drawn so explicitly.
For HSR and CER the distinction between routine practice and research is extremely blurred, particularly within the context of a self-improving, self-learning healthcare system in which a constant cycle of analysis of current clinical activity informs the delivery of healthcare into the future. The robust methodologies employed to do the analysis, and to test potential alternative practices aimed at improving care, are indistinguishable from those used for researching novel and potentially riskier interventions. However, the risk profile is completely different, particularly where the research involves one or more practices already in widespread use. What is needed is a more embedded framework that ensures ethical issues are addressed at a systemic level, rather than the existing ethics and governance system that treats such work as ‘other’. The 2019 draft Clinical Trials Governance Framework developed by the ACSQHC goes some way toward creating a culture in which this can be delivered, although it will likely require significant cultural change at most health services engaged in research.
We need a system that values those doing this work as core employees, and that is directly vested in the outcomes of the work and their implementation into improved practice as the prime demonstration of productivity. Such a cultural change would provide the drive to streamline the overly burdensome regulatory framework that currently exists, and a streamlined framework would deliver its own efficiency dividends in a positive cycle: more of this work done, wasteful practice avoided sooner, and data generated that brings real improvements to people’s lives. That would seem the very definition of an ethical outcome.
[1] ACSQHC/ACTA report on clinical trial benefits https://www.safetyandquality.gov.au/publications-and-resources/resource-library/economic-evaluation-investigator-initiated-clinical-trials-conducted-networks-final-report
[2] https://www.nhmrc.gov.au/file/14808/download?token=GAkwLHj0
[3] https://fullfact.org/health/spending-english-nhs/
[4] https://www.aihw.gov.au/reports/health-welfare-expenditure/health-expenditure-australia-2017-18/contents/summary
This post may be cited as:
Zeps, N. (04 June 2020) When research is the treatment: why the research/clinical care divide doesn’t always work. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/when-research-is-the-treatment-why-the-research-clinical-care-divide-doesnt-always-work