A rose by any other name….?

As both a researcher and a research administrator in healthcare, I find that one of the more vexing issues I have to deal with on an almost daily basis is how to manage what are termed quality assurance, quality improvement and audit activities. In its 2014 publication entitled “Ethical Considerations in Quality Assurance and Evaluation Activities” (the NHMRC QA guidance), the NHMRC suggests that these can be loosely gathered together under the umbrella term of Quality Assurance (QA) and/or evaluation. I believe this construct is wrong and reinforces a longstanding approach to ethics review that relies on the category of an investigative activity to determine the level of review that is applied. This approach is problematic and leads to some significant unintended consequences.

Most institutions appear to have made their own interpretations of the content and intention of the NHMRC QA guidance and still spend time defining whether an activity is research or QA/QI so as to be able to push it down one review pathway or another. Added to this is the frequently repeated canard that, if one wishes to publish a QA activity, then one requires ethics approval. The most common justification for this assertion is that journal editors demand it, creating circumstances in which low or negligible risk activities end up being screened by HREC offices and/or reviewed by HRECs even though the National Statement clearly indicates that this is not necessary (Sections 5.1.17–5.1.21).

How did we get here? Having served on the Australian Health Ethics Committee (AHEC) from 2006 to 2012, during which time I was involved in developing what was eventually published as the NHMRC QA guidance, I have something of an insider perspective. Whilst, with my colleagues, I was able to advocate successfully for the line “Irrespective of whether an activity is QA, evaluation or research, the activity must be conducted in a way that is ethical.”, I believe that we fundamentally failed to persuade our colleagues, or the country at large, that there is a better, more proportionate way to fulfil our responsibilities for oversight of these activities; specifically, a model that is more effective than simply categorising them as research, evaluation, QA or QI.

In this article, I present a potentially controversial argument that not only is there taxonomic ambiguity in distinctions between research and ‘non-research’ activity, such as QA and QI, but that quality assurance and quality improvement are themselves not the same. I argue that quality improvement is philosophically indistinguishable from what we recognise as research – or, said another way, research is a form of quality improvement. In contrast, activities that simply wish to benchmark an activity or outcome against a standard for internal purposes are not quality improvement – they are, more accurately, quality assurance.

There is a set of activities, such as audits, that are conducted as part of quality assurance or, in commercial terms, for the purpose of ‘quality control’. In the health care setting, these activities seek to collect data that may identify variations in care. The findings of these audits may then provide the incentive to seek out ways to provide better care, but they do not seek to identify, assess or implement improvements in delivery of care and they are certainly not research. More correctly, they should be understood as part of the routine business of a health care organisation and, by extension, the use of any personal data for these activities is closely related to the primary purpose for collecting the data in the first place – which is to provide best, or standard, care. Indeed, one could and should argue that collecting data to enable benchmarking and quality assurance is the primary purpose itself.

In contrast, subsequent attempts to introduce and/or assess improvements based on the findings of an audit, whether finding ways to improve compliance with hand hygiene or reducing adverse events arising from the provision of a medication to patients, are all attempts to improve the quality of the service provided. These efforts may also employ methods that are recognised as being indistinguishable from research methodology. If this leads to the activity being called research, then logic demands that we should regard research as a form of quality improvement.

This argument fundamentally challenges two aspects of the ‘standard’ model used in governance of investigative activity that takes place in a health care environment: first, it questions the legitimacy of the distinction between research and non-research activity and, second, it questions the prevailing idea that quality assurance and quality improvement are synonymous. I argue that these two errors, together, have utterly confused health care practitioners, researchers and reviewers, forcing them to make arbitrary and illogical distinctions where there are none and to ignore real distinctions where they do exist.

To review, my argument is that there is activity that is all about internal benchmarking against a standard, which is a form of ‘quality control’, and activity that sets out to improve the quality of health care services. The first activity is quality assurance and the second is quality improvement. Furthermore, I argue that, in the health sector, not only is quality improvement that employs research-type methods ‘research’, but research in this context is, itself, a form of quality improvement.

Perhaps you will agree with this argument and perhaps not, but, you might ask, why does it matter? It matters because we have created a three-tier system for providing oversight of these activities: (1) institutional oversight of ‘QA/QI’, (2) ethics review of low-risk research and (3) HREC review of greater than low risk research. This system is fundamentally misguided. What we should have is a system that calibrates oversight to risk. If an activity, whatever it is labelled, has no risk and is within the reasonable expectations of those who participate in it or whose data and information are used in it, then we should apply a low level of scrutiny[1] to it, with greater levels of scrutiny applied as the risk profile of the activity increases. There is a phrase that captures this idea perfectly: proportionality, or ‘proportional review’.

The failure to create a proportionate approach to managing these activities is no surprise when one considers the increasing scope of what is now deemed to require ethics review. The phenomenon of ‘ethics creep’ 1, where simple surveys are determined by a university or hospital HREC to require ethics approval, has distorted one of the original purposes of ethics review – which is to reduce risk. In stark contrast, surveys sent out by commercial companies to anyone who shops online or uses just about any service do not require review. David Asch and his colleagues brilliantly point out the absurdity of this inconsistency in a New England Journal of Medicine opinion piece entitled “Misdirections in informed consent – impediments to health care innovation” 2. In this thought experiment, two scenarios are presented in which a health service wishes to increase colorectal cancer screening rates. In the first scenario, the service simply sends a birthday card to each person who turns 50, including coupons for free drinks and, as a bit of whimsy, a book of crossword puzzles to read while on the toilet. In the second scenario, the service pre-books a screening appointment and provides a hotline so that the person can change it to a more convenient time. The service wants to record which pathway leads to more screening. The paper details the ways in which this apparently innocuous approach could lead to full-blown research ethics committee submissions and detailed consent documentation. Clearly, this is a quality improvement exercise aimed at improving rates of colonoscopy screening and thereby preventing the many cancers that are missed because people fail to take up screening opportunities. However, to ensure that we get the best data from this exercise, it makes sense to use the strongest methods, and these may include randomisation. For some, this converts the activity from ‘QI’ to research.

It is clear, at least to me, that this activity, whatever it is labelled, is minimal risk and should be managed through a low risk pathway for ethics review. Furthermore, the findings are likely to be relevant not only to the health service but also more broadly, so that other services can choose to adopt the more effective approach based on demonstrated utility. If so, then the activity might also commonly be categorised as research. Sadly, there is plenty of evidence that such research is being treated as more than low risk, an approach that slows the work and discourages busy clinicians and health services from doing this type of research.

However, there is some light at the end of the tunnel. The recent proposed changes to the levels of review (accompanied by the need to re-define levels of risk) and the expanded criteria for eligibility for exemption from review in the National Statement provide a pivot point for institutions to review and reform their approaches to managing these activities. Under the proposed section 5.16, it is possible that the project described by Asch et al. might be considered by an institution to be entirely exempt from ethics review by an HREC under sub-section (e):

5.16. Research that may be eligible for exemption includes research that:

  e. is conducted by or on behalf of a government department or agency, using data collected or generated by the government for non-research purposes, and the use of the information adheres to relevant privacy standards. This may include research that is designed to observe, analyse, evaluate or improve public service or public benefit programs.

In the David Asch example, government programmes to improve colonoscopic screening could be regarded as exempt. Alternatively, oversight of this programme could be managed under the revised section 5.12:

5.12. For research that carries only minimal risk (see 2.1.6), institutions may choose to establish other processes for review. These processes may include:

  a. review by a subcommittee or the Chair of an HREC;
  b. delegated review by a committee or person(s) within an institution;
  c. in a university setting, review or assessment at departmental level by the head of department;
  d. in a university setting, review or assessment by a departmental committee of peers (with or without external or independent members); and
  e. acceptance of a review process external to the institution (see Chapter 5.5).

In either scenario, whether the activity is labelled research or quality improvement is irrelevant. The key is whether the review of the activity can be aligned with the relevant guidance within the National Statement for review of research with minimal risk or other guidance for review of non-research activity.

The wonderful Richard Feynman was one of the most important and intriguing figures of the 20th century. He was renowned for skewering poor thinking and had a unique ability to arrive at simple truths and deliver them in as plain a manner as it is possible to achieve. In his book “The Pleasure of Finding Things Out” he recounts the times he went for walks with his father through the woods. His father would point to a bird and say “Do you know what that bird is? It’s a brown-throated thrush; but in Portuguese it’s a …in Italian a…, etc.” “Now,” he says, “you know in all the languages you want to know what the name of the bird is and when you’ve finished with all that,” he says, “you’ll know absolutely nothing whatever about the bird. You only know about humans in different places and what they call the bird. Now,” he says, “let’s look at the bird.”

We have an opportunity to implement reforms without actually requiring any changes to laws or ethics guidelines. Put simply, as Feynman’s father suggested, let’s stop trying to categorise activities according to whether they seem to be research or not and, instead, look at what is actually being done and find the best way to manage any risks it creates.

Special thanks to Jeremy Kenner for his contribution to the development and refinement of my thinking on these issues and for critical insights on this essay.

  1. Haggerty, K.D. Ethics Creep: Governing Social Science Research in the Name of Ethics. Qualitative Sociology 27, 391-414 (2004).
  2. Asch, D.A., Ziolek, T.A. & Mehta, S.J. Misdirections in Informed Consent – Impediments to Health Care Innovation. N Engl J Med 377, 1412-1414 (2017).

[1] It is the responsibility of the institution to create, maintain and monitor its mechanisms for oversight and review of all of the activities conducted under its auspices. Although, ultimately, national or, at least, intra-jurisdictional standardisation of these mechanisms is desirable, the first order of business is for each institution to get it right within the institution. One very useful first step is to ensure that HRECs never review research that is low risk or activity that is not research.

This post may be cited as:
Zeps, N. (30 November 2020) A rose by any other name….? Research Ethics Monthly. Retrieved from: https://ahrecs.com/a-rose-by-any-other-name/
