ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Research Ethics Monthly


Is human research ethics review a form of out of date, inefficient and ineffective regulation?

Posted by Admin in Human Research Ethics on September 22, 2015 / 3 Comments

As I reached page 35 of the latest NEAF application for the next HREC meeting, I wondered, with some dismay, whether the system we are using is a form of regulation that was rejected decades ago in most other contexts. (I was dismayed because I had chaired the NHMRC working party that developed the NEAF!)

On 6 July 1988, the Piper Alpha North Sea oil rig was destroyed by a fire that killed 167 workers, in spite of a regulatory safety regime designed to prevent such disasters. The report of the inquiry into the disaster recommended significant changes to that regulatory scheme. Central was the proposal to no longer regulate by imposing prescriptive requirements but instead to require operators to submit a “safety case” to the regulating authority showing how they would meet the safety requirements by reducing risks to as low as reasonably practicable (ALARP) levels. No longer was compliance to be measured by pedantic and pernickety checklists tied to prescriptive requirements, because these had proved both inefficient and ineffective. Instead, compliance was measured by reliance on the operator’s innovation in meeting standards and commitment to safety.

Is this what we need to do in research ethics review? Are we at present, by relying on instruments like the NEAF (and others), burdening every applicant indiscriminately with prescriptive requirements of how to tell the reviewers what the researchers are planning to do? Granted, the requirements are all derived from the National Statement but is a focus on meeting those the only or the most important criterion for a good application?

We have little evidence that ethics review changes outcomes. There are now only a few reported examples of unethical or scandalous research conduct, but perhaps that was always the case. The impact of these cases far outweighs their low frequency, yet the fact that a few still occur can be read either as evidence of the ineffectiveness of ethics review (it did not prevent them) or of its effectiveness (it allowed only these few). There are other important effects, too. In recent years, there has been much discussion of the adversarial atmosphere in which human research ethics review is often conducted. Is this evidence of inefficiency and, if so, is inefficiency linked to ineffectiveness? In other contexts, such as oil exploration, the extensive time spent filling out pedantic checklists and the ever-expanding work hours of teams of costly inspectors bred superficial compliance, corner cutting and workarounds. Can such resistance to, or rejection of, the process because of its cost and burden lead to its ineffectiveness and, arguably, to disasters like Piper Alpha? Similarly, perhaps, with research ethics review: the time burdens of an application and review process that is seen as unnecessary breed irritation, frustration, disdain and resistance, which, in turn, mean ineffectiveness.

In oil rig safety, the renewed recognition was that measures of effectiveness need to include effects on operators’ attitudes. Greater safety could be demonstrated not only by a reduction in accidents but also by a change in the attitudes of operators. Their motivation for a safe operation came not from external compulsion – this is what I’ve been told to do – but from an internal conviction – this is the best way to conduct this operation.

So what could we do in human research ethics review? The model of responsive and smart regulation shows that the keys are:

* Giving more scope for regulatees to tell their story: to describe how they will meet and how they have met the relevant standards

* Relying more heavily on the motivation engendered in regulatees by forces other than regulation: public opinion, the market, industry, international standards and a sense of public responsibility

* Changing the function of the regulator to one of steering not rowing.

In human research ethics review, the application process could be changed to give far more scope for researchers to describe how they will meet a set of criteria derived from the National Statement.

More radically, could we take the finality out of the initial review by instigating a system in which:

* The outcome of the first review is advice from reviewers, agreed to by researchers, on how to conduct the project;

* Progress reports show that the advice is being followed;

* Agreements are made to adapt the advice when needed, in response to foreseen or unforeseen events;

* A completion report demonstrates conformity with the agreed advice and generates, in turn, approval from the reviewer for publication and related purposes.

Currently, reviews of national human research guidelines are afoot in the USA and Australia. Now may be an opportune time to re-imagine a system of human research ethics review.

Colin Thomson
Professor, Academic Leader Health
Law and Ethics, Graduate School of Medicine, University of Wollongong,
Director, Houston Thomson Pty Ltd.
Colin’s AHEC profile


Healy, J. and Braithwaite, J. (2006) “Designing safer health care through responsive regulation”, MJA, 15 May 2006: 184-187.

Baldwin, R. and Black, J. (2008) “Really Responsive Regulation”, MLR 71(1): 59-94.

Gunningham, N. (2009) “Environment Law, Regulation and Governance: Shifting Architectures”, Journal of Environmental Law 21(2): 179-212.

Gunningham, N., Grabosky, P. and Sinclair, D. (1998) Smart Regulation: Designing Environmental Policy. Oxford: Oxford University Press.

This blog may be cited as
Thomson, C. (2015, 23 September) Is human research ethics review a form of out of date, inefficient and ineffective regulation? AHRECS Blog. Retrieved from

While I am sympathetic to the notion of change and improvement – and certainly to dealing with investigator complaints of inefficiencies – I have yet to see productive proposals for how changes to ethics regulation can be evaluated. Almost every proposed change – be it accreditation, harmonisation initiatives or other steps – is made without any suggestion of how its benefits will be shown. How can we develop or improve systems if we are unable to demonstrate this? We end up resorting to criticisms and propositions, with no evidence to back up any approach.

If we are to make changes to regulatory systems, this needs to go hand in hand with work to identify ways in which improvements in the system can be shown (or ways in which the system is made worse). Even when there is disagreement over what might be meant by ‘quality’ in ethics review processes, it should be possible to collect routine metrics or other forms of information that could show changes over time or between locations.

As a first step, we recently conducted a scoping review to see what empirical work has been done to evaluate ethics review procedures, which found a vast array of metrics considered. What is needed, at least as a preliminary step, is agreement on what information would be useful to collect, so that we can begin a process of evidence-informed decision-making about appropriate regulatory mechanisms.

Bento S says:

Hoorah! The most interesting bit is the proposal toward the end to use an end-project approval model rather than a start-project model, although I note that this may present during-the-project authority and enforcement issues, which, as a lawyer, I would have expected Prof Thomson to consider. That is, if it’s just advice (even formally agreed), rather than approval, then researchers may feel less bound by it or it may be harder to take disciplinary action when necessary (e.g. one can withdraw approval, but if there is no approval, then what does one withdraw??).

On the other hand, the whole argument is that compliance and enforcement may be the wrong things to focus on if the goal is a different kind of system for conduct and oversight of research.

On the issue of the application form, rumours are that the revision to the NEAF is embedded in a philosophy of narrative responses (what are the ethical issues and tell us how you will ensure that they are addressed) rather than a tick-box mentality, which rumours we can only hope are correct.

Great post. Having just finished a low-risk hospital project in 52 countries, it has been ‘interesting’ to see the different regulations worldwide. Without a doubt the Australian system was the most admin-heavy and inefficient, rivalled only by the UK in our experience.
