Is human research ethics review a form of out of date, inefficient and ineffective regulation?

As I reached page 35 of the latest NEAF application for the next HREC meeting, I wondered, with some dismay, whether the system we are using is a form of regulation that was rejected decades ago in most other contexts. (I was dismayed because I had chaired the NHMRC working party that developed the NEAF!)

On 6 July 1988, the Piper Alpha North Sea oil rig was destroyed by a fire that killed 167 workers, in spite of a regulatory safety regime designed to prevent such disasters. The report of the inquiry into the disaster recommended significant changes to that regulatory scheme. Central was the proposal to no longer regulate by imposing prescriptive requirements but instead to require operators to submit a “safety case” to the regulating authority showing how they would meet the safety requirements by reducing risks to levels as low as reasonably practicable (ALARP). No longer was compliance to be measured against pedantic and pernickety checklists of prescriptive requirements, because these had proved both inefficient and ineffective. Instead, compliance was measured by reliance on the operator’s innovation in meeting standards and commitment to safety.

Is this what we need to do in research ethics review? Are we at present, by relying on instruments like the NEAF (and others), burdening every applicant indiscriminately with prescriptive requirements about how to tell reviewers what researchers are planning to do? Granted, the requirements are all derived from the National Statement, but is a focus on meeting them the only, or the most important, criterion for a good application?

We have little evidence that ethics review changes outcomes. There are now only a few reported examples of unethical or scandalous research conduct, but perhaps this was always the case. The impact of these cases far outweighs their low frequency, and the fact that a few do occur could be argued as evidence of either the ineffectiveness of ethics review (it did not prevent them) or its effectiveness (it allowed only these few). But there are other important effects. In recent years, there has been much discussion about the adversarial atmosphere in which human research ethics review is often conducted. Is this evidence of inefficiency and, if so, is inefficiency linked to ineffectiveness? In other contexts, such as oil exploration, extensive time spent filling out pedantic checklists and ever-expanding work hours for teams of costly inspectors bred superficial compliance, corner cutting and workarounds. Can such resistance to, or rejection of, the process because of its cost and burden lead to its ineffectiveness and, arguably, to disasters like Piper Alpha? Similarly, perhaps, with research ethics review: the time burdens of an application and review process that is seen as unnecessary breed irritation, frustration, disdain and resistance which, in turn, mean ineffectiveness.

In oil rig safety, the renewed recognition was that measures of effectiveness need to include effects on operators’ attitudes. Greater safety could be demonstrated not only by a reduction in accidents but also by a change in the attitudes of operators. Their motivation for a safe operation came not from external compulsion – this is what I’ve been told to do – but from internal conviction – this is the best way to conduct this operation.

So what could we do in human research ethics review? The model of responsive and smart regulation shows that the keys are:

* Giving more scope for regulatees to tell their story: to describe how they will meet and how they have met the relevant standards

* Relying more heavily on the motivation engendered in regulatees by forces other than regulation: public opinion, the market, industry, international standards and a sense of public responsibility

* Changing the function of the regulator to one of steering not rowing.

In human research ethics review, the application process could be changed to give far more scope for researchers to describe how they will meet a set of criteria derived from the National Statement.

More radically, could we take the finality out of the initial review by instigating a system in which:

* The outcome of the first review is advice from reviewers, agreed to by researchers, on how to conduct the project

* Progress reports show that the advice is being followed

* Agreements are made to adapt the advice when needed, in response to foreseen or unforeseen events

* A completion report demonstrates conformity with the agreed advice and generates, in turn, approval from the reviewer for publication and related purposes.

Currently, reviews of national human research guidelines are afoot in the USA and Australia. Now may be an opportune time to re-imagine a system of human research ethics review.

Colin Thomson
Professor, Academic Leader Health
Law and Ethics, Graduate School of Medicine, University of Wollongong,
Director, Houston Thomson Pty Ltd.
Colin’s AHEC profile

References

Healy, J. and Braithwaite, J. (2006) Designing safer health care through responsive regulation. MJA, 15 May 2006: 184–187.

Baldwin, R. and Black, J. (2008) Really Responsive Regulation. MLR 71(1): 59–94.

Gunningham, N. (2009) Environment Law, Regulation and Governance: Shifting Architectures. Journal of Environmental Law 21(2): 179–212.

Gunningham, N., Grabosky, P. and Sinclair, D. (1998) Smart Regulation: Designing Environmental Policy. Oxford: Oxford University Press.

This blog may be cited as
Thomson, C. (2015, 23 September) Is human research ethics review a form of out of date, inefficient and ineffective regulation? AHRECS Blog. Retrieved from https://ahrecs.com/human-research-ethics/is-human-research-ethics-review-a-form-of-out-of-date-inefficient-and-ineffective-regulation
