To some extent, when researchers reflect upon the harms associated with a project, they may limit their assessment of risk to the here and now and to identifiable individuals. In addition, for projects in the medical sciences, risks were long understood as predominantly physical, in the form of injury, infection or disability, and as relating to direct participants (e.g. persons who received an experimental pharmacological agent). This limited vision is not particularly surprising. One of the perverse consequences of requiring researchers to reflect on whether the potential benefits of research justify risk to participants is that some researchers are dissuaded from looking too carefully for risks and therefore avoid developing strategies for minimising those risks and mitigating possible harms. Even more perversely, this reluctance can trigger in human research ethics committees an unrealistic level of risk aversion.
It is vital that we remember that it is primarily the responsibility of researchers to identify, gauge and weigh risk. Research ethics review bodies have the role of providing feedback to researchers to facilitate projects, not of catching researchers out and chastising them for neglecting a risk. This is especially true if we do not have resource material to assist researchers with regard to this wider focus.
We need to improve our understanding of the complexity of risks, extending our vision to look beyond the physical, the immediate, the proximate, and the individual. At the same time, we need to review our understanding of where the responsibility for the identification, mitigation and management of all of these risks should fall.
In recent decades, national human research ethics frameworks, such as the Australian National Statement on Ethical Conduct in Human Research (National Statement) (NHMRC 2007a) have augmented their original interest in physical harm with a much broader set of psychological, legal, economic and social harms. Documents such as the Australian Code for the Responsible Conduct of Research (NHMRC 2007b) cast this net wider still to include societal and environmental risks. However, the likelihood of incidence, the significance of the harm and the timing of such harms can be harder to predict, quantify and mitigate.
We are fuelling the potential for an adversarial climate (Israel et al., 2016) if we fail to provide researchers and our research ethics reviewers with guidance on how to approach such matters.
Human research ethics committees, guided by the frameworks within which they function, focus on immediate risks posed directly to the participants in a project. For example, the National Statement requires committees to be satisfied that “the likely benefit of the research must justify any risks or discomfort to participants” (NHMRC, 2007a, p. 10). Committees can feel less equipped to tackle risks that can affect participants after the active phase of a project, such as harms to the reputation and standing of a group caused by research outputs distributed long after data collection and perhaps years after the research ethics review.
Harm can also impact upon populations and social, professional and community groups much wider than the actual participants. For example, research into the academic performance of children from schools in a low socio-economic area, if reported insensitively by researchers or, indeed, by the media, can further stigmatise the children and harm the reputation of the schools and teachers. Again, work on the informal income of members of marginalised communities might be used subsequently by government to target tax avoidance by the already vulnerable. Lastly, research on the attitudes of residents in coastal communities to climate change and rising sea levels can detrimentally affect the value of surrounding land. Indeed, some review processes require researchers to consider the possibility of adverse findings (both medical and non-medical in nature). Although the National Statement (NHMRC, 2007a, p. 13) recognises risks of this kind, it leaves unclear whose responsibility they are.
Focussing on the rights of individuals from a Western liberal democratic perspective is unlikely to be helpful in other contexts, such as an Aboriginal and Torres Strait Islander community, in a cultural context where a Confucian approach would be more appropriate (Katyal, 2011), or even in some organisational settings where accountability is partly achieved through openness to external scrutiny in the form of research and evaluation. As a result, there have also been prompts to consider risks to identifiable third parties, groups, institutions and communities (Weijer et al., 1999). Values and Ethics and the Guidelines for Ethical Research in Australian Indigenous Studies (GERAIS) do recognise that such matters might be considered by some potential participant pools on a collective basis, perhaps informed by a history of research abuse and exploitation of their communities. This attention to collective interests is echoed in other work on research ethics and Indigenous peoples around the world (Israel, 2015).
This is perhaps one of the reasons why some minorities have produced their own research ethics guidance documents (for examples, see Hudson et al. (2010), Nordling (2017) and Islamic Council of Victoria (2017)). The value of this kind of guidance lies, at least in part, in its clarification that the important responsibility to foresee, mitigate and manage these risks rests with researchers.
Another example of deleterious impacts from research that might not be immediately obvious to researchers, research ethics reviewers or research office staff arises in the category of ‘dual use’ research (Miller and Selgelid, 2007). This is where a technique, technology or an apparently non-military discovery can be used for military or terrorist purposes, sometimes with devastating effect. Initially a concern of biomedical scientists, the issue has also troubled anthropologists, geographers, sociologists, political scientists and international relations experts in the face of overt or covert funding by military or intelligence agencies (Israel, 2015). One of the growing challenges for a significant proportion of such work (e.g. quantum computing, computer security/intrusion/hacking, smart materials, computer vision and energy storage) is that the work will not typically require research ethics review or any other form of independent review. The existing model of human research ethics review is initially attractive as a response, but some reflection will quickly show that ethics committees are unlikely to possess the expertise and information needed to identify dual use, and the work may be occurring in disciplines that have not built their capacity to think through the ethics of working with human participants.
Australia has strengthened its export control framework with regard to security classification, Defence Department permits and approvals, and other requirements (e.g. data security). Many Australian universities have established dedicated teams and processes for this particular area of concern, which remains an area of community concern (see Hamilton and Joske, 2017). Such controls involve balancing academic freedom, a commitment to open science and the value of scientific discovery against (inter)national security, trade and diplomatic interests. Such a balancing exercise is plainly beyond the capacity required for human research ethics review, so responsibility needs to rest with another mechanism.
The implications of all of this are not trivial. It all requires a change in thinking for researchers, institutions, funding bodies, learned academies and regulators. Our attention to the potential harms from a project needs to encompass research outputs, impacts upon communities and persons who were not direct participants in the project, as well as national interests. At the same time, the consideration of a project vis-à-vis the ethical principle of research merit needs to include broader societal benefits and contributions to knowledge that might also involve a much wider group and a longer timeframe than the ones to which we are accustomed. However, in order to reach a more sophisticated analysis of the balance between potential harms and benefits, we need to allocate responsibility for such risks more clearly and devise mechanisms that reassure the community that these responsibilities have been fulfilled.
In our view, merely widening the scope of the responsibilities of human research ethics committees to address all these risks could not only exacerbate the propensity for risk aversion, but could also distort their important focus on the welfare of research participants. The current review system needs to find ways of working constructively with other processes that build the capacity of researchers and their institutions to work with these broader risks and benefits.
Institutions must provide resource materials for researchers and research ethics reviewers with the primary objective of supporting reflective practice and building expertise in risk assessment and mitigation. Researchers must recognise these matters as their primary responsibility, and research ethics reviewers must focus upon facilitation rather than enforcing compliance. We have written about how institutions can implement such an approach (Israel and Allen, in press).
In short, we cannot afford to ignore these challenges. Instead, we should take innovation seriously and seek constructive solutions.
Allen, G. and Israel, M. (in press, 2018) Moving beyond Regulatory Compliance: Building Institutional Support for Ethical Reflection in Research. In Iphofen, R. and Tolich, M. (eds) The SAGE Handbook of Qualitative Research Ethics. London: Sage.
Hamilton, C. and Joske, A. (2017) Australian taxes may help finance Chinese military capability. The Australian. http://www.theaustralian.com.au/news/inquirer/australian-taxes-may-help-finance-chinese-military-capability/news-story/6aa9780c6a907b24993d006ef25f9654 (accessed 31 December 2017).
Hudson, M., Milne, M., Reynolds, P., Russell, K. and Smith B. (2010) Te Ara Tika. Guidelines for Māori Research Ethics: A Framework for Researchers and Ethics Committee Members. http://www.hrc.govt.nz/sites/default/files/Te%20Ara%20Tika%20Guidelines%20for%20Maori%20Research%20Ethics.pdf (accessed 29 December 2017).
Islamic Council of Victoria (2017) ICV Guidelines for Muslim Community-University Research Partnerships. http://www.icv.org.au/new/wp-content/uploads/2017/09/ICV-Community-University-Partnership-Guidelines-Sept-2017.pdf (accessed 29 December 2017).
Israel, M. (2015) Research Ethics and Integrity for Social Scientists: Beyond Regulatory Compliance. London: Sage.
Israel, M., Allen, G. and Thomson, C. (2016) Australian Research Ethics Governance: Plotting the Demise of the Adversarial Culture. In van den Hoonaard, W. and Hamilton, A. (eds) The Ethics Rupture: Exploring Alternatives to Formal Research-Ethics Review. Toronto: University of Toronto Press. pp 285-316. http://www.utppublishing.com/The-Ethics-Rupture-Exploring-Alternatives-to-Formal-Research-Ethics-Review.html
Katyal, K.R. (2011) Gate-keeping and the ambiguities in the nature of ‘informed consent’ in Confucian societies. International Journal of Research & Method in Education 34(2): 147-159.
Miller, S. and Selgelid, M. (2007) Ethical and philosophical consideration of the dual use dilemma in the biological sciences. Science and Engineering Ethics 13: 523-580.
NHMRC (2007a) National Statement on Ethical Conduct in Human Research. http://www.nhmrc.gov.au/guidelines-publications/e72 (accessed 29 December 2017).
NHMRC (2007b) Australian Code for the Responsible Conduct of Research. http://www.nhmrc.gov.au/guidelines-publications/r39 (accessed 29 December 2017).
Nordling, L. (2017) San people of Africa draft code of ethics for researchers. Science, March 17. http://www.sciencemag.org/news/2017/03/san-people-africa-draft-code-ethics-researchers (accessed 29 December 2017).
Weijer, C., Goldsand, G. and Emanuel, E.J. (1999) Protecting communities in research: Current guidelines and limits of extrapolation. Nature Genetics 23: 275-280.
This post may be cited as:
Allen, G. and Israel, M. (2018, February 1) What’s at risk? Who’s responsible? Moving beyond the physical, the immediate, the proximate, and the individual. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/whats-risk-whos-responsible-moving-beyond-physical-immediate-proximate-individual