ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


What’s at risk? Who’s responsible? Moving beyond the physical, the immediate, the proximate, and the individual

 

Building the Conversation

This month’s addition to the Building the Conversation series reflects upon how we approach risks that are not physical, harms to people other than a project’s participants, and harms that are not immediate.

To some extent, when researchers reflect upon those harms associated with a project, they may well limit their assessment of risk to the here and now and to identifiable individuals. In addition, for projects in the medical sciences, those risks were long understood as predominantly physical in the form of injury, infection or disability and related to direct participants (e.g. persons who received an experimental pharmacological agent). This limited vision is not particularly surprising. One of the perverse consequences of requiring researchers to reflect on whether the potential benefits of research justify risk to participants is that some researchers are dissuaded from looking too carefully for risks and therefore avoid developing strategies for minimising these risks and mitigating possible harms. Even more perversely, this reluctance can trigger in human research ethics committees an unrealistic level of risk aversion.

It is vital that we remember that it is primarily the responsibility of researchers to identify, gauge and weigh the risk. Research ethics review bodies have the role of providing feedback to researchers to facilitate projects, not catch out researchers and chastise them for neglecting a risk. This is especially true if we do not have resource material to assist researchers with regard to this wider focus.

We need to improve our understanding of the complexity of risks, extending our vision beyond the physical, the immediate, the proximate, and the individual. At the same time, we need to reconsider on whom the responsibility for the identification, mitigation and management of all of these risks should fall.

In recent decades, national human research ethics frameworks, such as the Australian National Statement on Ethical Conduct in Human Research (National Statement) (NHMRC 2007a) have augmented their original interest in physical harm with a much broader set of psychological, legal, economic and social harms. Documents such as the Australian Code for the Responsible Conduct of Research (NHMRC 2007b) cast this net wider still to include societal and environmental risks. However, the likelihood of incidence, the significance of the harm and the timing of such harms can be harder to predict, quantify and mitigate.

We are fuelling the potential for an adversarial climate (Israel et al., 2016) if we fail to provide researchers and our research ethics reviewers with guidance on how to approach such matters.

Human research ethics committees, guided by the frameworks in which they function, focus on immediate risks directly to the participants in a project. For example, the National Statement requires committees to be satisfied that “the likely benefit of the research must justify any risks or discomfort to participants” (NHMRC, 2007a, p. 10). Committees can feel less equipped to tackle risks that can affect participants after the active phase of a project, such as harms to the reputation and standing of a group that can come from research output that is distributed long after data collection and perhaps years after the research ethics review.

Harm can also impact upon populations and social, professional and community groups much wider than the actual participants. For example, research into the academic performance of children from schools in a low socio-economic area, if reported insensitively by researchers or, indeed, by the media, can further stigmatise the children and harm the reputation of the schools and teachers. Again, work on the informal income of members of marginalised communities might be used subsequently by government to target tax avoidance by the already vulnerable. Lastly, research on the attitudes of residents in coastal communities to climate change and rising sea levels can detrimentally affect the value of surrounding land. Indeed, some review processes require researchers to consider the possibility of adverse findings (both medical and non-medical in nature). Although the National Statement (NHMRC, 2007a, p. 13) recognises risks of this kind, it leaves unclear whose responsibility they are.

Focussing on the rights of individuals from a Western liberal democratic perspective is unlikely to be helpful in other contexts, such as an Aboriginal and Torres Strait Islander community, in a cultural context where a Confucian approach would be more appropriate (Katyal, 2011), or even in some organisational settings where accountability is partly achieved through openness to external scrutiny in the form of research and evaluation. As a result, there have also been prompts to consider risks to identifiable third parties, groups, institutions and communities (Weijer et al., 1999). Values and Ethics and the Guidelines for Ethical Research in Australian Indigenous Studies (GERAIS) do recognise that such matters might be considered by some potential participant pools on a collective basis, perhaps with an awareness of a history of research abuse and exploitation of their communities. This attention to collective interests is echoed in other work on research ethics and Indigenous peoples around the world (Israel, 2015).

This is perhaps one of the reasons why some minorities have produced their own research ethics guidance documents (for examples, see Hudson et al. (2010), Nordling (2017) and Islamic Council of Victoria (2017)). Part of the value of this kind of guidance is that it makes clear that the important responsibility to foresee, mitigate and manage these risks lies with researchers.

Another example of deleterious impacts from research that might not be immediately obvious to researchers, research ethics reviewers or research office staff arises in the category of ‘dual use’ research (Miller and Selgelid, 2007). This is where a technique, technology or an apparently non-military discovery can be used for military or terrorist purposes – sometimes with devastating effect. Initially the concern of biomedical scientists, the issue has since troubled anthropologists, geographers, sociologists, political scientists and international relations experts in the face of overt or covert funding by military or intelligence agencies (Israel, 2015). One of the growing challenges for a significant proportion of such work (e.g. quantum computing, computer security/intrusion/hacking, smart materials, computer vision and energy storage) is that the work will not typically require research ethics or any other form of independent review. The existing model of human research ethics review is initially attractive as a response, but some reflection will quickly show that ethics committees are unlikely to possess the expertise or information to identify the dual use, and the work may be occurring in disciplines that have not built their capacity to think through the ethics of working with human participants.

Australia has strengthened its export control framework with regard to security classification, Defence Department permits/approvals and other requirements (e.g. data security). Many Australian universities have established dedicated teams and processes for this particular area of concern. It remains an area of community concern (see Hamilton and Joske, 2017). Such controls involve balancing academic freedom, a commitment to open science and the value of scientific discovery against (inter)national security, trade and diplomatic interests. Such a balancing exercise is plainly beyond the capacity required for human research ethics review, so responsibility needs to rest with another mechanism.

The implications of all of this are not trivial. It requires a change in thinking for researchers, institutions, funding bodies, learned academies and regulators. Our attention to the potential harms from a project needs to encompass research outputs, impacts upon communities and persons who were not direct participants in the project, as well as national interests. At the same time, the consideration of a project vis-à-vis the ethical principle of research merit needs to include broader societal benefits and contributions to knowledge that might also involve a much wider group and a longer timeframe than the ones to which we are accustomed. However, in order to reach a more sophisticated analysis of the balance between potential harms and benefits, we need to more clearly allocate responsibility for such risks and devise mechanisms that reassure the community that these responsibilities have been fulfilled.

In our view, merely widening the scope of the responsibilities of human research ethics committees to address all these risks could not only exacerbate the propensity for risk aversion, but could also distort their important focus on the welfare of research participants. The current review system needs to find ways of working constructively with other processes that build the capacity of researchers and their institutions to work with these broader risks and benefits.

Institutions must provide resource materials for researchers and research ethics reviewers with the primary objective of resourcing reflective practice and building expertise in risk assessment and mitigation. Researchers must recognise these matters as their primary responsibility, and research ethics reviewers must focus upon facilitation rather than enforcing compliance. We have written about how institutions can implement such an approach (Allen and Israel, in press).

In short, we cannot afford to ignore these challenges. Instead, we should take innovation seriously and seek constructive solutions.

References

Allen, G. and Israel, M. (in press, 2018) Moving beyond Regulatory Compliance: Building Institutional Support for Ethical Reflection in Research. In Iphofen, R. and Tolich, M. (eds) The SAGE Handbook of Qualitative Research Ethics. London: Sage.

Hamilton, C. and Joske, A. (2017) Australian taxes may help finance Chinese military capability. The Australian. http://www.theaustralian.com.au/news/inquirer/australian-taxes-may-help-finance-chinese-military-capability/news-story/6aa9780c6a907b24993d006ef25f9654 (accessed 31 December 2017).

Hudson, M., Milne, M., Reynolds, P., Russell, K. and Smith B. (2010) Te Ara Tika. Guidelines for Māori Research Ethics: A Framework for Researchers and Ethics Committee Members. http://www.hrc.govt.nz/sites/default/files/Te%20Ara%20Tika%20Guidelines%20for%20Maori%20Research%20Ethics.pdf (accessed 29 December 2017).

Islamic Council of Victoria (2017) ICV Guidelines for Muslim Community-University Research Partnerships. http://www.icv.org.au/new/wp-content/uploads/2017/09/ICV-Community-University-Partnership-Guidelines-Sept-2017.pdf (accessed 29 December 2017).

Israel, M. (2015) Research Ethics and Integrity for Social Scientists: Beyond Regulatory Compliance. London: Sage.

Israel, M., Allen, G. and Thomson, C. (2016) Australian Research Ethics Governance: Plotting the Demise of the Adversarial Culture. In van den Hoonaard, W. and Hamilton, A. (eds) The Ethics Rupture: Exploring Alternatives to Formal Research-Ethics Review. Toronto: University of Toronto Press. pp 285-316. http://www.utppublishing.com/The-Ethics-Rupture-Exploring-Alternatives-to-Formal-Research-Ethics-Review.html

Katyal, K.R. (2011) Gate-keeping and the ambiguities in the nature of ‘informed consent’ in Confucian societies. International Journal of Research & Method in Education 34(2): 147-159.

Miller, S. and Selgelid, M. (2007) Ethical and philosophical consideration of the dual use dilemma in the biological sciences. Science and Engineering Ethics 13: 523-580.

NHMRC (2007a) National Statement on Ethical Conduct in Human Research. http://www.nhmrc.gov.au/guidelines-publications/e72 (accessed 29 December 2017).

NHMRC (2007b) Australian Code for the Responsible Conduct of Research. http://www.nhmrc.gov.au/guidelines-publications/r39 (accessed 29 December 2017).

Nordling, L. (2017) San people of Africa draft code of ethics for researchers. Science, March 17. http://www.sciencemag.org/news/2017/03/san-people-africa-draft-code-ethics-researchers (accessed 29 December 2017).

Weijer, C., Goldsand, G. and Emanuel, E.J. (1999) Protecting communities in research: Current guidelines and limits of extrapolation. Nature Genetics 23: 275-280.

Contributors
Dr Gary Allen
Senior consultant | AHRECS | Gary’s AHRECS bio | gary.allen@ahrecs.com

Prof. Mark Israel
Senior consultant | AHRECS | Mark’s AHRECS bio | mark.israel@ahrecs.com

This post may be cited as:
Allen G. and Israel M. (2018, 1 February 2018) What’s at risk? Who’s responsible? Moving beyond the physical, the immediate, the proximate, and the individual. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/whats-risk-whos-responsible-moving-beyond-physical-immediate-proximate-individual

Magical incantations and the tyranny of the template

 

Building the Conversation

This month’s addition to the Building the Conversation series reflects upon how institutional template consent materials can have odd, ill-suited or even nonsensical consequences.

It is widely accepted that human research ethics committees (HRECs) devote much of their time to the review of plain language statements or participant information and consent forms (PICFs). It should be noted that, unlike the US, Australia’s human research ethics arrangements have not been enacted into law. While chapter 2.2 of the National Statement does identify some required components of a consent strategy, its provisions are far fewer and less specific than those often demanded by Australian research ethics committees. Historically, this amount of attention may have been due to the fact that, without guidance or experience, researchers devised their own PICFs, resulting in a possibly bewildering variety of structure, grammar and expression. In more recent decades, the focus of pharmaceutical sponsors on maximising disclosure has caused much of the increased length and detail.

Probably in response to this variety and the increasing time devoted to review and the often detailed and even pedantic correction, HRECs hit on the idea of providing templates or standard forms for researchers to follow. The likely purpose behind these initiatives was to reduce the variety of PICFs and so in turn reduce the time that committees spent on them, correcting spelling, grammar and adding information the committees saw as being key to informing potential participants. An implicit message in the provision of standard forms and templates was that if researchers used these forms, it was more likely that the forms, and the projects, would be approved.

Ironically, the use of standard forms and templates may have generated perverse consequences, reversing the problem that they were designed to address. Frequently, templates contain expressions that, in the context of the research project under review, become meaningless, implausible or at least ambiguous so that committees increasingly need to ask researchers to clarify how commonly used terminology fits their project in order to provide appropriate disclosure to potential participants. Some phrases appear to be used like magical incantations to ward off the evil eye of the reviewer.

Here are some recurrent examples. We encourage readers to add to this list through the discussion forum, but not in a way that ridicules researchers. While there may be some comfort in acknowledging that the experience is shared, we would like to support better practices.

1. Participants can withdraw at any time. Although these words are a response to the National Statement (paragraph 2.2.6 (g)), they are often ambiguous or meaningless if not further explained. Accordingly, HREC members tire of asking (and researchers of responding) how this can be the case when participants and their information are not identified, either on collection or when participants’ data are merged. Often, whether participants who withdraw can also withdraw their data is left open and needs clarification.

2. Counselling will be available or participants may contact Lifeline or Beyondblue. Again a response to the National Statement (paragraph 2.2.6 (c)), the context of particular projects demands some explanation: what counselling will be available, who will provide it, will they be independent of the research team, and do participants have ready access to suitable communication technology?

3. All your information will be kept confidential. Again, although a response to the National Statement (paragraph 2.2.6 (f)), the statement is often inadequate because participants are likely to understand confidentiality to mean something like secrecy. At its broadest, confidentiality in this context means that the information will be used for the purposes of the research project but, without further consent, for no other purpose. So understood, consent would permit the use of participants’ information in the publication of results and outcomes of the research. However, it is more likely that participants understand this to mean no more than that only research team members will have access to the information, which is also incomplete. The major shortcoming of this statement, though, is the lack of detail – a description of the manner in which participants’ information will be collected, stored, analysed and used is most likely to provide clarity: facts are usually better than assurances.

4. All responses will be anonymous. This may also be intended as a response to the same National Statement paragraph as in 3, but it suffers from a similar degree of ambiguity. Sometimes, HREC members find it necessary to ask how information collected in a face-to-face interview can be anonymous, a question unlikely to please a researcher who has carefully planned how to conceal the identity of interviewees in the way that interview data are analysed and stored. Sometimes, responses cannot be anonymised, either because of the process of collection – a focus group, for example – or because there are too few plausible alternative identities behind an “anonymous” description (e.g. senior Australian politicians describing their former careers as a merchant banker, journalist and lawyer). Sometimes, participants do not want to be anonymised, and failing to allow for identification denies them that choice.

5. All research information will be kept in a locked filing cabinet in a locked office. This is, as with all the previous examples, a response to the National Statement and, where applicable, to the mandates of the Good Clinical Practice guidelines, but when it is used in relation to a project that collects and stores data digitally, it is simply irrelevant and entirely inadequate as a description of secure storage of such data. (In our experience, the use of digital data has led to a decline in the use of this outdated expression, but it still recurs.) It can also mask processes that involve data transfer across borders, sometimes between field sites and the research base, sometimes between multinational collaborators, and sometimes just because of naivety in relation to cloud computing.

Some HRECs have adopted templates that look like they were drafted by a group of contract lawyers after a long lunch. One could imagine them saying to each other ‘go on, add those four paragraphs about liability for reputational damage. See if anyone notices’. Such examples, however jocular, usually reflect the fact that boilerplate language is used in contract to protect the drafting party, not to facilitate communication. We have observed some reactions that suggest the carefully crafted language can cause derision and/or be seen as a ‘do not sue us’ exercise.

Some HRECs adopt and police expression preferences: for example, participants should only be ‘invited’ to participate, not ‘requested’; researchers should refer to ‘participants’ rather than ‘subjects’, even when such a term would simply mask a research design that provides for no meaningful participation. These idiosyncrasies can be particularly frustrating for researchers conducting multisite projects.

Use of templates and standard forms risks incomplete and even misleading communication and can lead to apparent pedantry in HREC responses. Use of templates with groups of participants for whom such a template is inappropriate because of their level of literacy, language impairment, cultural emphasis on oral provision of information or distrust of official forms, also undermines any effort to gain real consent rather than just documenting apparent compliance.

Perverse consequences can be reduced if not eliminated with a focus on the purpose of these documents and a preference for short descriptions of how researchers conduct research, collect, store, analyse and destroy data rather than bland assurances that participants’ expectations will be addressed.

A significant question that remains largely unasked and unanswered is whether consent strategies based upon a review body’s template actually facilitate the informed and voluntary consent of potential participants. How are the language and objectives of such consent processes actually perceived? There has been research on related questions about the effectiveness of consent strategies more generally, in both social science and clinical research, but it is not clear whether the insights gained in these studies have informed the development of templates. The growth of consumer groups focussing on specific health conditions offers opportunities for the collaborative development of templates that are more likely to be effective.

Good guides implement principles. Accordingly, good consent guides implement the principle of respect for participants, researchers and HREC members: respect for participants’ capacity and freedom to decide about participation; respect for researchers’ expertise to devise clear means of informing participants; and respect for HREC members’ ability to recognise the specific contexts of proposed research and review applications accordingly.

The following advice in the National Statement (p. 7) about its use applies equally to the use of consent templates:
“These ethical guidelines are not simply a set of rules. Their application should not be mechanical. It always requires, from each individual, deliberation on the values and principles, exercise of judgement and an appreciation of context.”

Acknowledgement

With grateful thanks to Mark Israel and Nik Zeps for their input.

Contributors
Colin Thomson – Senior Consultant, AHRECS | Colin’s AHRECS bio | colin.thomson@ahrecs.com

This post may be cited as:
Thomson C. (2017, 22 December 2017) Magical incantations and the tyranny of the template. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/magical-incantations-tyranny-template

Ethical use of social media as a recruitment tool

 

Building the Conversation

From this month we will start including posts about the ethical design of human research. Our intent is not to present these ideas as the definitive or only way to approach a particular challenge/need but instead as prompts to get us all – participants, researchers, reviewers, regulators, administrators and other stakeholders – discussing useful and helpful approaches to the design, research ethics review and conduct of human research.

There are numerous reasons why social media can appear an attractive way to reach potential participants – it may be free or at least relatively inexpensive, it is increasingly ubiquitous across a range of Australian age groups (Sensis, 2017), and can be a powerful way to build an ongoing connection with a cohort of potential participants.

A recent issue of The American Journal of Bioethics focussed on the ethics of using social media as research platforms. An article by Luke Gelinas and his colleagues (Gelinas et al., 2017) noted the lack of resources and regulatory guidance in the United States on the use of social media as a recruitment tool. They concluded that this was a significant problem since, for all its benefits, the use of social media is not without ethical and practical challenges and traps. Fortunately, these are not insurmountable. Gelinas’ article explored how biomedical researchers in the United States might respond by attending to the issues of researcher transparency and respect for the privacy of participants; in this blog post, we provide advice for Australian researchers and reviewers in an effort to stimulate further discussion between them.

Excluding some potential participants – The penetration of social media platforms across all age groups of the Australian population over the last ten years has been truly remarkable. There do remain, however, some significant differences in the extent of usage depending on age, geographic location and socio-economic status. Consequently, open recruitment via social media may skew a participant pool towards areas where social media use is more prevalent and may inadvertently exclude some groups of people with perspectives, views or voices whose absence might undermine the value of a project’s findings.

Platform differences and exclusion – Not every social media platform has the same user demographics; someone who uses social media 15 times per day may only be frequenting one platform. There is no single platform that is used by most social media users. Indeed, even platforms such as Facebook seem to be used more by a particular age range of people within the Global North. Other countries have their own platforms that are heavily used within the region (e.g. China’s WeChat (微信; Wēixìn) and Russia’s VK (VKontakte) and Odnoklassniki), but hardly ever by people outside that region.

Privacy rules and concerns – Privacy concerns are amongst the more significant reasons why some people do not currently use social media (Sensis, 2017). Indeed, many users do not understand the privacy rules of their preferred platform(s) and remain concerned about privacy. One large survey conducted by Evans et al. (2015) suggested that concern was greater among younger and more frequent users.

Comments from participants and others – Enabling participants to comment on the recruitment social media pages for a project might be an effective way to engage with potential participants. However, there are important reasons for caution about allowing participants to comment on such pages as they might expose themselves to risk. Individuals might divulge whether they are participants or were excluded by the screening tool. In addition, they might distort the data collected from others by prompting particular responses to their own comments.

Pseudonyms and de-identification – The presumptive remedy to many social media challenges is to delete, modify or otherwise obfuscate personal identifiers such as user names. However, platform rules often specifically preclude such an approach (e.g. Twitter treats any such de-identification as a copyright concern). Furthermore, modification of comments or descriptions raises at least the possibility that the researcher has fabricated or falsified data (much as occurred in Alice Goffman’s offline study; see Neyfakh, 2015).

Recruitment materials – Many national human research ethics arrangements, such as Australia’s National Statement on Ethical Conduct in Human Research specify that review bodies must consider and first approve recruitment materials, including the text of posts to go on a social media page. In most cases, this role will be delegated to the Chair (for executive review) or the Ethics Officer (for administrative review). The rigour and substance of this review should be proportionate to the risks and ethical sensitivities of a project. The need and purpose of this review reflects the potential for risks, privacy and other human research ethics matters that can be associated with a project’s recruitment strategy.

The application for research ethics review should cover the above matters and explain why the applicant believes the proposed approach is ethical, appropriate, respectful and justified. Such matters may also need to be discussed in the consent (if not the recruitment) materials. Similarly, research ethics reviewers should expect such a justification to be provided, be open to and accepting of innovation, offer praise where due, and share their thinking where uncomfortable with a proposed approach.

Like most topics in human research ethics, there is no single ‘correct’ approach with regard to recruitment and social media. Ethical research may be best pursued through reflection and collegial discussion.

References

Evans H, Ginnis S and Bartlett J (2015) #SocialEthics: A guide to embedding ethics in social media research.

Gelinas L. et al. (2017) Using Social Media as a Research Recruitment Tool: Ethical Issues and Recommendations. The American Journal of Bioethics, Vol. 17, No. 3. DOI: 10.1080/15265161.2016.1276644

Neyfakh, L. (2015) The Ethics of Ethnography. Slate Magazine. Retrieved 8 November 2017, from http://www.slate.com/articles/news_and_politics/crime/2015/06/alice_goffman…

NHMRC (2007) National Statement on ethical conduct in human research. http://www.nhmrc.gov.au/guidelines-publications/e72.

Sensis (2017) Social Media Report 2017. Retrieved from: https://www.sensis.com.au/asset/PDFdirectory/Sensis_Social_Media_Report_2017-Chapter-1.pdf (accessed 7 November 2017)

Other reading

Chambers C (2014, 1 July) Facebook fiasco: was Cornell’s study of ‘emotional contagion’ an ethics breach? The Guardian. Retrieved from http://www.theguardian.com/science/head-quarters/2014/jul/01/facebook-cornell-study-emotional-contagion-ethics-breach (accessed 8 November 2015)

Leetaru K (2016, 17 June) Are Research Ethics Obsolete in the Era of Big Data? Forbes/Tech
http://www.forbes.com/sites/kalevleetaru/2016/06/17/are-research-ethics-obsolete-in-the-era-of-big-data/#1a083ad31cb9

Contributors
Dr Gary Allen | Senior Consultant AHRECS | Gary’s AHRECS bio | gary.allen@ahrecs.com

Prof. Mark Israel | Senior Consultant AHRECS | Mark’s AHRECS bio | mark.israel@ahrecs.com

This post may be cited as:
Allen G. and Israel M. (2017, 20 November 2017) Ethical use of social media as a recruitment tool. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/ethical-use-social-media-recruitment-tool
