Cracking the Code: Is the Revised Australian Code likely to ensure Responsible Conduct of Research?

The Australian Code for the Responsible Conduct of Research is presently under review. Issued jointly in 2007 by the National Health and Medical Research Council, the Australian Research Council and Universities Australia, the current code is a 41-page document divided into two parts. Part A, comprising some 22 pages, sets out the responsibilities of institutions and researchers for conducting sponsored research responsibly. Part B, comprising approximately 11 pages, provides advice on procedures for identifying and investigating instances of the conduct of research in which those responsibilities have not been fulfilled.

The current proposal is to replace this document with a five-page statement of eight principles of responsible research conduct and two lists of responsibilities, again of institutions and researchers, together with a 25-page guidance document (the first of several) setting out preferred procedures for the identification and investigation of research conduct that has not adhered to the responsibilities set out in the five-page code.

Among the innovations in these changes, other than a significant reduction in the size of the document, is the proposal that the expression ‘research misconduct’ not be used in the guide on identification and investigation but be replaced by the expression ‘breach’. An important reason given for this proposal is the avoidance of conflict with the requirements of institutional enterprise bargaining agreements (EBAs).

The scale of the proposed changes is likely to generate extensive debate, and this was already apparent during the consultation period that closed earlier this year. The consultation process conformed to the minimal requirements of sections 12 and 13 of the NHMRC Act: a process that publicises, by formal means, drafts of proposed changes to which responses are sought. Current practice is to prefer responses submitted by electronic means and to require respondents to answer questions determined by the Council. The passivity and formality of the process tend to attract and privilege better-resourced interests. In some of the published debate that occurred during the consultation period, much attention was paid to the change in scale and to the proposal to refer not to research misconduct but only to breaches. Discussion at this level risks ignoring several underlying systemic questions, or assuming the answers to them. The purpose of this brief opinion is to tease out those questions.

The key premise of these remarks on the existing Code and any revision is that the Code constitutes a form of regulation of research conduct. With this premise comes a centrally important question: what are the aims of the regulation of this activity?

The apparent aims of the revision are the definition of responsible research conduct and relevant responsibilities, and the identification, disclosure and investigation of failures to conduct research responsibly.

Underlying these aims lie some broader and deeper considerations. These include whether the purpose to be served by regulation of research is to:

  • protect the reputation of research;
  • prevent waste and misguided work that can follow from relying on irresponsible and inaccurate research;
  • protect the reputation of research institutions;
  • prevent the waste – or even the risk of waste – of public research funds;
  • penalise those who fail to fulfil their research responsibilities, whether the failures are on the part of institutions or individual researchers;
  • protect the public interest in research by promoting productive use of public research funds, and rewarding responsible researchers and institutions.

This regulatory situation is not unlike that faced by environmental protection through the 1990s and beyond, and by other areas such as oil rig and building safety in the UK. One lesson from those experiences is that where the aims of regulation are the protection of the environment or the safety of buildings or oil rigs, those aims are more likely to be achieved by giving those who conduct the relevant activities the opportunity to devise their own methods for achieving them, methods that are then assessed by a responsible authority against a set of standards. The shift from tight prescription of safety standards to some form of well-defined and supervised self-regulation appears to have been successful in achieving regulatory aims.

The choice of which of the above purposes is to be served will have a direct and profound effect on the methods to be used. For example, if the purpose were the protection of the reputation of research institutions, it would not be surprising to extend a significant degree of autonomy to institutions to set up their own procedures and methods for promoting responsible conduct and so establishing their good reputation. However, there would be an incentive for institutions not to disclose instances of irresponsible research publicly but to manage them internally. Reliance on the need to conform to enterprise bargaining agreements might be used to justify such non-disclosure.

If the purpose were to penalise those institutions or researchers who fail to fulfil relevant responsibilities for responsible research conduct, the system would need to define those responsibilities with some precision, so that the definitions could be made enforceable, and to establish an agency with appropriate investigation powers and sanctioning authority to identify, investigate and reach decisions as to whether relevant responsibilities had or had not been fulfilled.

A relevant regulatory model may not be that of criminal prosecution but rather that of corruption investigation. There is a public interest that motivates the establishment and operation of anti-corruption agencies. The outcomes of their enquiries can lay the foundation for individual punishment of those found ‘guilty’ of corrupt behaviour, and those proceedings are then taken up by different state agencies. Research integrity policy can be seen to have similar aims: first, to protect the public interest by empowering an independent agency to uncover corrupt conduct, and, second, following such a finding, to prosecute individuals by a separate process. A research integrity agency could be given the task of investigating and making findings of research misconduct, leaving to the employers of those individuals the responsibility to impose consequences. Although remaining autonomous in following their own procedures, and so conforming to EBAs, institutions would be likely to find it difficult to conceal the process, because they would be implementing a public finding of research misconduct.

The debate so far appears either to have left most of these underlying questions unanswered or to have assumed answers to them. Because this has not been explicit, those answers are unlikely to be consistent. The chosen terminology, for example, discloses some of these assumptions. The responsibilities described in the five-page code are expressed in such general terms that it would be very difficult to determine whether they had been fulfilled. For example, what evidence would constitute a failure on the part of an institution to fulfil the obligation to develop and maintain the currency and ready availability of a suite of policies and procedures which ensure that institutional practices are consistent with the principles and responsibilities of the Code? Or, what evidence would constitute a failure on the part of a researcher to fulfil the obligation to foster, promote and maintain an approach to the development, conduct and reporting of research based on honesty and integrity? The very breadth and generality of the language used in these statements suggest that the purpose is not their enforcement.

A further example is the proposal not to use the expression research misconduct in the document, but to refer instead to breaches of the Code. The language of breach is better applied to duties, rules or standards that are drafted with enforcement in mind, so that it is clear when the evidence discloses a breach and when it does not. Casting the substantive document in the form of responsibilities makes this difficult. In common language, responsibilities are either fulfilled or they are not, and where they are not, it is usual to speak of a failure to fulfil the responsibility rather than a breach. The use of this language betrays a confusion of underlying purposes.

The advocates of an enforcement approach have argued for a national research integrity agency, like that in some other Western nations. There may, however, be a simpler, more politically and fiscally feasible model available.

If the underlying purposes are to protect the reputation of research as a public interest, to prevent waste and misguided work that can follow from relying on irresponsible and inaccurate research and to prevent waste or the risk of waste of public research funds, then the mode of regulation would be more likely to resource the training of researchers, the guidance of institutions in establishing appropriate research environments and the public promotion of responsible and effective research. The response to irresponsible research conduct would be directed at the withdrawal from the public arena of unsupported and inaccurate results, appropriate disclosure of these (e.g. to journal editors and research funding agencies) and appropriate apologies from responsible institutions and researchers supported with undertakings for reform of faulty procedures and practices.

In implementing these purposes, it would not be surprising for the system to give significant authority to both public research funding agencies. This could include, for instance, authority to ensure that institutions seeking access to their funds establish appropriate procedures for responsible research conduct, including sufficient and sustained training of researchers, adequate resources and research facilities, and appropriate auditing and reporting of research conduct. Agency authority could also include an entitlement to establish not only whether researchers who seek or have access to research funding have research records free of irresponsibility, but also whether eligible institutions have current employees with such records.

Access to research funding has been a potent motivator in the institutional establishment of human research ethics committees, both in the United Kingdom, as Adam Hedgecoe (2009) has shown, and in Australia where the NHMRC’s 1985 decision required institutions to establish institutional ethics committees if they wanted access to research funds with which to conduct human research. In both cases, the decisions were followed by a notable increase in the number of institutional research ethics committees.

An approach that actively promotes responsible research practice may be more likely to achieve wide conformity with good practice standards than a focus on identifying, investigating and punishing failures to meet those standards. If so, the first better practice guide would be on how to promote responsible conduct of research, not on how to identify, investigate and respond to poor research conduct. Indeed, responsible institutions could pre-empt any such requirements by unilaterally setting up programs to instruct researchers in responsible conduct, train and embed research practice advisers in strategic research disciplines, reward examples of responsible research that enhance both researcher and institutional reputations, and establish a reliable and comprehensive system of research record keeping. This is an argument that Allen and Israel (in press) make in relation to research ethics.

Australia has an opportunity to adopt a constructive and nationally consistent approach to the active promotion of good research practice. It would be more likely to achieve this with a code that was neither constrained by institutional self-interest nor confined by a punitive focus.

References

Allen, G and Israel, M (in press, 2017) Moving beyond Regulatory Compliance: Building Institutional Support for Ethical Reflection in Research. In Iphofen, R and Tolich, M (eds) The SAGE Handbook of Qualitative Research Ethics. London: Sage.

Hedgecoe, A (2009) ‘A Form of Practical Machinery’: The Origins of Research Ethics Committees in the UK, 1967–1972. Medical History, 53: 331–350.

Contributor
Prof Colin Thomson is one of the Senior Consultants at AHRECS. You can view his biography here and contact him at colin.thomson@ahrecs.com.

This post may be cited as:
Thomson C. (2017, 22 May) Cracking the Code: Is the Revised Australian Code likely to ensure Responsible Conduct of Research? Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/cracking-code-revised-australian-code-likely-ensure-responsible-conduct-research

Ethical Self-Assessment: Excellence in Reflexivity or Corporatisation Gone Mad?

Research ethics and integrity have always been at the forefront of my work, not only because the issues which I explore (self-injury, disability, gender and sexuality) are personal, sensitive and often stigmatised topics, but also because, as a disabled, feminist researcher, I have first-hand experience of the ways in which power, inequality and appropriation are often enmeshed in research methods and outputs. Conventional ethical protocols, which originate in medical guidelines, struggle to fully grasp and incorporate such ethical issues, as well as the dilemmas which emerge from social research more broadly. Ethical protocols rarely prompt a researcher to critically examine how issues such as power and marginalisation play out in social research, or even how to address specific issues emerging from their own project, such as how to respond to requests for specific information, as in Ann Oakley’s (1981) now infamous research with first-time mothers. Ethical review more often consists of tick-box protocols, which ultimately function to restrict who and what can be researched rather than to promote ethical skills, competencies and practices (see Inckle, 2015).

The mismatch between my own ethical sensibilities and the conventions of research ethics was so vast that, during my PhD research, I struggled to conceive how any research could ever be fully ethical, and I became stymied with anxiety and doubt (see Inckle, 2007). Happily, since then, I have joined a research ethics committee; taught research methods and ethics; and conducted, supervised and even participated in social research. As a result, I have become more reconciled with (although no less sensitive to) the possibilities of research being both an ethical and positive experience for all those involved – albeit when based on reflexive ethical sensibilities rather than rigid, pre-defined protocols.

Nonetheless, when I joined my current institution and discovered that ethical review operated on a self-assessment basis (http://www.lse.ac.uk/intranet/researchAndDevelopment/researchDivision/policyAndEthics/ethicsGuidanceAndForms.aspx), my first response was to laugh, a lot. Isn’t the whole point of ethical review, I chortled, to provide oversight and accountability via external reviewer/s? How does simply completing a self-assessment form ensure ethical competency? Isn’t this just another example of the corporatised university gone mad, where academics take on more and more administrative duties in a role of ever-increasing responsibilities and ever-diminishing autonomy?

However, with time, reflection and some experience – all of which are important ethical competencies! – my perspective on ‘ethical self-assessment’ has radically shifted. Firstly, self-assessment is not really a full description of this ethical review process. Student researchers require formal ethical validation from their supervisor, who acts as a proxy for the institution in granting approval; in the case of staff research projects, the line manager takes on this role. Furthermore, in certain situations, such as when required by an external funder or participating body, the researcher is required to bring their work before a university ethics committee proper.

Secondly, while the ethical ‘self-assessment’ form requires the respondent to answer a number of fairly standard questions about their research project – including whether deception will be used, whether the participants are ‘vulnerable’, and whether sensitive or personal issues will be explored – the process nonetheless allows for nuanced and discipline-specific accountability. For example, rather than a ‘yes’ to any of these questions rendering the research unethical and in need of redesign, the researcher is invited to complete another section of the form providing further information which contextualises the project and outlines protective protocols. What is most important is that these justifications and protections are reviewed in a discipline-specific context, thus moving the entire process away from universalised assumptions and locating it within the specific field of the researcher. For example, in a medicalised context a non-clinician interviewing those who are defined as ‘vulnerable’ by virtue of their experience of disability and/or self-injury would be considered highly problematic. Similarly, an insider-researcher with shared experience of such a ‘health’ or disability experience would be considered compromised in their role and unable to ‘objectively’ and reliably conduct the research. However, from a social sciences (and rights-based) perspective, using these kinds of labels to position certain individuals as compromised and/or inadequate researchers is in itself unethical and discriminatory.

Indeed, ethical ‘self-assessment’ has proven beneficial for my current research regarding the health, identity and social impacts of cycling for people with physical disabilities, including its impacts on their experience of themselves as able/disabled. In a standardised context it is likely that a number of ethical problems would be highlighted with this project: exploring sensitive issues amongst a ‘vulnerable’ group; an insider-researcher (I am a disabled cyclist); and quite possibly the assumption that the topic is so anomalous as not to justify the research at all – it is a commonplace assumption (especially among medical professionals) that people with physical disabilities cannot cycle, despite cycling being significantly easier than walking or wheelchair propulsion for many disabled people (http://www.wheelsforwellbeing.org.uk/). Ethical ‘self-assessment’, however, enabled me to position myself, my research participants and the value of the research within a critical social science and rights-based perspective which locates disability as a social identity rather than an individual vulnerability. This does not mean that I have avoided thinking clearly and carefully about ethical protocols. I have taken time to consider the research and its potential impacts at the individual, social and policy levels, and to work to ensure that it is a positive and empowering experience for all those involved (including me). I have also developed my information, consent and researcher commitment forms in line with best practice in feminist and sensitive research (Byrne, 2000; Inckle, 2007; 2015).

Overall, then, my experience suggests that my initial incredulous laughter at the thought of ethical self-assessment was misplaced. In an era of increasingly regimented ethical protocols which unilaterally apply limited, discipline-specific assumptions across the entire research community, and thereby curb the possibilities of who can conduct research, about which topics and with whom, discipline-specific ethical self-assessment provides a new opportunity for contextualised ethical review. This kind of approach, coupled with a nuanced, reflexive approach to the development of ethical competencies, could offer a significant way forward for ethical review in the social sciences.

References

Byrne, A (2000) Researching One An-Other, pp.140-166 in A Byrne and R Lentin (eds) (Re)Searching Women: Feminist Research Methods in the Social Sciences in Ireland. Dublin: Institute of Public Administration.

Inckle, K (2015) Promises, Promises… Lessons in Research Ethics from the Belfast Project and ‘The Rape Tape’ Case, Sociological Research Online 20(1): 6 http://www.socresonline.org.uk/20/1/6.html

Inckle, K (2007) Writing on the Body? Thinking Through Gendered Embodiment and Marked Flesh. Newcastle-upon-Tyne: Cambridge Scholars Publishing.

Oakley, A (1981) Interviewing Women: A Contradiction in Terms, pp.30-61 in H Roberts (ed) Doing Feminist Research. London: Routledge.

Contributor
Dr Kay Inckle
Course Convener in Sociology
LSE
Blog/Bio | K.A.Inckle@lse.ac.uk

This post may be cited as:
Inckle K. (2017, 24 April) Ethical Self-Assessment: Excellence in Reflexivity or Corporatisation Gone Mad? Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/ethical-self-assessment-excellence-reflexivity-corporatisation-gone-mad

Intuitive Research Ethics Training for Novices

The pedagogy of teaching research methods, let alone research ethics, is an under-researched field. In this blog entry, two postgraduate students reflect on their classroom experience of a qualitative research ethics course in which the lecturer engaged his students by using two novice ethnographers’ candid empirical studies as the basis for discussion. While it is more usual for students to be schooled in ethics via lectures and seminars, what was unusual in this course was that the readings were assigned without first introducing the students to ethical concepts such as autonomy, do no harm, respect for participants or beneficence.

After we, Rachel and Louisa, introduced ourselves to the other three members of the course, the lecturer placed his audio recorder on the table and activated the red light before introducing the course. In the midst of the awkward silence, we remember looking over at the other students, feeling confused and uneasy. Little did we realise at the time that our lecturer was reproducing the Asch conformity experiment. As the lecturer outlined the course goals and the assessment, none of us were listening, still blinded by the red glare and feeling unusually perturbed. Finally, after a few minutes, one of us broke the ice by asking the obvious question, “is that ethical?” The lecturer seemed perplexed. Another student translated, “she means do you need our consent for the audio recorder?” “What do you mean by consent?” he asked. Thus began a very different way of learning about research ethics. The lecturer didn’t instruct us on ethics; he believed each person’s moral compass was their guide. His role was provocateur; the class’s role was to locate ethical dilemmas in the readings presented, allowing us to solve them in situ. By asking the question “is this ethical?” we had passed his first test. With our permission, the weekly classroom discussions were recorded, and our actual process of consent was part of learning by doing. Those recordings became the raw data for the co-authored journal article Teaching research ethics as active learning, which details our journey.

Our next substantive task asked us to review a newspaper article describing a situation where a researcher posed as a visiting academic and interviewed staff about their working conditions without informing them that he was their next Vice Chancellor (Lynley 2016).

Lynley, B. (2016, February 3). Lincoln University horrified after undercover encounter with new boss. New Zealand Herald.

We remember thinking, “he should have told them that he was the preferred candidate for VC”. Concerned that this researcher had failed to declare his prospective identity, we classified this act as a conflict of interest. It was only at that moment that we realised the lecturer’s intentions in the opening moments of the class: he had tried to capitalise on the power differential implicit within our group between lecturer and students. The key learning here was to establish “power” as the primary ethical dilemma of research ethics for sociologists. We knew that had any member of our class objected to the recording of our discussions, the audio recorder would have been removed. In the scenario depicted above, by contrast, the future Vice Chancellor failed to extend such an opportunity to his participants. In this way, our learning in this Qualitative Research Ethics class was incremental.

The lecturer then asked us to take the perspective of a resident in the communities that Venkatesh and Goffman describe, and then to share with the class any moments where we felt unease about the relationship between researcher and researched.

Goffman, A. (2014). On the Run: Fugitive Life in an American City. New York: Picador.

Venkatesh, S. (2008). Gang Leader for a Day: A Rogue Sociologist Takes to the Streets. New York: Penguin.

Our responses and our learning are detailed in our article:

Tolich, M., Choe, L., Doesburg, A., Foster, A., Shaw, R. and Wither, D., 2017. Teaching research ethics as active learning: reading Venkatesh and Goffman as curriculum resources. International Journal of Social Research Methodology, pp.1-11.

The lecturer had two other unstated learning objectives. First, he wanted to illustrate the importance of formal ethics review as integral to the research process. Neither Goffman nor Venkatesh had sought formal ethics review, and the class concluded that each would have benefited from doing so. However, the ethics review process would have missed many of the “big ethical moments” that emerged while doing research in the field. The lecturer’s second objective was to encourage students to write about their big ethical moments, reflexively, and we did.

Looking back at our first day of graduate school, the presence of an active audio recorder succeeded in providing us with the framework necessary for learning qualitative ethics. The materials selected for this ethics class, mainly Venkatesh’s and Goffman’s work, allowed us to take our gut feelings one step further: to discuss and debate the ethical dilemmas presented until we were able to reflexively understand that these social science researchers could have improved on their practices. We were therefore able to move from ‘Ah! There is something wrong with this’ to the reasons why it was wrong and how it could have been done better. The critical thinking skills we established as ethics students not only allowed us to dissect the works we read, but also helped us to apply these concepts to our own research practices.

Contributors

Louisa Choe holds a PhD scholarship in sociology at the University of Otago, conducting a mixed methods analysis of “Do the poor pay more?”
louisa.choe@otago.ac.nz

Rachel Shaw holds an MA scholarship in gender studies at the University of Otago, conducting an oral history of the experiences of lesbians during the 1970s and 1980s in New Zealand.
shara267@student.otago.ac.nz

This post may be cited as:
Choe L, and Shaw R. (2017, 16 March) Intuitive Research Ethics Training for Novices. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/intuitive-research-ethics-training-novices.

‘Except as required by law’: Australian researchers’ legal rights and obligations regarding participant confidentiality

Anna Olsen, Research School of Population Health, ANU
Julie Mooney-Somers, Centre for Values, Ethics and the Law in Medicine, University of Sydney
*Neither of us is a lawyer and, as such, our interpretations are those of social scientists and HREC members. Interested lawyers and legal scholars are encouraged to contribute!

Researchers’ promises of confidentiality are often easily and genuinely made. However, our experience in research ethics review (Julie through an NGO-run ethics review committee; Anna through formally constituted university and hospital human research ethics committees), in qualitative research and in teaching qualitative research ethics has led us to think about the limits of these promises.

Australian researchers generally rely on the National Statement (National Health and Medical Research Council, 2015) and Human Research Ethics Committees (HRECs) for guidance around ethical and legal conduct in research. For example, Chapter 4.6 in the National Statement notes that researchers may discover illegal activity and guides researchers and HRECs to consider what researchers might be obliged to disclose in a legal situation and how to best protect (and inform) participants of this threat to confidentiality.

The National Statement is currently under revision (National Health and Medical Research Council, 2016), and the draft submitted for public consultation in late 2016 contains a proposal to include additional information on “Disclosure to third parties of findings or results” in Section 3 of the National Statement. Here the NHMRC explicitly states that: “There can be situations where researchers have a legal, contractual or professional obligation to divulge findings or results to third parties”. That is, researchers should concern themselves not only with the legal implications of revealing potentially illegal activity, but with any instance in which they may be asked to break participant confidentiality.

The recent review of the National Statement extends the NHMRC recommendations around potential data disclosure in a number of ways. It makes much more explicit that researchers (as opposed to HRECs or institutions) are responsible for understanding the risks to participant confidentiality: “researchers should be aware of situations where a court, law enforcement agency or regulator may seek to compel the release of findings or results”. Researchers are expected to anticipate legal risks to participant confidentiality by identifying: “(a) whether, to whom and under what circumstances the findings or results will be disclosed; (b) whether potential participants will be forewarned that there may be such a disclosure; (c) the risks associated with such a disclosure and how they will be managed; and (d) the rationale for communicating and/or withholding the findings or results and the relative benefits and/or risks to participants of disclosure/non-disclosure”. And researchers should advise participants of legal risks to confidentiality and how they will be handled: “(a) have a strategy in place to address this possibility; (b) advise participants of the potential for this to occur; and (c) advise participants as to how the situation will be managed”.

For many researchers in health, legal risks are a vague reality and legal intervention a remote threat. They may feel confident that their research does not and will not uncover illegal activity, or that their data would simply be irrelevant to a legal case. Or they may feel confident that they have taken sufficient steps to protect their participants’ confidentiality by following guidelines; researchers working on illicit drug use, for example.

Many Australian HRECs articulate the NHMRC guidelines on legal risks of disclosure to third parties by requiring that researchers inform participants that any data collected during research will be kept confidential, “except as required by law”. In keeping with the ethical concept of informed consent, participants are thereby warned that researchers are not able to offer unconditional confidentiality. It has become clear to us that the intention of this phrase, to flag the legal limits of confidentiality, is not well understood by researchers (Olsen & Mooney-Somers, 2014).

The National Statement details some aspects of human research that are subject to specific statutory regulation; however, it stresses that compliance with legal obligations is not within the scope of the National Statement: “It is the responsibility of institutions and researchers to be aware of both general and specific legal requirements, wherever relevant”. Moreover, the document directs that it is not the role of an HREC to provide legal advice. It is relatively rare for Australian HRECs to provide explicit guidance on the relevant legal obligations for researchers, including: how they differ across jurisdictions; what protective strategies researchers could employ to better protect participant confidentiality; or how best to inform participants about the risks of legal action. (Some useful HREC-produced resources are Alfred Hospital Ethics Committee, 2010; QUT Office of Research Ethics and Integrity, 2016.) Criminology scholars have (unsurprisingly) considered these issues in their own field (Chalmers & Israel, 2005; Israel, 2004; Israel & Gelsthorpe, 2017; Palys & Lowman, 2014).

We believe there are real risks to participants, researchers and research institutions.

Recent international cases of research dealing with illegal activity becoming subject to legal action include The Belfast Project/The Boston Tapes (BBC News, 2014; Emmerich, 2016; Israel, 2014) and Bradley Garrett’s ethnographic work with urban explorers (Fish, 2014; Times Higher Education, 2014) (see also Israel & Gelsthorpe, 2017). On the whole, legal action could have been anticipated in these cases, as they involved illicit activities and the legal action was driven by law enforcement interest. In some instances, researchers took extensive steps to protect participant confidentiality. In other cases, the promise of absolute confidentiality seems a little naïve (and, in our opinion, perhaps negligent).

Perhaps of more concern are cases in which legal action was instigated by interested others, not law enforcement. Of particular interest to us are recent cases of tobacco companies using Freedom of Information laws in Australia to obtain research data from Cancer Council Victoria on young people’s attitudes to and use of tobacco, and an earlier attempt to seek data on adults from Cancer Council NSW (McKenzie & Baker, 2015; Schetzer & Medew, 2015). As these cases do not involve illegal activity, it is much less likely that researchers could have anticipated the specific legal actions that undermined participant confidentiality. (The tobacco industry has taken similar actions in other countries: Hastings, 2015; McMurtrie, 2002.)

Our point here is that the promise of confidentiality should never be casually made. Researchers have an ethical obligation to think through what “except as required by law” may mean for each particular research project. Although it has been argued elsewhere that, as professionals, researchers should be afforded the same client confidentiality protections as doctors and lawyers (Emmerich, 2016), the current state of affairs is that research data are not (necessarily) safe from legal, contractual or professional obligations to divulge findings or results to third parties.

References

Alfred Hospital Ethics Committee. (2010, Updated September 2016). Alfred Hospital ethics committee guidelines: Research that potentially involves legal risks for participants and researchers. Retrieved from https://www.alfredhealth.org.au/contents/resources/research/Research-involving-legal-risks.pdf

BBC News. (1 May 2014). What are the Boston tapes? Retrieved from http://www.bbc.com/news/uk-northern-ireland-27238797

Chalmers, R., & Israel, M. (2005). Caring for Data: Law, Professional Codes and the Negotiation of Confidentiality in Australian Criminological Research. Retrieved from http://crg.aic.gov.au/reports/200304-09.pdf

Emmerich, N. (9 December 2016). Why researchers should get the same client confidentiality as doctors. Retrieved from https://theconversation.com/why-researchers-should-get-the-same-client-confidentiality-as-doctors-69839

Fish, A. (23 May 2014). Urban geographer’s brush with the law risks sending cold chill through social science. Retrieved from https://theconversation.com/urban-geographers-brush-with-the-law-risks-sending-cold-chill-through-social-science-25961

Hastings, G. (31 August 2015). We got an FOI request from Big Tobacco – here’s how it went. Retrieved from https://theconversation.com/we-got-an-foi-request-from-big-tobacco-heres-how-it-went-46457

Israel, M. (2004). Strictly confidential? Integrity and the disclosure of criminological and socio-legal research. British Journal of Criminology, 44(5), 715-740.

Israel, M. (6 May 2014). Gerry Adams arrest: when is it right for academics to hand over information to the courts? Retrieved from https://theconversation.com/gerry-adams-arrest-when-is-it-right-for-academics-to-hand-over-information-to-the-courts-26209

Israel, M., & Gelsthorpe, L. (2017). Ethics in Criminological Research: A Powerful Force, or a Force for the Powerful? In M. Cowburn, L. Gelsthorpe, & A. Wahidin (Eds.), Research Ethics in Criminology and Criminal Justice: Politics, Dilemmas, Issues and Solutions. London: Routledge.

McKenzie, N., & Baker, R. (15 August 2015). Tobacco company wants schools survey for insights into children and teens. The Age. Retrieved from http://www.theage.com.au/national/tobacco-company-wants-schools-survey-for-insights-into-children-and-teens-20150819-gj2vto.html

McMurtrie, B. (8 February 2002). Tobacco companies seek university documents. Chronicle of Higher Education. Retrieved from http://www.chronicle.com/article/Tobacco-Companies-Seek/6959

National Health and Medical Research Council. (2015). National Statement on Ethical Conduct in Human Research (2007). Retrieved from https://www.nhmrc.gov.au/printpdf/book/export/html/51613

National Health and Medical Research Council. (2016). Public consultation on Section 3 (chapters 3.1 & 3.5), Glossary and Revisions to Section 5: National Statement on Ethical Conduct in Human Research (2007). Retrieved from https://consultations.nhmrc.gov.au/files/consultations/drafts/ns-section3-public-consultation.pdf

Olsen, A., & Mooney-Somers, J. (2014). Is there a problem with the status quo? Debating the need for standalone ethical guidelines for research with people who use alcohol and other drugs. Drug Alcohol Rev, 33(6), 637-642. doi:10.1111/dar.12140

Palys, T., & Lowman, J. (2014). Protecting research confidentiality: What happens when law and ethics collide. Toronto: Lorimer.

QUT Office of Research Ethics and Integrity. (10 November 2016). Participants and illegal activities. Retrieved from http://www.orei.qut.edu.au/human/guidance/illegal.jsp

Schetzer, A., & Medew, J. (20 August 2015). Cancer Council spends thousands fighting big tobacco over children’s survey data. The Sydney Morning Herald. Retrieved from http://www.smh.com.au/national/cancer-council-spends-thousands-fighting-big-tobacco-over-childrens-survey-data-20150820-gj3nh7.html

Times Higher Education. (5 June 2014). Place-hacker Bradley Garrett: research at the edge of the law. Retrieved from https://www.timeshighereducation.com/features/place-hacker-bradley-garrett-research-at-the-edge-of-the-law/2013717.article

Contributors

Anna Olsen is a Senior Lecturer at the Research School of Population Health, Australian National University. She leads a number of qualitative and mixed methods public health research projects, teaches qualitative research methods and supervises post-graduate students. Dr Olsen is an experienced member of formally constituted university and hospital human research ethics committees. https://researchers.anu.edu.au/researchers/olsen-phd-am

Julie Mooney-Somers is a Senior Lecturer in Qualitative Research in the Centre for Values, Ethics and the Law in Medicine, University of Sydney. She is the director of the Masters of Qualitative Health Research at the University of Sydney. An experienced qualitative researcher, teacher and supervisor, she has taught qualitative research ethics and sat on a NGO-run ethics review committee for six years. http://sydney.edu.au/medicine/people/academics/profiles/julie.mooneysomers.php and http://www.juliemooneysomers.com

This post may be cited as:
Olsen A, and Mooney-Somers J. (2017, 24 February) ‘Except as required by law’: Australian researchers’ legal rights and obligations regarding participant confidentiality. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/except-required-law-australian-researchers-legal-rights-obligations-regarding-participant-confidentiality
