ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


Australian Code 2018: What institutions should do next

Gary Allen, Mark Israel and Colin Thomson

At first glance, there is much to be pleased about in the new version of the Australian Code that was released on 14 June. A short, clear document based on principles, with an overt focus on research culture, is a positive move away from tight rules that threatened researchers and research offices alike with sanctions for deviating from standards that might not be appropriate, or even workable, in every context.

The 2007 Code was rightly criticised on several grounds. First, weighing the system down with detailed rules burdened the vast majority with unneeded compliance in response to the recklessness and shady intentions of a very small minority. Second, there was reason to suspect that the detailed rules did not stop the ‘bad apples’. Third, those detailed rules probably did not inspire early career researchers to engage with research integrity and to embed better practice in their research activity. Finally, the Code did little to create a system capable of continuous improvement.

But, before we start to celebrate any improvements, we need to work through what has changed and what institutions and researchers need to do about it. And, then, maybe a quiet celebration might be in order.

Researchers have some fairly basic needs when it comes to research integrity. They need to know what they should do: first, as researchers and research supervisors in order to engage in good practice; second, if they encounter poor practice by another researcher; and, third, if other people complain about their practices.

The 2007 Australian Code offered some help with each of these. In some cases, this ‘help’ was structured as a requirement and over time was found wanting. The 2018 version appreciates that these questions might be basic but that the answers are often complex. The second and third questions are partly answered by the accompanying Guide to Managing and Investigating Potential Breaches of the Code (the Investigation Guide), and we will return to this. The answer to the first question is brief.

The Code begins to address responsibilities around research integrity through a set of eight principles that apply to researchers as well as their institutions: honesty; rigour; transparency; fairness; respect; recognition of the rights of Indigenous peoples to be engaged in research; accountability; and promotion of responsible research practices. Explicit recognition of the need to respect the rights of Aboriginal and Torres Strait Islander peoples did not appear in the 2007 version. There are 13 responsibilities specific to institutions, and 16 specific to researchers. The latter relate to compliance with legal and ethical responsibilities, and require researchers to support a responsible culture of research, undertake appropriate training, provide mentoring, use appropriate methodology and reach conclusions that are justified by the results, retain records, disseminate findings, disclose and manage conflicts of interest, acknowledge research contributions appropriately, participate in peer review and report breaches of research integrity.

In only a few cases might a researcher read these parts of the Code and conclude that the requirements are inappropriate. It would be a little like disagreeing with the Singapore Statement (the one on research integrity, not the recent Trump–Kim output). Mostly, the use of words like ‘appropriate’ within the Code (it appears three times in the Principles, twice in the responsibilities of institutions and five times in the responsibilities of researchers) limits the potential for particular responsibilities to be over-generalised from one discipline and inappropriately transferred to others.

There are some exceptions. Some researchers may find it difficult to ‘disseminate research findings responsibly, accurately and broadly’, particularly if they are subject to commercial-in-confidence restrictions or public sector limitations. And we know that there are significant pressures on researchers to shape the list of authors in ways that may have little to do with ‘substantial contribution’.

For researchers, the Code becomes problematic if they go to it seeking advice on how they ought to behave in particular contexts. The answers, whether they were good or bad in the 2007 Code, are no longer there. So, a researcher seeking to discover how to identify and manage a conflict of interest, or what criteria ought to determine authorship, will need to look elsewhere. Institutions will need to broker access to this information, either by developing it themselves or by pointing to good sectoral advice from professional associations, international bodies such as the Committee on Publication Ethics, or the Guides that the NHMRC has indicated it will publish.

We are told that the Australian Code Better Practice Guides Working Group will produce guides on authorship and data management towards the end of 2018 (so, hopefully, at least six months before the deadline of 1 July 2019 for institutions to implement the updated Australian Code). However, we do not know which other guides will be produced, who will contribute to their development, nor, in the end, how useful they will be in informing researcher practice. We would hope that the Working Group is well advanced on the remaining suite if it is to collect feedback and respond to it before that deadline.

There are at least eight areas where attention will be required. We need:

1. A national standard data retention period for research data and materials.

2. Specified requirements about data storage, security, confidentiality and privacy.

3. Specified requirements about the supervision and mentoring of research trainees.

4. A national standard on publication ethics, including such matters as republication of a research output.

5. National criteria to determine whether a contributor to a research project should, or should not, be listed as an author of a research output.

6. Other national standards on authorship matters.

7. Specified requirements about a conflicts of interest policy.

8. Prompts for research collaborations between institutions.

For each of those policy areas the following matters should be considered:

1. Do our researchers need more than the principle that appears in the 2018 Australian Code?

2. If yes, is there existing material upon which an institution’s guidance material can be based?

3. Who will write, consider and endorse the guidance material at a national or institutional level?

Many institutions will conclude it is prudent to wait until late 2018 to see whether the next two good practice guides are released and to discover how much they cover. Even so, institutions will need to transform these materials into resources that can be used in teaching and learning at the level of the discipline, and to do so in a way that builds researchers’ commitment to responsible conduct and their ethical imaginations, rather than testing them on their knowledge of compliance matters.

Managing and Investigating Potential Breaches

The Code is accompanied by a Guide to Managing and Investigating Potential Breaches of the Code (the Investigation Guide). The main function of this Guide is to provide a model process for managing and investigating complaints or concerns about research conduct. However, before examining how to adopt that model, institutions need to make several important preliminary decisions.

First, to be consistent with the Code, the Guide states that institutions should promote a culture that fosters and values responsible conduct of research generally and develop, disseminate, implement and review institutional practices that promote adherence to the Code. Both of these will necessitate the identification of existing structures and processes and a thorough assessment to determine any changes that are needed to ensure that they fulfil these responsibilities.

This means that institutions must assess how their processes conform to the principles of procedural fairness and to the listed characteristics of such processes. The procedural fairness principles are described as:

  • the hearing rule – the opportunity to be heard;
  • the rule against bias – decision-makers must have no personal interest in the outcome;
  • the evidence rule – decisions must be based on evidence.

The characteristics require that an institution’s processes are: proportional; fair; impartial; timely; transparent; and confidential. A thorough review and, where necessary, revision of current practices will be needed to demonstrate conformity with the Guide.

Second, when planning how to adopt the model, institutions need to consider the legal context as the Guide notes that enterprise bargaining agreements and student disciplinary processes may prevail over the Guide.

Third, the model depends on the identification of six key personnel with distinct functions. Some care needs to be taken to match the designated roles with the appropriate personnel in an institution’s research management structure, even if their titles differ from those in the model. The six personnel are:

  • a responsible executive officer, who has final responsibility for receiving reports and deciding on actions;
  • a designated officer, appointed to receive complaints and oversee their management;
  • an assessment officer or officers, who conduct preliminary assessments of complaints;
  • research integrity advisers, who have knowledge of, and promote adherence to, the Code and offer advice to those with concerns or complaints;
  • a research integrity office, whose staff are responsible for managing research integrity matters;
  • a review officer, who has responsibility for receiving requests for procedural review of an investigation.

Last, institutions must decide whether to use the term ‘research misconduct’ at all and, if so, what meaning to give to it. Some guidance is offered in a recommended definition of the term but, as noted above, this will need to be considered in the legal contexts of EBAs and student disciplinary arrangements.

Conclusion

The update to the Code provides a welcome opportunity to reflect on a range of key matters to promote responsible research. The use of principles and responsibilities and the style of the document offers a great deal of flexibility that permits institutions to develop their own thoughtful arrangements. However, this freedom and flexibility comes with a reciprocal obligation on institutions to establish arrangements that are in the public interest rather than ‘just’ complying with a detailed rule. We have traded inflexibility for uncertainty; what comes next is up to all of us.


The Contributors
Gary Allen, Mark Israel and Colin Thomson – senior consultants AHRECS

This post may be cited as:
Allen G., Israel M. and Thomson C. (21 June 2018) Australian Code 2018: What institutions should do next. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/australian-code-2018-what-institutions-should-do-next

We invite debate on issues raised by items we publish. However, we will only publish debate about the issues that the items raise and expect that all contributors model ethical and respectful practice.

The inclusion of retracted trials in systematic reviews: implications for patients’ safety

After a paper has been through peer review and been published, it is the obligation of the scientific community to scrutinise the author’s work. If a serious error or misconduct is spotted, the paper should be retracted and the work removed from the evidence base. Over the past ten years there has been an exponential growth in the number of retracted papers. Much of the increase may be explained by technology that has made it easier to spot duplicate publications or fabricated data, for example. Once a paper is retracted, researchers should not cite it in future publications; this, however, is not what happens. Many papers continue to be cited long after they have been retracted. Retraction Watch maintains a list of the ten most highly cited retracted papers. The paper that currently holds the number one spot has been cited 942 times after retraction. It is plausible that researchers are using retracted work to justify further study. This may be the scientific equivalent of “fruit of the poisonous tree”: if research is based on tainted work, then that work is itself tainted. Authors may also include retracted work in systematic reviews and meta-analyses. In clinical disciplines – such as nursing or medicine – this is particularly worrisome.

Clinical practice should be based on the best available evidence, that is, on systematic reviews. If a review includes a retracted paper, the resulting meta-analysis is contaminated and the recommendations for practice emerging from the study are unsound, putting patients at risk because clinicians are relying on flawed evidence. To date we have found five examples in the nursing literature where this has happened. We have written to the journal editors to advise them of the error that the authors have made. In our minds this is a cut-and-dried issue. The author has clearly made an error, potentially a serious one, and it will need to be resolved: either the editor will need to issue an erratum or potentially retract the review (and there are examples in the literature where this has happened).

There is a second way in which a systematic review may include research that is retracted: when the authors of the review cite a paper that is retracted after the review is published. A more nuanced debate is perhaps required here, given that the review author has not made a mistake. Would it not be punitive to the author – potentially damaging their career prospects – to retract a review when they have not erred? However, the inclusion of a paper that has subsequently been retracted has the potential to affect effect sizes in the meta-analysis and/or the review’s conclusions. My group undertook a study to explore how often retracted clinical trials were included in systematic reviews. The answer: more often than you might think. We followed up the citations of eleven retracted nursing trials and determined that they were included in 23 systematic reviews. Currently there is no mechanism to alert authors (or publishing editors) that their systematic review includes a study that has subsequently been retracted. We suspect, but do not know for certain, that in medicine and the allied health professions there are many more systematic reviews that include retracted studies. Clinical practice guidelines, such as those produced by the National Institute for Health and Care Excellence (NICE), rely on evidence from systematic reviews. And this is where our observation flips from being an interesting intellectual exercise to one that may affect patient safety. Could it be that patients are being exposed to ineffective treatments because guidelines are based on flawed reviews?

Journal editors, reviewers and researchers need to be mindful that systematic reviews may contain citations that have been retracted. There is a compelling argument that the editor who issues a retraction notice for a paper also has a duty to alert authors citing that work of the retraction decision. It might also be argued that part of the peer review process should be checking that included references (particularly those included in a meta-analysis) have not been retracted. Finally, not only do review authors need to ensure that they have not cited retracted papers, they also have a responsibility to periodically check the status of included studies (something the Cochrane Collaboration encourages authors to do).

The inclusion of retracted trials is a threat to the integrity of systematic reviews. Consideration needs to be given to how the scientific community responds to the issue with the ultimate goal of keeping patients safe.

Professor Richard Gray is the editor of the Journal of Psychiatric and Mental Health Nursing. No other conflict of interest declared.

Contributor
Richard Gray PhD
Professor of Clinical Nursing Practice, La Trobe University, Melbourne, Australia
Richard’s University profile | r.gray@latrobe.edu.au

This post may be cited as:
Gray R. (26 May 2018) The inclusion of retracted trials in systematic reviews: implications for patients’ safety. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/the-inclusion-of-retracted-trials-in-systematic-reviews-implications-for-patients-safety

Stop centring Western academic ethics: deidentification in social science research – Anna Denejkina

This post discusses issues that arise in deidentifying marginalised research participants, or research participants who request to be identified, in the publication of qualitative research. As my research is mixed-method (quantitative and multi-method qualitative), it included several data collection techniques and analyses. For this discussion, I will focus specifically on the face-to-face and Skype interviews I conducted with participants in Russia and the United States.

My PhD study investigates the intergenerational transmission of combat-related trauma from parent to child, focusing on the Soviet–Afghan war, 1979–89. This research includes interviews with Soviet veterans and family members of veterans; it was these interviews that raised questions of participant erasure and agency. Of the 12 face-to-face and Skype interview participants, one requested complete deidentification; one requested that their real name not be used but that their location and other identifying details remain; two requested that only their first names be used and that their location and other identifying details remain; the eight remaining participants asked to be fully identified, with some sending me photographs of themselves and their families for inclusion in research publications. Given the social and political sensitivity that persists in Eastern Europe around discussion of the Soviet invasion of Afghanistan, I had to consider, and discuss with the participants who asked to be identified, the issue of their safety.

My research participants are marginalised by virtue of the topic of my research, the Soviet–Afghan war, and by the state’s ongoing silencing of them during and after the war:

To take just two examples: in the hope of obscuring the true impact of the war, some local authorities refused to allow special areas in cemeteries to be set apart for the graves of soldiers killed in Afghanistan; while others forbade the cause and place of death to be stated on gravestones or memorial shields. (Aleksievich, Whitby & Whitby 1992, p.5–6)

Given academia’s broad-stroke standards of deidentifying research participants, we must review the ethics of this practice, as it can promote and perpetuate the erasure of marginalised participants and the silencing of their voices. Some textbooks on ethics in the social sciences approach anonymity and deidentification from the angle that anonymity is a basic expectation of a research participant, without elaborating that anonymity is not always desirable or ethical (see, for example, Ransome 2013), essentially replicating the medical model of human research ethics developed for the regulation of biomedical research in the United States (Dingwall 2016, p.25). Such an approach does not address the problems of presenting anonymity as the status quo in social research, and it makes a sweeping – and Western academic – generalisation that anonymity is one of the vital assurances researchers must give their participants to keep within their duty of care (that is, that researchers have at least some obligation to care for their research participants).

This approach to research ethics negates participant agency, particularly for those participants who request that they be identified in research. Furthermore, forced anonymity can be an act of disrespect towards participants (Mattingly 2005, p.455–456) who may have already experienced invisibility and who are then further erased through anonymity by researchers (Scarth & Schafer 2016, p.86); for example, “in some Australian and, in particular, some Indigenous cultures, failing to name sources is both a mark of disrespect and a sign of poor research practice” (Israel, Allen & Thomson 2016, p.296).

As researchers, we must also ask whether presenting this approach as a vital tenet of social research can become a damaging rule of thumb for new researchers, who might therefore not question the potential undermining of participant agency, and who might apply deidentification unethically as a sweeping regulation within their research without consideration for the individual situations of their participants. This is part of the problem created by applying a medical model of ethics assessment to the social sciences, in which the prevailing interpretation is that deidentification is also required within social research, whereas in reality the specific agreements between the researcher and the research participant must be honoured.

The ethical dilemma, therefore, is: can researchers ethically deidentify participants at the expense of the participants’ agency, potentially perpetuating the historical and symbolic erasure of their voices and experiences? I argue that, based on research design and data collection methods, this decision-making process is an ‘ethics in practice’ and must be approached in context, individually for each study and for each individual participant.

As scholars, we want to minimise or eradicate harm that might come to our participants through our research. While we think “in advance about how to protect those who are brought into the study” (Tolich 2016, p.30), this must be a continual process throughout the project, in which we “work out the meaning of what constitutes ethical research and human rights in a particular context” (Breckenridge, James & Jops 2016, p.169; also see Ntseane 2009). This is important to note because protection does not refer only to participants but also to others connected to them. For example, the use of a real name at the request of a participant may expose family members who were not part of the research.

Consequentialist approaches to ethics suggest that “an action can be considered morally right or obligatory if it will produce the greatest possible balance of good over evil” (Israel 2015, p.10; also see Reynolds 1979). This is an approach we could take to issues around deidentification; however, it requires researchers to know what is good or bad. In studies like mine, this would mean knowing (or making an attempt, or an assumption, to know) what is good or bad for my research participants. This is infantilising, and it places the researcher above the research participant by making the final call on the participant’s behalf, which removes participant agency: if we can assume participants are autonomous during the research consent process, we must also assume that they are autonomous in making decisions about their identification (Said 2016, p.212). Additionally, this action may be culturally insensitive, given that Western human research ethics committees follow Western cultural guidelines, centring the dominance of Western academia.

The ethical issues I faced during my PhD research highlight why researchers cannot take a sweeping approach to deidentification in qualitative research – not even within a single study. ‘Ethics in practice’ means that each participant’s situation is analysed individually, with the issues of erasure, safety and agency weighed against one another to reach a conclusion. I propose that if this conclusion is at odds with the preference of the participant, it must then be taken back to the participant for further discussion. Not implementing this aspect of ‘ethics in practice’ goes against a core tenet of social science ethics: that we must avoid doing long-term and systemic harm, both of which come through erasure and silencing. We must also remember that “any research project has the potential to further disenfranchise vulnerable groups” (Breckenridge, James & Jops 2016, p.169), and that ignoring the wishes of participants regarding their identification because of a Western model of ethics can cause further damage to these groups.

References:
Aleksievich, S., Whitby, J. & Whitby, R. 1992, Zinky Boys: Soviet voices from a forgotten war, Chatto & Windus, London.

Breckenridge, J., James, K. & Jops, P. 2016, ‘Rights, relationship and reciprocity: Ethical research practice with refugee women from Burma and New Delhi, India’, in K. Nakray, M. Alston & K. Whittenbury (eds), Social Sciences Research Ethics for a Globalizing World: Interdisciplinary and Cross-Cultural Perspectives, Routledge, New York, pp. 167–186.

Dingwall, R. 2016, ‘The social costs of ethics regulation’, in W.C. van den Hoonaard & A. Hamilton (eds), The Ethics Rupture, University of Toronto Press, Toronto, pp. 25–42.

Israel, M., Allen, G. & Thomson, C. 2016, ‘Australian research ethics governance: Plotting the demise of the adversarial culture’, in W.C. van den Hoonaard & A. Hamilton (eds), The Ethics Rupture, University of Toronto Press, Toronto, pp. 285–216.

Mattingly, C. 2005, ‘Toward a vulnerable ethics of research practice’, Health: An Interdisciplinary Journal for the Social Study of Health, Illness and Medicine, vol. 9, no. 4, pp. 453–471.

Ntseane, P.G. 2009, ‘The ethics of the researcher-subject relationship: Experiences from the field’, in D.M. Mertens & P.E. Ginsberg (eds), The Handbook of Social Research Ethics, 1st edn, Sage, Thousand Oaks, pp. 295–307.

Ransome, P. 2013, ‘Social research and professional codes of ethics’, Ethics and Values in Social Research, Palgrave Macmillan, Basingstoke, pp. 24–53.

Said, D.G. 2016, ‘Transforming the lens of vulnerability: Human agency as an ethical consideration in research with refugees’, in K. Nakray, M. Alston & K. Whittenbury (eds), Social Sciences Research Ethics for a Globalizing World: Interdisciplinary and Cross-Cultural Perspectives, Routledge, New York, pp. 208–222.

Scarth, B. & Schafer, C. 2016, ‘Resilient Vulnerabilities: Bereaved Persons Discuss Their Experience of Participating in Thanatology Research’, in M. Tolich (ed.), Qualitative Ethics in Practice, Left Coast Press, Walnut Creek, CA, pp. 85–98.

Tolich, M. 2016, ‘Contemporary Ethical Dilemmas in Qualitative Research’, in M. Tolich (ed.), Qualitative Ethics in Practice, Left Coast Press, Walnut Creek, CA, pp. 25–32.

Statement of interest
No interests to declare.

Contributor
Anna Denejkina | Casual Academic and PhD candidate in the Faculty of Arts and Social Sciences, UTS, researching intergenerational trauma transmission | Staff profile | Anna.Denejkina@uts.edu.au

This post may be cited as:
Denejkina A. (24 May 2018) Stop centring Western academic ethics: deidentification in social science research. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/stop-centring-western-academic-ethics-deidentification-in-social-science-research-anna-denejkina

Can Your HREC Benefit from Coaching?

Atul Gawande, an American surgeon and researcher, sparked a debate in the medical community seven years ago with his New Yorker article Personal Best, in which he explored the benefits of coaching. The best athletes in the world, he reasoned, rather than sitting on their hard-earned laurels, employ coaches as a matter of course, to scrutinise and review their game, work on imperfections and amplify their strengths. He discovered that many elite musicians do, too. So why did other types of professionals not consider the advantages of coaching as an option for improving performance? Professionals, he concluded, are educated in a discipline, and then, their learning complete, sent out into the world to get on with it.

Much the same, we at AHRECS have found, are many Human Research Ethics Committees. In Australia, members are engaged for their “relevant skills and/or expertise”, as required by para 5.1.28 of the National Statement on Ethical Conduct in Human Research, but exactly what those are is not spelled out, and a lack of volunteers sometimes means institutions will settle for a person who merely falls within the membership criteria in para 5.1.30. While a wise recruiter of HREC members will raise questions about familiarity with ethical frameworks and group decision-making dynamics, the National Statement does not mandate the possession of skills in either.

(The situation and practice in New Zealand are quite different and warrant separate discussion. Martin Tolich and Barry Smith will write a separate post on this in a future edition.)
It is to be hoped that a new HREC, and new HREC members, will receive an induction that covers both. Para 5.1.29 of the National Statement, after all, requires both the induction and the ongoing education of members. However, the most recent available NHMRC annual Report on the Activity of Human Research Ethics Committees and Certified Institutions (for 2016) indicates that fully one-third (33%) of HRECs did not provide new members with an induction, and that over a quarter (27%) had provided none of their members with any training at all over the past year. Of the 77% of HRECs which reported that “at least one” member had received training during the year, it is not reported how many more than one had, leaving open the possibility that the figures are considerably worse. And from my own experience, some HRECs do not even provide that training themselves, but rely on diligent members to source and pay for their own.
It is no wonder, therefore, that many HRECs struggle to reach a common understanding of the concepts in the National Statement, become bogged down in detail, or adopt a risk-averse, adversarial culture that stifles the progress of their institution’s research. They need a fresh set of eyes and an injection of new thinking that will not detract from their existing body of experience and expertise, but will challenge and build upon it.
Gawande’s article provides trenchant examples of the benefits he has experienced as a surgeon from introducing coaching to his practice – beginning, simply, with engaging another experienced surgeon to observe and comment on his surgeries. He recently revisited the issue in his December 2017 TED Talk “Want to get great at something? Get a coach”. In this presentation Gawande shows compelling instances of the improvements coaching has brought to teams of health practitioners, not only in terms of their expertise but in their group culture and strategic problem-solving skills.
There are a number of options for institutions seeking to support their HRECs through coaching. Mainstream executive coaches may offer assistance in the areas of chairing and group decision-making. More specialist research ethics expertise may be sourced from the HRECs of other institutions, or from previous Chairs. The AHRECS team is committed to lifelong learning and improvement, and we, too, offer practical, cost-effective coaching to facilitate real improvement for committees at any level of sophistication. Whatever solution you choose, significant value can be gained by having impartial, experienced experts observe your HREC in action, make practical and nuanced suggestions for improvement, and identify obstacles to best practice. As a bonus, HRECs can also use coaching to meet the National Statement’s requirements for the continuing education of members.
The world’s best professionals recognise the advantages that coaching can bring. HRECs can, and should, seek those benefits too.

Contributor
Sarah Byrne, AHRECS senior consultant | Sarah’s AHRECS profile | sarah.byrne@ahrecs.com

This post may be cited as:
Byrne S. (22 May 2018) Can Your HREC Benefit from Coaching? Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/can-your-hrec-benefit-from-coaching
