Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS), ACN 101 321 555


Research Ethics Monthly (ISSN 2206-2483)

Institutional Responsibilities


How can we get mentors and trainees talking about ethical challenges?


When it comes to research integrity, the international community often tends to focus on the incidence of research misconduct and the presumption that the remedy is more training in the responsible conduct of research (RCR). Unfortunately, the published evidence suggests that these perceptions are largely wrong. Specifically, formal training in courses and workshops is much less likely to be a factor in researcher behavior than what is observed and learned in the context of the research environment (Whitbeck, 2001; Faden et al., 2002; Kalichman, 2014).

These research findings should not be surprising. Most of an academic or research career is defined by actually conducting research and working with research colleagues. The idea that a single course or workshop will somehow insulate a researcher from unethical or questionable behavior, or arm them with the skills to deal with such behavior, would seem to be a hard case to make. That isn’t to say that there is no value in such training, but the possible impact is likely far less than what is conveyed by the research experience itself. With that in mind, the question is how, if at all, can research mentors be encouraged to integrate ethical discussions and reflections into the context of the day-to-day research experience?

With this as a challenge, we have been testing several approaches at UC San Diego in California to move conversations about RCR out of the classroom and into the research environment. With support from the US National Science Foundation, this project began with a 3-day conference comprising some 20 leaders in the field of research integrity (Plemmons and Kalichman, 2017). Our goal was to develop a curriculum for a workshop in which participating faculty would acquire tools and resources to incorporate RCR conversations into the fabric of the research environment. Based on consensus from the conference participants, a curriculum was drafted, refined with input from experts and potential users, and finalized for pilot testing. Following two successful workshops for faculty at UC San Diego, the curriculum was rolled out for further testing nationally with interested faculty.

The focus of the workshop curriculum was five strategies participating faculty might use with members of their research groups. These included discussions revolving around (1) a relevant professional code of conduct, (2) creation of a checklist of things to be covered at specified times with all trainees, (3) real or fictional research cases defined by ethical challenges, (4) creation of individual development plans defining roles and responsibilities of the mentor and trainees, and (5) developing a group policy regarding definitions, roles, and responsibilities with respect to some dimension of practice particularly relevant to the research group. In all cases, the goal is to create opportunities that will make conversations about the responsible conduct of research an intentional part of the normal research environment.

The results of this project were encouraging, but still leave much to be done (Kalichman and Plemmons, 2017). Workshops were provided for over 90 faculty, who were strongly complimentary of the program and the approach. In surveys of the faculty and their trainees after the workshops, there were high levels of agreement that the five proposed strategies were feasible, relevant, and effective. However, while use of all five strategies was high post-workshop, we were surprised to find that trainees reported high levels of use pre-workshop as well. In retrospect, this should have been expected. Since the workshops were voluntary, it is likely that the faculty who attended were largely those already positively disposed to discussing responsible conduct with their trainees. One question worth asking is whether repeating workshops for interested faculty alone will have a cascading effect over time, drawing in increasing numbers of faculty and serving to shift the culture. It also remains to be tested whether these workshops would be useful if faculty were required to attend.

For those interested in implementing these workshops in their own institutions, the curriculum, template examples and an instructor’s guide are all available on the Resources for Research Ethics Education website at:


Faden RR, Klag MJ, Kass NE, Krag SS (2002): On the Importance of Research Ethics and Mentoring. American Journal of Bioethics 2(4): 50-51.

Kalichman M (2014): A Modest Proposal to Move RCR Education Out of the Classroom and into Research. Journal of Microbiology & Biology Education 15(2): 93-95.

Kalichman MW, Plemmons DK (2017): Intervention to Promote Responsible Conduct of Research Mentoring. Science and Engineering Ethics. doi: 10.1007/s11948-017-9929-8. [Epub ahead of print]

Plemmons DK, Kalichman MW (2017): Mentoring for Responsible Research: The Creation of a Curriculum for Faculty to Teach RCR in the Research Environment. Science and Engineering Ethics. doi: 10.1007/s11948-017-9897-z. [Epub ahead of print]

Whitbeck C (2001): Group mentoring to foster the responsible conduct of research. Science and Engineering Ethics 7(4):541-58.

Michael Kalichman – Director, Research Ethics Program, UC San Diego

Dena Plemmons – University of California, Riverside

This post may be cited as:
Kalichman M. and Plemmons D. (2017, 21 December) How can we get mentors and trainees talking about ethical challenges? Research Ethics Monthly. Retrieved from:

Professional Development across the Term of an HREC Committee Member


AHRECS has considerable experience working with universities, hospitals, research institutions, and government and non-government organisations to care for and build the capacity of their HREC members across the entire term of their appointment. We start with the needs of our clients and offer support from recruitment all the way through to running an exit interview.

Many HRECs have quite simple, manual-based inductions; we help HRECs create something more welcoming and interactive that takes members from first contact to the point where they can contribute effectively to a committee. There is a significant difference between delivering a single ‘training session’ and creating a suite of professional development activities that spans members’ two- to three-year terms, which might include dedicated annual PD and strategy sessions and incorporate ongoing PD into each HREC meeting.

We can:

  • help recruit expert external members to meet the needs of specific HRECs
  • create interactive and multi-media induction and orientation materials
  • introduce members to the broader literature on research ethics
  • create material and run professional development sessions tailored to the specialist roles of particular HRECs
  • evaluate the performance of the HREC and provide feedback to the HREC and its host institution
  • offer exit interviews to HREC members stepping down from their role, and then…
  • help recruit replacement members to HRECs

We have provided elements of such services in Australia, Canada, Mauritius, New Zealand, Taiwan, the United Kingdom, the United States and Vietnam, for new and established, small and large institutions and consortia of research organisations.

Prof. Mark Israel, AHRECS senior consultant
AHRECS profile page

This post may be cited as:
Israel M. (2017, 22 June) Professional Development across the Term of an HREC Committee Member. Research Ethics Monthly. Retrieved from:

Strategies for resolving ethically ambiguous scenarios


During the fall of 2013 and spring of 2014, I traveled to numerous universities across the United States and England to conduct in-depth interviews with physicists as part of the Ethics among Scientists in International Context Study, a project led by my colleague Elaine Howard Ecklund at Rice University (1). The study sought to find out how physicists approach ethical issues related to research integrity in their day-to-day work.

My colleagues and I began our interviews with a relatively straightforward question: “What does it mean to you to be a responsible scientist in your role as a researcher?” For many scientists, responsibility in research is a relatively black and white question: don’t falsify, don’t fabricate, and don’t plagiarize. And if one looks to the literature, scholarship and policy also tend to focus on these black and white instances of misbehavior because they are unambiguous and deserving of stern sanctions.

As our research unfolded, Ecklund and I began to question whether a black and white view of misconduct is overly simplistic. From a sociological perspective, whether scientists reach consensus about the meaning of unethical conduct in science is debatable because the same behavior in a given circumstance may be open to different ethical interpretations based on the statuses of the stakeholders involved and the intended and actual outcomes of the behavior. Our research ultimately demonstrated that the line separating legitimate and illegitimate behavior in science tends to be gray, rather than black and white—a concept we refer to as ethical ambiguity.

For the purpose of illustration, consider a scenario in which a scientist receives funding for one project and then uses a portion of that money to support a graduate student on a study unrelated to the grant. Many scientists would view this practice as a black and white instance of unethical conduct. But some scientists we interviewed viewed this as an ethically gray scenario, indicating that the use of funds for reasons other than those specified in a grant is justifiable if it means supporting the careers of their students or keeping their lab afloat. In these and other circumstances, scientists cope with ambiguity through decisions that emphasize being good over the “right” way of doing things.

What strategies help resolve these and other ethically ambiguous scenarios?

Frameworks for ethical decision-making offer some, but in my view limited, help. Kantian deontological theories assert that one should follow a priori moral imperatives related to duty or obligation. A deontologist would argue, for example, that a scientist has an obligation to acknowledge the origins of her work. And policies regarding plagiarism have a law-like quality. But how far back in the literature should one cite prior work? Deontology does not help us much in this example. Another framework, consequentialism, would suggest that in an ethically ambiguous scenario, a scientist should select the action that has the best outcomes for the most people. But like other individuals, scientists are limited in their ability to weigh the outcomes of their actions (particularly as it relates to the long-term implications of scientific research).

One ethical decision-making framework, virtue ethics, does offer some help in resolving ambiguity. Virtue ethics recognizes that ethical decision-making requires consideration of circumstances, situational factors, and one’s motivations and reasons for choosing an action, not just the action itself. It poses the question, “what is the ethically good action a practically wise person would take in this circumstance?” For individual scientists, this may mean consulting with senior and trusted colleagues; thinking through such circumstances with others is always a valuable practice.

A pre-emptive strategy for helping scientists resolve ethically ambiguous scenarios is to create cultures in which ambiguity can be recognized and discussed. For their part, the physicists we spoke with do not view ethics training as an effective way to create such a culture. As one physicist we spoke with explained, “It’s the easy thing to say, oh make a course on it. Taking a physics course doesn’t make me a good physicist. Taking a safety course doesn’t make me safe. Taking an ethics course doesn’t make me ethical.”

There may be merit to this physicist’s point. Nevertheless, junior scientists must learn—likely through the watching, talking, and teaching that accompanies research within a lab—that the ethical questions scientists encounter are more likely to involve ambiguous scenarios, where the appropriate action is unclear, than scenarios related to fabrication, falsification, and plagiarism.

David R. Johnson, a sociologist, is an assistant professor of higher education at the University of Nevada, Reno, in the United States. His first book, A Fractured Profession: Commercialism and Conflict in Academic Science, is published by Johns Hopkins University Press.

This post may be cited as:
Johnson D. (2017, 21 June) Strategies for resolving ethically ambiguous scenarios. Research Ethics Monthly. Retrieved from:

(1) National Science Foundation grant #1237737; Elaine Howard Ecklund, PI; Kirstin RW Matthews and Steven Lewis, Co-PIs.

Cracking the Code: Is the Revised Australian Code likely to ensure Responsible Conduct of Research?


The Australian Code for the Responsible Conduct of Research is presently under review. Issued jointly in 2007 by the National Health and Medical Research Council, the Australian Research Council and Universities Australia, the current code is a 41-page document divided into two parts. Part A, comprising some 22 pages, sets out the responsibilities of institutions and researchers for conducting sponsored research responsibly. Part B, comprising approximately 11 pages, provides advice on procedures for identifying and investigating instances of the conduct of research in which those responsibilities have not been fulfilled.

The current proposal is to replace this document with a five-page statement of eight principles of responsible research conduct and two lists of responsibilities, again of institutions and researchers, together with a 25-page guidance document (the first of several) of preferred procedures for the identification and investigation of research conduct that has not adhered to the responsibilities set out in the five-page code.

Among the innovations in these changes, other than a significant reduction in the size of the document, is the proposal that the expression ‘research misconduct’ not be used in the guide on identification and investigation but be replaced by the expression ‘breach’. An important reason given for this proposal is the avoidance of conflict with the requirements of institutional enterprise bargaining agreements (EBAs).

The scale of the proposed changes is likely to generate extensive debate, and this will have been evident in the course of the consultation period that closed earlier this year. The consultation process conformed to the minimal requirements of sections 12 and 13 of the NHMRC Act. This is a process that publicises, by formal means, drafts of proposed changes to which responses are sought. Current practice is to prefer the provision of responses by electronic means and to require respondents to answer questions determined by the Council. The passivity and formality of the process tend to attract and privilege better-resourced interests. In some of the published debate that occurred during the consultation period, much attention was paid to the change in scale and to the proposal to refer not to research misconduct but only to breaches. This level of discussion risks ignoring several underlying systemic questions, or assuming the answers to them. The purpose of this brief opinion is to tease out those questions.

The key premise of these remarks on the existing Code and any revision is that the Code constitutes a form of regulation of research conduct. With this premise comes a centrally important question: what are the aims of the regulation of this activity?

The apparent aims of the revision are the definition of responsible research conduct and the relevant responsibilities, and the identification, disclosure and investigation of failures to conduct research responsibly.

Underlying these aims lie some broader and deeper considerations. These include whether the purpose to be served by regulation of research is to:

  • protect the reputation of research;
  • prevent waste and misguided work that can follow from relying on irresponsible and inaccurate research;
  • protect the reputation of research institutions;
  • prevent the waste – or even the risk of waste – of public research funds;
  • penalise those who fail to fulfil their research responsibilities, whether the failures are on the part of institutions or individual researchers;
  • protect the public interest in research by promoting productive use of public research funds, and rewarding responsible researchers and institutions.

It is a regulatory situation not unlike that faced by environmental protection through the 1990s and beyond, and by other areas such as oil rig and building safety in the UK. One lesson from these experiences is that where the aims of regulation are the protection of the environment or the safety of buildings or oil rigs, those aims are more likely to be achieved by giving those who conduct the relevant activities the opportunity to devise their own methods for achieving them, methods that are then assessed by a responsible authority against a set of standards. The shift from the tight prescription of safety standards to some form of well-defined and supervised self-regulation appears to have been successful in achieving regulatory aims.

The choice of which of the above purposes is to be served will have a direct and profound effect on the methods to be used. For example, if the purpose were the protection of the reputation of research institutions, it would not be surprising to extend a significant degree of autonomy to institutions to set up their own procedures and methods for promoting responsible conduct and so establishing their good reputation. However, there would be an incentive for institutions not to publicly disclose instances of irresponsible research but to manage these institutionally. Reliance on the need to conform to enterprise bargaining agreements might lend support to justification of such non-disclosure.

If the purpose were to penalise those institutions or researchers who fail to fulfil relevant responsibilities for responsible research conduct, the system would need to define those responsibilities with some precision, so that the definitions could be made enforceable, and to establish an agency with appropriate investigation powers and sanctioning authority to identify, investigate and reach decisions as to whether relevant responsibilities had or had not been fulfilled.

A relevant regulatory model may not be that of criminal prosecution but rather of corruption investigation. There is a public interest that motivates the establishment and operation of anti-corruption agencies. The outcomes of their enquiries can lay the foundation for individual punishment of those found ‘guilty’ of corrupt behaviour, and those proceedings are then taken up by different state agencies. Research integrity policy can be seen to have similar aims: first, to protect the public interest by empowering an independent agency to uncover corrupt conduct, and, second, following such a finding, to prosecute individuals by a separate process. A research integrity agency could be given the task of investigating and finding research misconduct, leaving to the employers of those individuals the responsibility to impose consequences. Although remaining autonomous in following their own procedures, and so conforming to EBAs, institutions would be likely to find it difficult to conceal the process because they would be acting on a public finding of research misconduct.

The debate so far appears to have left most of these underlying questions either unanswered or to have assumed answers to them. Because those assumptions have not been made explicit, the answers are unlikely to be consistent. The chosen terminology discloses some of them. The responsibilities described in the five-page code are expressed in terms so general that they would present considerable difficulties if they were used to determine whether they had been fulfilled. For example, what evidence would constitute a failure on the part of an institution to fulfil the obligation to develop and maintain the currency and ready availability of a suite of policies and procedures which ensure that institutional practices are consistent with the principles and responsibilities of the Code? Or, what evidence would constitute a failure on the part of a researcher to fulfil the obligation to foster, promote and maintain an approach to the development, conduct and reporting of research based on honesty and integrity? The very breadth and generality of the language used in these statements suggest that the purpose is not their enforcement.

A further example is the proposal not to use the expression research misconduct in the document, but to refer to breaches of the Code. The language of breach is applied better to duties, rules or standards that are drafted with the intent of enforcement so that it can be clear when evidence discloses a breach and when it does not. Casting the substantive document in the form of responsibilities makes this difficult. In common language, responsibilities are either fulfilled or they are not and where they are not, it is common to speak of a failure to fulfil the responsibility rather than a breach. The use of the language betrays a confusion of underlying purposes.

The advocates of an enforcement approach have argued for a national research integrity agency, like that in some other Western nations. There may, however, be a simpler, more politically and fiscally feasible model available.

If the underlying purposes are to protect the reputation of research as a public interest, to prevent waste and misguided work that can follow from relying on irresponsible and inaccurate research and to prevent waste or the risk of waste of public research funds, then the mode of regulation would be more likely to resource the training of researchers, the guidance of institutions in establishing appropriate research environments and the public promotion of responsible and effective research. The response to irresponsible research conduct would be directed at the withdrawal from the public arena of unsupported and inaccurate results, appropriate disclosure of these (e.g. to journal editors and research funding agencies) and appropriate apologies from responsible institutions and researchers supported with undertakings for reform of faulty procedures and practices.

In implementing these purposes, it would not be surprising for the system to give significant authority to both public research funding agencies (the NHMRC and the ARC). This could include, for instance, authority to ensure that institutions seeking access to their funds establish appropriate procedures to ensure responsible research conduct, including sufficient and sustained training of researchers, adequate resources and research facilities, and appropriate auditing and reporting of research conduct. Agency authority could also include an entitlement to establish not only whether researchers who seek or have access to research funding have research records free of irresponsibility, but also that eligible institutions do not have current employees with records of irresponsible conduct.

Access to research funding has been a potent motivator in the institutional establishment of human research ethics committees, both in the United Kingdom, as Adam Hedgecoe (2009) has shown, and in Australia where the NHMRC’s 1985 decision required institutions to establish institutional ethics committees if they wanted access to research funds with which to conduct human research. In both cases, the decisions were followed by a notable increase in the number of institutional research ethics committees.

An approach that actively promotes responsible research practice may be more likely to achieve wider conformity with good practice standards than a focus on identifying, investigating and punishing failures to meet those standards. If so, the first better practice guide would be how to promote responsible conduct of research; it would not be how to identify, investigate and respond to poor research conduct. Indeed, responsible institutions could pre-empt any such requirements by unilaterally setting up programs to instruct researchers in responsible conduct, train and embed research practice advisers in strategic research disciplines, reward examples of responsible research that enhance both researcher and institutional reputations, and establish a reliable and comprehensive record-keeping system for research. This is an argument that Allen and Israel (in press) make in relation to research ethics.

Australia has an opportunity to adopt a constructive and nationally consistent approach to the active promotion of good research practice. It would be more likely to achieve this with a code that was neither constrained by institutional self-interest nor confined by a punitive focus.


Allen G and Israel M (in press, 2017): Moving beyond Regulatory Compliance: Building Institutional Support for Ethical Reflection in Research. In Iphofen R and Tolich M (eds) The SAGE Handbook of Qualitative Research Ethics. London: Sage.

Hedgecoe A (2009): A Form of Practical Machinery: The Origins of Research Ethics Committees in the UK, 1967–1972. Medical History 53: 331-350.

Prof Colin Thomson is one of the Senior Consultants at AHRECS. You can view his biography here and contact him at:

This post may be cited as:
Thomson C. (2017, 22 May) Cracking the Code: Is the Revised Australian Code likely to ensure Responsible Conduct of Research? Research Ethics Monthly. Retrieved from:
