ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Strategies for resolving ethically ambiguous scenarios

During the fall of 2013 and spring of 2014, I traveled to numerous universities across the United States and England to conduct in-depth interviews with physicists as part of the Ethics among Scientists in International Context Study, a project led by my colleague Elaine Howard Ecklund at Rice University (1). The study sought to find out how physicists approach ethical issues related to research integrity in their day-to-day work.

My colleagues and I began our interviews with a relatively straightforward question: “What does it mean to you to be a responsible scientist in your role as a researcher?” For many scientists, responsibility in research is a relatively black and white question: don’t falsify, don’t fabricate, and don’t plagiarize. And if one looks to the literature, scholarship and policy also tend to focus on these black and white instances of misbehavior because they are unambiguous and deserving of stern sanctions.

As our research unfolded, Ecklund and I began to question whether a black and white view of misconduct is overly simplistic. From a sociological perspective, whether scientists reach consensus about the meaning of unethical conduct in science is debatable because the same behavior in a given circumstance may be open to different ethical interpretations based on the statuses of the stakeholders involved and the intended and actual outcomes of the behavior. Our research ultimately demonstrated that the line separating legitimate and illegitimate behavior in science tends to be gray, rather than black and white—a concept we refer to as ethical ambiguity.

For the purpose of illustration, consider a scenario in which a scientist receives funding for one project and then uses a portion of that money to support a graduate student on a study unrelated to the grant. Many scientists would view this practice as a black and white instance of unethical conduct. But some scientists we interviewed view this as an ethically gray scenario, indicating that the use of funds for reasons other than those specified in a grant is justifiable if it means supporting the careers of their students or keeping their lab afloat. In these and other circumstances, scientists cope with ambiguity through decisions that emphasize being good over the “right” way of doing things.

What strategies help resolve these and other ethically ambiguous scenarios?

Frameworks for ethical decision-making offer some, but in my view limited, help. Kantian deontological theories assert that one should follow a priori moral imperatives related to duty or obligation. A deontologist would argue, for example, that a scientist has an obligation to acknowledge the origins of her work. And policies regarding plagiarism have a law-like quality. But how far back in the literature should one cite prior work? Deontology does not help us much in this example. Another framework, consequentialism, would suggest that in an ethically ambiguous scenario, a scientist should select the action that has the best outcomes for the most people. But like other individuals, scientists are limited in their ability to weigh the outcomes of their actions (particularly as it relates to the long-term implications of scientific research).

One ethical decision-making framework, virtue ethics, does offer some help in resolving ambiguity. Virtue ethics recognizes that ethical decision-making requires consideration of circumstances, situational factors, and one’s motivations and reasons for choosing an action, not just the action itself. It poses the question, “what is the ethically good action a practically wise person would take in this circumstance?” For individual scientists, consulting with senior and trusted colleagues to think through such circumstances is always a valuable practice.

A pre-emptive strategy for helping scientists resolve ethically ambiguous scenarios is to create cultures in which ambiguity can be recognized and discussed. For their part, the physicists we spoke with do not view ethics training as an effective way to create such a culture. As one physicist we spoke with explained, “It’s the easy thing to say, oh make a course on it. Taking a physics course doesn’t make me a good physicist. Taking a safety course doesn’t make me safe. Taking an ethics course doesn’t make me ethical.”

There may be merit to this physicist’s point. Nevertheless, junior scientists must learn—likely through the watching, talking, and teaching that accompanies research within a lab—that the ethical questions that scientists encounter are more likely to involve ambiguous scenarios where the appropriate action is unclear than scenarios related to fabrication, falsification, and plagiarism.

Contributor
David R. Johnson, a sociologist, is an assistant professor of higher education at the University of Nevada, Reno, in the United States. His first book, A Fractured Profession: Commercialism and Conflict in Academic Science, is published by Johns Hopkins University Press.
davidrjohnson@unr.edu

This post may be cited as:
Johnson D. (2017, 21 June) Strategies for resolving ethically ambiguous scenarios. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/strategies-resolving-ethically-ambiguous-scenarios

(1) (National Science Foundation grant # 1237737, Elaine Howard Ecklund PI, Kirstin RW Matthews and Steven Lewis, Co-PIs)

Cracking the Code: Is the Revised Australian Code likely to ensure Responsible Conduct of Research?

The Australian Code for the Responsible Conduct of Research is presently under review. Issued jointly in 2007 by the National Health and Medical Research Council, the Australian Research Council and Universities Australia, the current code is a 41-page document divided into two parts. Part A, comprising some 22 pages, sets out the responsibilities of institutions and researchers for conducting sponsored research responsibly. Part B, comprising approximately 11 pages, provides advice on procedures for identifying and investigating instances of the conduct of research in which those responsibilities have not been fulfilled.

The current proposal is to replace this document with a five-page statement of eight principles of responsible research conduct and two lists of responsibilities, again of institutions and researchers, together with a 25-page guidance document (the first of several) of preferred procedures for the identification and investigation of research conduct that has not adhered to the responsibilities set out in the five-page code.

Among the innovations in these changes, other than a significant reduction in the size of the document, is the proposal that the expression ‘research misconduct’ not be used in the guide on identification and investigation but be replaced by the expression ‘breach’. An important reason given for this proposal is the avoidance of conflict with the requirements of institutional enterprise bargaining agreements (EBAs).

The scale of the proposed changes is likely to generate extensive debate, and this will have been disclosed in the course of the consultation period that closed earlier this year. The consultation process conformed to the minimal requirements of sections 12 and 13 of the NHMRC Act. This is a process that publicises, by formal means, drafts of proposed changes to which responses are sought. Current practice is to prefer provision of responses by electronic means and to require respondents to answer questions determined by the Council. The passivity and formality of the process tend to attract and privilege better resourced interests. In some of the published debate that occurred during the consultation period, there was much attention to the change in scale and to the proposal not to refer to research misconduct but only to breach. This level of discussion risks ignoring several underlying systemic questions, or assuming the answers to them. It is the purpose of this brief opinion piece to tease out these questions.

The key premise of these remarks on the existing Code and any revision is that the Code constitutes a form of regulation of research conduct. With this premise comes a centrally important question: what are the aims of the regulation of this activity?

The apparent aims of the revision are the definition of responsible research conduct and the relevant responsibilities, and the identification, disclosure and investigation of failures to conduct research responsibly.

Underlying these aims lie some broader and deeper considerations. These include whether the purpose to be served by regulation of research is to:

  • protect the reputation of research;
  • prevent waste and misguided work that can follow from relying on irresponsible and inaccurate research;
  • protect the reputation of research institutions;
  • prevent the waste – or even the risk of waste – of public research funds;
  • penalise those who fail to fulfil their research responsibilities, whether the failures are on the part of institutions or individual researchers;
  • protect the public interest in research by promoting productive use of public research funds, and rewarding responsible researchers and institutions.

This is a regulatory situation not unlike that faced by environmental protection through the 1990s and later, and by other areas in the UK such as oil rig and building safety. One lesson from these experiences is that where the aims of regulation are the protection of the environment or the safety of buildings or oil rigs, those aims are more likely to be achieved by giving those who conduct the relevant activities the opportunity to devise their own methods of achieving them, methods that are then assessed by a responsible authority against a set of standards. The shift from tight prescription of safety standards to a form of well-defined and supervised self-regulation appears to have been successful in achieving regulatory aims.

The choice of which of the above purposes is to be served will have a direct and profound effect on the methods to be used. For example, if the purpose were the protection of the reputation of research institutions, it would not be surprising to extend a significant degree of autonomy to institutions to set up their own procedures and methods for promoting responsible conduct and so establishing their good reputation. However, there would be an incentive for institutions not to publicly disclose instances of irresponsible research but to manage these institutionally. Reliance on the need to conform to enterprise bargaining agreements might lend support to justification of such non-disclosure.

If the purpose were to penalise those institutions or researchers who fail to fulfil relevant responsibilities for responsible research conduct, the system would need to define those responsibilities with some precision, so that the definitions could be made enforceable, and to establish an agency with appropriate investigation powers and sanctioning authority to identify, investigate and reach decisions as to whether relevant responsibilities had or had not been fulfilled.

A relevant regulatory model may not be that of criminal prosecution but rather of corruption investigation. There is a public interest that motivates the establishment and operation of anti-corruption agencies. The outcomes of their enquiries can lay the foundation for individual punishment of those found ‘guilty’ of corrupt behaviour, and those proceedings are then taken up by different state agencies. Research integrity policy can be seen to have similar aims: first, to protect the public interest by empowering an independent agency to uncover corrupt conduct, and, second, following such a finding, to prosecute individuals by a separate process. A research integrity agency could be given the task of investigating and finding research misconduct, leaving to the employers of those individuals the responsibility to impose consequences. Although remaining autonomous in following their own procedures, and so conforming to EBAs, institutions would be likely to find it difficult to conceal the process because of the public finding of research misconduct that they would be implementing.

The debate so far appears to have left most of these underlying questions either unanswered or to have assumed answers to them. Because this has not been explicit, those answers are unlikely to be consistent. For example, the chosen terminology discloses some of these assumptions. The responsibilities described in the five-page code are in very general form that would present considerable difficulties if they were to be used to determine whether they had been fulfilled. For example, what evidence would constitute a failure on the part of an institution to fulfil the obligation to develop and maintain the currency and ready availability of a suite of policies and procedures which ensure that institutional practices are consistent with the principles and responsibilities of the Code? Or, what evidence would constitute a failure on the part of a researcher to fulfil the obligation to foster, promote and maintain an approach to the development, conduct and reporting of research based on honesty and integrity? The very breadth and generality of the language used in these statements suggest that the purpose is not their enforcement.

A further example is the proposal not to use the expression research misconduct in the document, but to refer to breaches of the Code. The language of breach is applied better to duties, rules or standards that are drafted with the intent of enforcement so that it can be clear when evidence discloses a breach and when it does not. Casting the substantive document in the form of responsibilities makes this difficult. In common language, responsibilities are either fulfilled or they are not and where they are not, it is common to speak of a failure to fulfil the responsibility rather than a breach. The use of the language betrays a confusion of underlying purposes.

The advocates of an enforcement approach have argued for a national research integrity agency, like that in some other Western nations. There may, however, be a simpler, more politically and fiscally feasible model available.

If the underlying purposes are to protect the reputation of research as a public interest, to prevent waste and misguided work that can follow from relying on irresponsible and inaccurate research and to prevent waste or the risk of waste of public research funds, then the mode of regulation would be more likely to resource the training of researchers, the guidance of institutions in establishing appropriate research environments and the public promotion of responsible and effective research. The response to irresponsible research conduct would be directed at the withdrawal from the public arena of unsupported and inaccurate results, appropriate disclosure of these (e.g. to journal editors and research funding agencies) and appropriate apologies from responsible institutions and researchers supported with undertakings for reform of faulty procedures and practices.

In implementing these purposes, it would not be surprising for the system to give significant authority to both public research funding agencies. This could include, for instance, authority to ensure that institutions seeking access to their funds establish appropriate procedures to ensure responsible research conduct, including sufficient and sustained training of researchers, adequate resources and research facilities and appropriate auditing and reporting of research conduct. Agency authority could also include an entitlement to establish not only whether researchers who seek or have access to research funding have research records free of irresponsibility, but also that eligible institutions did not have current employees with such records.

Access to research funding has been a potent motivator in the institutional establishment of human research ethics committees, both in the United Kingdom, as Adam Hedgecoe (2009) has shown, and in Australia where the NHMRC’s 1985 decision required institutions to establish institutional ethics committees if they wanted access to research funds with which to conduct human research. In both cases, the decisions were followed by a notable increase in the number of institutional research ethics committees.

An approach that actively promotes responsible research practice may be more likely to achieve wide conformity with good practice standards than a focus on identifying, investigating and punishing failures to meet those standards. If so, the first better practice guide would be how to promote responsible conduct of research; it would not be how to identify, investigate and respond to poor research conduct. Indeed, responsible institutions could pre-empt any such requirements by unilaterally setting up programs to instruct researchers in responsible conduct, training and embedding research practice advisers in strategic research disciplines, rewarding examples of responsible research that enhance both researcher and institutional reputations, and establishing a reliable and comprehensive record-keeping system for research. This is an argument that Allen and Israel (in press) make in relation to research ethics.

Australia has an opportunity to adopt a constructive and nationally consistent approach to the active promotion of good research practice. It would be more likely to achieve this with a code that was neither constrained by institutional self-interest nor confined by a punitive focus.

References

Allen, G and Israel, M (in press, 2017) Moving beyond Regulatory Compliance: Building Institutional Support for Ethical Reflection in Research. In Iphofen, R and Tolich, M (eds) The SAGE Handbook of Qualitative Research Ethics. London: Sage.

Hedgecoe, A (2009) A Form of Practical Machinery: The Origins of Research Ethics Committees in the UK, 1967–1972. Medical History, 53: 331–350.

Contributor
Prof Colin Thomson is one of the Senior Consultants at AHRECS. You can view his biography here and contact him at colin.thomson@ahrecs.com.

This post may be cited as:
Thomson C. (2017, 22 May) Cracking the Code: Is the Revised Australian Code likely to ensure Responsible Conduct of Research? Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/cracking-code-revised-australian-code-likely-ensure-responsible-conduct-research

Review of the Australian Code for the Responsible Conduct of Research

The Australian Code for the Responsible Conduct of Research 2007 (the Code) is Australia’s premier research standard. It was developed by the government agencies that fund the majority of research in Australia, namely the National Health and Medical Research Council (NHMRC) and the Australian Research Council, in collaboration with the peak body representing Australian universities (Universities Australia). The Code guides institutions and researchers in responsible research practices and promotes research integrity. The Code has broad relevance across all research disciplines.

The Code is currently under review.

A new approach for the Code has been proposed, informed by extensive consultation with the research sector and advice from expert committees. The Code has been streamlined into a principles-based document and will be supported by guides that provide advice about implementation, such as the first Guide to investigating and managing potential breaches of the Code.

NHMRC, ARC and UA recognise the importance of engaging with the Australian community, including research institutions, researchers, other funding bodies, academies and the public, to ensure the principles-based Code and supporting guides are relevant and practical. A public consultation strategy is an important part of any NHMRC recommendation or guideline development process.

As such, NHMRC on behalf of ARC and UA invites all interested persons to provide comments on the review. A webinar was held on 29 November 2016 to explain the new approach to the Code. You are invited to view this webinar (see link below) and can participate in the public consultation process by visiting the NHMRC Public Consultation website. Submissions close on 28 February 2017.

Further information on the review can be found here.

The contributor:

National Health and Medical Research Council (Australia) – Web | Email

This post may be cited as:
NHMRC (2017, 20 January) Review of the Australian Code for the Responsible Conduct of Research. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/review-australian-code-responsible-conduct-research

We don’t need a definition of research misconduct

Responsibility for ensuring the integrity of the research record rests with a number of players – funding agencies, governments, publishers, journal editors, institutions that conduct research and researchers themselves. Our responsibilities for producing research that is honest and trustworthy exist at the very beginning of a research project and are ever present thereafter. If one of the players in the research ecosystem finds that research isn’t honest or can’t or shouldn’t be trusted, then we have to take steps to remove it from the research record or stop it from getting there. We don’t need a definition of research misconduct in order to do that.

In fact, there is no single, agreed definition of research misconduct, and this is part of the problem. Resnik et al. describe this in their 2015 paper, which reviewed and categorized misconduct definitions from 22 of the top 40 research and development funding countries. They claim that the variation in research misconduct definitions might make it harder for potential complainants to raise a concern, because they can’t work out whether something might be misconduct in any particular jurisdiction. Similar research by Resnik et al. also looked at research misconduct definitions in US universities and found that the majority go beyond the definition provided in US law, perhaps indicating that these universities recognise that more than falsification, fabrication and plagiarism can impact on the honesty and trustworthiness of the research record. A ‘back of the envelope’ review of Australian research misconduct policies paints a similar picture, with two broad clades – one that centres on research misconduct as a serious deviation from accepted practice and one that requires misrepresentation. All of this means that saying Professor Y committed research misconduct doesn’t really mean much, and doesn’t tell us how the research is dishonest or untrustworthy. It stops us from making our own assessment of the trustworthiness of the research.

Many definitions also require that it can be shown that the researcher responding to the allegation committed the act of research misconduct, however defined, deliberately or intentionally or with recklessness or negligence. This ‘mental fault’ element is used to distinguish those lapses in responsible research that are honest mistakes or accidental from deliberate, mischievous attempts to deceive the users of the research output, whether that is a journal article, lab meeting presentation or grant application. The inclusion of this mental fault element also focusses the attention of those considering complaints or serving on investigation panels on the minds of the ‘accused’ – the investigations very much become concentrated on whether Professor Y was really trying to be evil and not whether the research should be trusted and allowed to have impact.

We believe that this is the fundamental question a research integrity investigation should be considering – can we trust the research and would we be happy for it to have impact?

Consideration of mental fault (mens rea if you’re a lawyer) is important when considering what disciplinary action to take, but is best not part of the rubric when considering trustworthiness, accuracy or honesty of research.

Research conduct occurs on a spectrum – from excellent research conduct at one end to research misconduct at the other. It is not only deliberate or grossly negligent acts that cause us to question the honesty or trustworthiness of research. There are a range of behaviours that impact on the integrity of research, and many of these are neither deliberate nor FFP (falsification, fabrication or plagiarism). Some of these are described in the seminal paper by Martinson et al. that reports the results of a survey of biomedical researchers. The most frequent ‘questionable research practices’ (QRPs) described in this paper include inadequate record keeping related to research projects (27.5% of researchers), ‘dropping observations or data based on gut feeling’ (15.3%) and ‘using inadequate or inappropriate research designs’ (13.5%). It is clear that these three QRPs will impact on the trustworthiness and accuracy of research findings, and the incidence of these QRPs is much greater than the 0.3% reported for ‘falsifying or cooking research data’. These and other QRPs fall outside many definitions of research misconduct, and so can be overlooked by institutions that are forced, or that choose, to focus on research misconduct as defined. This leaves a broad range of activities potentially unchecked, and research on the record that perhaps really shouldn’t be.

Removing the definition of research misconduct simplifies the landscape. Investigations won’t need to consider the motivation for a departure from accepted practice or breach, but only if the research can be trusted or should be allowed to have impact. Disciplinary action can still happen through other misconduct related processes and this is where deliberation and intent can and should be considered. A system like this already exists. The Canadian Tri-agency Framework for Responsible Conduct of Research does not define research misconduct but instead sets out very clearly articulated principles for research integrity. A breach of these principles can trigger an investigation and consideration of deliberation or intent is not part of the framework. The absence of a definition has not stopped Canadian funding agencies taking appropriate action. Recently, the first disclosure of an investigation was made. It names the researcher responsible and provides detail about the nature of the breach and the action taken by the funding agency involved.

Research misconduct is not a well-defined term, but a better definition is not needed and is not the solution. We need to take action to protect the integrity of the research record and stop untrustworthy or dishonest research from reaching it. We can do that just as well or even better without narrowing the scope of these considerations.

References

Resnik, DB, Rasmussen, LM and Kissling, GE (2015) An International Study of Research Misconduct Policies. Accountability in Research, 22(5): 249–266. doi:10.1080/08989621.2014.958218

Resnik, DB, Neal, T, Raymond, A and Kissling, GE (2015) Research Misconduct Definitions Adopted by U.S. Research Institutions. Accountability in Research, 22(1): 14–21. doi:10.1080/08989621.2014.891943

Martinson, BC, Anderson, MS and de Vries, R (2005) Scientists behaving badly. Nature, 435: 737–738. doi:10.1038/435737a

Contributors
Paul M Taylor, RMIT University (bio) – paul.taylor@rmit.edu.au
Daniel P Barr, University of Melbourne (bio)- dpbarr@unimelb.edu.au

This post may be cited as:
Taylor P and Barr DP. (2016, 25 October) We don’t need a definition of research misconduct. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/dont-need-definition-research-misconduct
