The Ethics of Evaluation Research

 

Evaluation research is used to assess the value of such things as services, interventions, and policies. The term ‘evaluation research’ makes it seem homogeneous but in fact evaluation research draws on a range of theoretical perspectives and a wide variety of quantitative and qualitative methods. However, there are three things evaluation research usually does that set it apart from other kinds of research. It:

  1. asks what is working well and where and how improvements could be made;
  2. involves stakeholders; and
  3. offers practical recommendations for action.

The American Evaluation Association (AEA), with members from over 60 countries, has five ‘guiding principles’ which ‘reflect the core values of the AEA’ (2018):

Systematic inquiry: evaluators conduct data-based inquiries that are thorough, methodical, and contextually relevant.

Competence: evaluators provide skilled professional services to stakeholders.

Integrity: evaluators behave with honesty and transparency in order to ensure the integrity of the evaluation.

Respect for people: evaluators honour the dignity, well-being, and self-worth of individuals and acknowledge the influence of culture within and across groups.

Common good and equity: evaluators strive to contribute to the common good and advancement of an equitable and just society.

The question of how research ethics review processes should engage with evaluation research has not yet been definitively decided in many research institutions in Australia and New Zealand. Helen Kara’s article alerts us to the degree to which evaluation researchers encounter novel ethical issues. We shall explore some of the possible institutional approaches in a forthcoming Patreon resource.

The AEA document is unusual in being thorough – there is much more explanation in the document itself – and up to date. The Australasian Evaluation Society (AES) has Guidelines for the Ethical Conduct of Evaluations, which were last revised in 2013. This is a much more discursive document – 13 pages to the AEA’s four – which offers guidance to evaluation commissioners as well as evaluation researchers. The AES guidelines also refer to and include Indigenous ethical principles and priorities. In particular, reciprocity is highlighted as a specific principle to be followed. This is another difference from the AEA document, in which Indigenous evaluation and evaluators are not mentioned.

The United Nations Evaluation Group also specifies evaluation principles in its ethical guidelines (2008) but they are 10 years older than the AEA’s. Beyond these, there are few codes of ethics, or equivalent, readily available from national and international evaluation bodies. Also, evaluation research rarely comes within the purview of human research ethics committees unless it’s being conducted within a university or a health service. And books on evaluation research rarely mention ethics.

Recent research has shown that a proportion of evaluation researchers will assert that ethics does not apply to evaluation and that they have never encountered ethical difficulties in their work (Morris, 2015, p.32; Williams, 2016, p.545). This seems very odd to me, as I have been doing evaluation research for the last 20 years and I have encountered ethical difficulties in every project. It also seems worrying as I wonder whether the next generation of evaluation researchers are learning to believe that they do not need to think about ethics.

In my recent book, Research Ethics in the Real World (2018), I demonstrated that ethical issues exist at all stages of the research process, from the initial idea for a research question up to and including aftercare. This applies to evaluation research just as much as it does to any other kind of research. I also demonstrated that there are some ethical considerations at the macro level for evaluation research, such as funding, stakeholder involvement, and publishing.

Well-funded organisations or projects can allocate money for evaluation; poorly-funded ones cannot. This means that evaluation research is routinely done where funding is available rather than where evaluation is most needed. In the United Kingdom, where I am based, we have been undergoing an ideological programme of austerity involving massive cuts to public services over the last nine years. This has come from successive governments that have also prioritised evaluation research, funding expensive national ‘What Works’ centres on themes such as ageing, health, and childhood, right through the austerity years. Yet to the best of my knowledge there has been no evaluation of the impact of any service closure. This seems short-sighted at best – though it does illustrate my point that evaluation happens where money is being spent. Also, an explicit purpose of evaluation research is often to provide evidence to use in future funding negotiations, which means that results are effectively expected to be positive. This means that pressures associated with funding can introduce bias into evaluation research right from the start. Combine this with an evaluator who needs to be paid for their work in order to pay their own bills, and you have a situation that is well on its way to being a money-fuelled toxic mess.

Involving stakeholders is a key principle of evaluation research. The AEA define ‘stakeholders’ as ‘individuals, groups, or organizations served by, or with a legitimate interest in, an evaluation including those who might be affected by an evaluation’ and suggest that evaluators should communicate with stakeholders about all aspects of the evaluation (2018). Again, here, the use of a single word implies homogeneity when in fact evaluation stakeholders may range from Government ministers to some of the most marginalised people in society. This can make involving them difficult: some will be too busy to be involved, some will be impossible to find, and some will not want to be involved. This leaves evaluators caught between an impractical principle and an unprincipled practice. There is some good practice in stakeholder involvement (Cartland, Ruch-Ross and Mason, 2012, pp.171-177), but there is also a great deal of tokenism, which is not ethical (Kara, 2018, p.63). Also, even when all groups of stakeholders are effectively engaged, this can bring new ethical problems. For example, their values and interests may be in conflict, which can be challenging to manage, particularly alongside the inevitable power imbalances. Even if stakeholders work well together such that power imbalances are reduced within the evaluation, it is unlikely those reductions will carry over into the wider world.

Commissioners of evaluation are reluctant to publish reports unless they are overwhelmingly positive. I had an example of this some years ago when I evaluated an innovative pilot project tackling substance misuse. From the start my client said they were keen to publish the evaluation report. I worked with stakeholders to collect and analyse my data and made around 10 recommendations, all but one of which said words to the effect of ‘good job, carry on’. Just one recommendation offered constructive criticism of one aspect of the project and made suggestions for improvement. My client asked me to remove that recommendation; I thought about it carefully but in the end refused, because it was fully supported by the evaluation data. We had two more meetings about it and in the end my client decided that they would not publish the report. This was unfortunate because others could have learned from the evaluation findings and methods, and because failure to publish increases the risk of work being duplicated, which results in public funds being wasted. Sadly, as a commissioned researcher, I had signed away my intellectual property, so it was out of my hands. Everyone involved in evaluation research can tell these kinds of tales. However, it is too simplistic to suggest that publication should always be a requirement. In some cases publication could be harmful, such as when a critical evaluation might lead to a service being closed as an economy measure, to the detriment of service users and staff, rather than to more resource-intensive improvements in policy and practice. But overall, unless there is a good reason to withhold a report, publication is the ethical route.

As the AEA principles suggest, evaluation researchers are in a good position to help increase social justice by influencing evaluation stakeholders to become more ethical. I would argue that there are several compelling reasons, outlined above, why all evaluation researchers should learn to think and act ethically.

References

American Evaluation Association (2018) Guiding Principles. Washington, DC: American Evaluation Association.

Australasian Evaluation Society (2013) Guidelines for the Ethical Conduct of Evaluations. www.aes.asn.au

Cartland, J., Ruch-Ross, H. and Mason, M. (2012) Engaging community researchers in evaluation: looking at the experiences of community partners in school-based projects in the US. In Goodson, L. and Phillimore, J. (eds) Community Research for Participation: From Theory to Method, pp 169-184. Bristol, UK: Policy Press.

Kara, H. (2018) Research Ethics in the Real World: Euro-Western and Indigenous Perspectives. Bristol, UK: Policy Press.

Morris, M. (2015) Research on evaluation ethics: reflections and an agenda. In Brandon, P. (ed) Research on evaluation: new directions for evaluation, 31–42. Hoboken, NJ: Wiley.

United Nations Evaluation Group (2008) UNEG Ethical Guidelines for Evaluation. http://www.unevaluation.org/document/detail/102

Williams, L. (2016) Ethics in international development evaluation and research: what is the problem, why does it matter and what can we do about it? Journal of Development Effectiveness, 8(4), 535–552. DOI: 10.1080/19439342.2016.1244700.

Recommended reading

Morris, M. (ed) (2008) Evaluation Ethics for Best Practice: Cases and Commentaries. New York, NY: The Guilford Press.

Donaldson, S. and Picciotto, R. (eds) (2016) Evaluation for an Equitable Society. Charlotte, NC: Information Age Publishing, Inc.

Contributor
Helen Kara, Director, We Research It Ltd | helen@weresearchit.co.uk

This post may be cited as:
Kara, H. (26 January 2019) The Ethics of Evaluation Research. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/the-ethics-of-evaluation-research

When is research not research?

 

Most institutions have processes for differentiating between Quality Assurance/Quality Improvement (QA/QI) activities and those that can be considered research. Unfortunately, much of the debate about which is which has been driven by regulatory needs, as a categorization of QA/QI means that a project does not require ethics committee review, a preference for many given that even the low-risk review pathway is still considered burdensome. Avoidance of ethics review for bureaucratic reasons, though, is a less than satisfactory motive.

In large-scale genomics projects, a vast amount of the work being done is in the enabling technologies, that is, the sequencing itself as well as the computational methodologies that are at the heart of the bioinformatics that makes sense of the vast quantities of raw data generated. To develop robust and reliable informatics approaches one can run simulations, but ultimately the tools must be tested on real data to ensure they are fit for purpose. The question then arises: is using the data generated from a person’s cancer, as well as their normal DNA sequence, for the purpose of establishing valid computational tools, research? On this topic Joly et al. (EJHG, 2016) provide a perspective with regard to the International Cancer Genome Consortium (ICGC), which has sequenced more than 10,000 patients’ cancers across 17 jurisdictions. The authors of the paper, of which I am one, are members of the ICGC Ethics and Policy Committee (EPC), which provides advice to ICGC member jurisdictions on matters of ethics relating to the program.

Using two activities, both of which are effectively a means of benchmarking how variants and mutations are identified in the genome, we explored how a variety of international jurisdictions viewed the activities and whether their approaches were helpful in determining whether each activity was QA/QI or one more properly regarded as research. Both activities were identified as carrying potential risks to confidentiality, and in both cases the teams wished to publish their findings. For these reasons they ended up being called ‘research’ and underwent appropriate review. However, recognizing that this may create hurdles for such work that are disproportionate to the true risk of the activity, we reviewed jurisdictional approaches to this topic as well as the literature to see if a more helpful framework could be established to guide appropriate review.

The exercise proved particularly useful as it shone a critical light on some of the more widely used criteria, such as generalizability, which, whilst used by many organizations and jurisdictions as a key distinction between research and QA/QI, is in fact a flawed criterion if not applied carefully. In contrast, risk to a participant stands up as an important factor that must be evaluated in all activities. Four other criteria (novelty of comparison, speed of implementation, methodology, and scope of involvement) were also reviewed for their utility in developing a useful algorithm for triaging an appropriate review pathway.

The paper proposes that a two-step process be implemented in which the six identified criteria are first used to determine whether a project is more QA/QI, research, or has elements of both, followed by a risk-based assessment process to determine which review pathway is used. Expedited review, or exemption from review, are options for very low-risk projects but, as the paper highlighted from a review of the pathways in four ICGC member countries (UK, USA, Canada and Australia), there is no consensus on how to apply this. The challenge therefore remains to establish more uniformity between jurisdictions in the policies that apply to risk-based evaluation of research. Nevertheless, simple categorization into QA/QI versus research is not particularly useful, and a greater emphasis on evaluation based on criteria that define risk of harm to participants is the way forward.
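To make the two-step logic concrete, here is a minimal sketch of how such a triage could be expressed in code. It is an illustration only: the six criterion names follow those discussed above, but the scoring rules, thresholds, and pathway labels are hypothetical assumptions and are not taken from the ICGC decision tool or the Joly et al. paper.

```python
from dataclasses import dataclass

# Hypothetical triage sketch based on the six criteria discussed above.
# Scores, thresholds, and pathway names are illustrative assumptions,
# not the actual ICGC / Joly et al. decision tool.

@dataclass
class Project:
    generalizability: bool       # findings intended to apply beyond the local setting
    risk_to_participants: str    # "negligible", "low", or "greater"
    novel_comparison: bool       # compares approaches in a way not done before
    rapid_implementation: bool   # results feed straight back into practice
    research_methodology: bool   # uses a formal research design
    broad_scope: bool            # involves participants or data beyond routine care

def classify(p: Project) -> str:
    """Step 1: indicative classification as QA/QI, research, or mixed."""
    research_signals = sum([p.generalizability, p.novel_comparison,
                            p.research_methodology, p.broad_scope])
    qa_signals = sum([p.rapid_implementation, not p.research_methodology])
    if research_signals >= 3:
        return "research"
    if qa_signals >= 2 and research_signals <= 1:
        return "QA/QI"
    return "mixed"

def review_pathway(p: Project) -> str:
    """Step 2: risk-based choice of review pathway."""
    category = classify(p)
    if p.risk_to_participants == "negligible":
        return "exempt from review" if category == "QA/QI" else "expedited review"
    if p.risk_to_participants == "low":
        return "expedited review"
    return "full ethics committee review"

# Example: a variant-calling benchmarking exercise on existing sequence data.
benchmark = Project(generalizability=True, risk_to_participants="low",
                    novel_comparison=True, rapid_implementation=False,
                    research_methodology=True, broad_scope=False)
print(classify(benchmark), "->", review_pathway(benchmark))
```

Under these assumed weightings the benchmarking example classifies as research but is routed to an expedited pathway, reflecting the paper’s point that categorisation alone should not dictate the burden of review.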

Further reading

Joly Y, So D, Osien G, Crimi L, Bobrow M, Chalmers D, Wallace S E, Zeps N and Knoppers B (2016) A decision tool to guide the ethics review of a challenging breed of emerging genomic projects. European Journal of Human Genetics, advance online publication. doi:10.1038/ejhg.2015.279
Publisher: http://www.nature.com/ejhg/journal/vaop/ncurrent/full/ejhg2015279a.html
ResearchGate: https://www.researchgate.net/publication/291341753_A_decision_tool_to_guide…

NHMRC (2014) Ethical Considerations in Quality Assurance and Evaluation Activities http://www.nhmrc.gov.au/guidelines-publications/e111

Contributor
Dr. Nik Zeps
Dr. Zeps is Director of Research at St John of God Subiaco, Murdoch and Midland Hospitals. He was a member of the Australian Health Ethics Committee from 2006 to 2012 and the Research Committee of the National Health and Medical Research Council (NHMRC) from 2009 to 2015. He is a board member of the Australian Clinical Trials Alliance and co-chair of the International Cancer Genome Consortium communication committee. His objective as Director of Research is to integrate clinical research and teaching into routine healthcare delivery to improve the lives of patients and their families.
Nikolajs.Zeps@sjog.org.au

This post may be cited as: Zeps, N. (2016, 30 June) When is research not research? Research Ethics Monthly. Retrieved from:
https://ahrecs.com/human-research-ethics/research-not-research
