Institutional approaches to evaluative practice

 

In 2001, the NHMRC published its policy document When does Quality Assurance in Health Care Require Independent Ethical Review? The document was rescinded in 2007 and has not been available since the NHMRC website was updated in 2018. Several changes led to the rescinding of the 2001 policy document:

  1. The release of the 2007 edition of the National Statement provided a mechanism for exempting work with de-identified data where the work involved no more than negligible risk.
  2. The 2007 edition of the National Statement established clear criteria for determining whether research could be reviewed outside of an HREC (e.g. a project cannot involve greater than a low risk of harm and cannot involve matters the National Statement specifies as requiring HREC review). [1]
  3. The ‘pressure to publish’ has meant that a significant amount of work that used to be conducted as an evaluation or for quality assurance is now being submitted for publication to refereed journals.
  4. Stakeholders and funders require services and expenditure to be based on robust evidence and analysis.

Figure 1 – A version of this image, which is not watermarked, is available from https://www.patreon.com/ahrecs with a USD3/month subscription.

As a result, the distinctions between research, evaluative practice and quality assurance have become blurred to the point of no longer being helpful, and the research ethics review mechanisms for exemption, proportional review and mandated HREC review in specified circumstances might be sufficient for the appropriate handling of evaluative practice.

Nevertheless, submissions to the NHMRC’s Australian Health Ethics Committee prompted the release in 2014 of Ethical Considerations in Quality Assurance and Evaluation Activities.

Ethical Considerations in Quality Assurance and Evaluation Activities provides guidance on whether quality assurance and evaluation work requires research ethics review and on the most appropriate way to approach that review. It:

  1. discusses how such activities sit on a spectrum of work, noting that an activity’s position on that spectrum may change over time and that the divide between evaluative practice and human research can be porous,
  2. concedes HREC review is often not helpful when the primary purpose of the activity is to inform and improve an organisation’s practice (rather than contribute to the wider body of knowledge),
  3. describes four matters [2] to which the design and conduct of evaluative practice must adhere,
  4. describes four criteria to identify where oversight but not review is required, [3]
  5. directs institutions to establish policies with regard to these matters, [4]  and
  6. at provision (e) describes circumstances where consideration of the need for review is required [5] and, where it is, offers guidance at provision (f) on appropriate levels of review.

The 2014 policy document therefore provides criteria to determine whether an institution’s evaluative practice/quality assurance activity requires:

  1. only administrative consideration [6] to confirm that the institution’s policies relating to the use of those data to assess its services are being/have been adhered to in the design of the work,
  2. a special research review process [7] within the organisation to test whether HREC review is required and to confirm the institution’s policies relating to evaluative practice have been adhered to in the design of the work, or
  3. ethics review by an HREC or another review body.
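
To make this triage concrete, the short Python sketch below shows one way an institution might encode such a sequence of yes/no questions (for instance, in an online application form). It is a minimal illustration only: the question names, the decision order and the pathway labels are our hypothetical rendering of the oversight criteria and review triggers summarised in notes [3] and [5] below, not part of the NHMRC document.

# A purely illustrative sketch of a yes/no triage for evaluative practice /
# quality assurance. Question names, decision order and pathway labels are
# hypothetical; an institution would substitute its own policy criteria.

from dataclasses import dataclass

@dataclass
class TriageAnswers:
    routine_data_only: bool            # data coincidental to standard procedures/equipment
    local_improvement_purpose: bool    # collected to maintain standards or improve the service
    data_linked_to_individuals: bool
    privacy_or_reputation_risk: bool   # the remaining fields are triggers for considering review
    secondary_use_of_data: bool
    extra_data_or_biospecimens: bool
    non_standard_protocols: bool
    cohort_comparison: bool

def review_pathway(a: TriageAnswers) -> str:
    """Return 'administrative check', 'special panel' or an escalation towards HREC review."""
    triggers = (a.privacy_or_reputation_risk or a.secondary_use_of_data
                or a.extra_data_or_biospecimens or a.non_standard_protocols
                or a.cohort_comparison)
    if (not triggers and a.routine_data_only and a.local_improvement_purpose
            and not a.data_linked_to_individuals):
        # Oversight but not ethics review: administrative confirmation that
        # institutional policy has been followed in the design of the work.
        return "administrative check"
    if triggers:
        # A special panel tests whether the activity requires HREC review and can escalate it.
        return "special panel (may refer to HREC review)"
    return "special panel"

# Example: a clinical audit that re-uses identifiable data for a new purpose.
answers = TriageAnswers(True, True, True, False, True, False, False, False)
print(review_pathway(answers))   # -> "special panel (may refer to HREC review)"

In practice, an ethics officer would then confirm (or correct) the pathway suggested by the answers, as described in the recommended approach below.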

Given many staff will want to publish the outcome of evaluations and there will be academic interest in matters related to evaluations/quality assurance, the institution’s arrangements must not preclude academic publication.

The following approach is recommended:

  1. The institution needs to know what evaluative practice is being conducted, so it must have a mandatory review process. That process is similar to a scientific research review committee but is also an institutional policy process: its feedback should provide advice that facilitates the planned activity and relates to how to conduct the work ethically and successfully.
  2. There is a mechanism for the research review of evaluative practice/quality assurance, with some such reviews delegated to a special panel or to administrative review.
  3. The review pathway is initially determined by the responses to a sequence of yes/no questions.
  4. The responses to those questions are reviewed by an ethics officer (in the case of clinical audits, the person conducting the initial assessment will need relevant clinical expertise) who confirms the review pathway.
  5. Every 12 months, a small proportion of evaluative projects is randomly selected for audit, not to revisit the decisions but to confirm the process is working correctly.
  6. Policy and guidance material informs the ethical design, conduct, reporting and publication of evaluative practice/quality assurance, as well as its ethics review.
  7. There are briefing sheets (each two double-sided A4 pages):
     (a) for researchers, summarising the institution’s arrangements for evaluative practice/quality assurance (including a summary of their responsibilities);
     (b) for heads of school/department, discussing their role in the review of services/procedures/teaching and learning in their area; and
     (c) for editors/publishers, explaining the institution’s arrangements, for provision by researchers if they are asked to provide a copy of the HREC approval.
  8. If the institution has a network of collegiate Research Ethics Advisers (see our earlier blog post about REAs), some advisers should be experienced in the conduct of evaluative practice.

Available to subscribers for USD10/month, AHRECS has created notes to inform the in-house development of the three briefing sheets at (a), (b) and (c) above. A USD15/month subscription provides access to our growing library of materials.

AHRECS would be delighted to discuss an arrangement under which we provide feedback on the materials you produce in-house, or produce the materials for you.

Email us at Patron@ahrecs.com with any questions about our Patreon page or becoming a patron, or Evaluative@ahrecs.com to discuss how we could assist you with regard to the ethics of evaluative practice.

[1] Because of design factors such as the deception of participants, and participant factors such as research with Aboriginal and Torres Strait Islander people.

[2] What really matters is that:

  • participants in QA/evaluation are afforded appropriate protections and respect
  • QA and/or evaluation is undertaken to generate outcomes that are used to assess and/or improve service provision
  • those who undertake QA and/or evaluation adhere to relevant ethical principles and State, Territory and Commonwealth legislation
  • organisations provide guidance and oversight to ensure activities are conducted ethically including a pathway to address concerns.

[3] In many situations, oversight of the activity is required, but an ethics review is not necessary. These include situations where:

  • The data being collected and analysed is coincidental to standard operating procedures with standard equipment and/or protocols
  • The data is being collected and analysed expressly for the purpose of maintaining standards or identifying areas for improvement in the environment from which the data was obtained;
  • The data being collected and analysed is not linked to individuals; and
  • None of the triggers for consideration of ethics review (listed below) are present.

[4]  ‘Organisations should develop policies on QA/evaluation which provide guidance for oversight of QA or evaluation activities. It is recommended that such policies address the following issues’.

[5] Triggers for consideration of ethics review include:

  • Where the activity potentially infringes the privacy or professional reputation of participants, providers or organisations.
  • Secondary use of data – using data or analysis from QA or evaluation activities for another purpose.
  • Gathering information about the participant beyond that which is collected routinely. Information may include biospecimens or additional investigations.
  • Testing of non-standard (innovative) protocols or equipment.
  • Comparison of cohorts.

[6] Preferably prior to data collection commencing, but in the case of data collected prior to the adoption of this document, the check must occur prior to any use of the collected data.

[7] A Panel should be created for the purpose of conducting these reviews. Even though there are undeniable advantages in this Panel being composed entirely (or at least primarily) of members of the institution’s HREC, it must be stressed to Panel members that there are important and valid differences between academic research and evaluative practice. For this reason, a proportion of the Panel members should not be drawn from the HREC and should instead be experienced in the conduct of evaluative practice.

A template for the institutional policy and suggestions for the associated guidance material can be found in the AHRECS subscribers’ area at https://www.patreon.com/posts/25446938. Available to USD15/month patrons.

 

Contributors
Dr Gary Allen, Senior consultant AHRECS | Profile | gary.allen@ahrecs.com
Dr Mark Israel, Senior consultant AHRECS | Profile | mark.israel@ahrecs.com
Prof. Colin Thomson AM, Senior consultant AHRECS | Profile | colin.thomson@ahrecs.com

This post may be cited as:
Allen, G., Israel, M. and Thomson, C. (18 March 2019) Institutional approaches to evaluative practice. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/institutional-approaches-to-evaluative-practice

 

The Ethics of Evaluation Research

 

Evaluation research is used to assess the value of such things as services, interventions, and policies. The term ‘evaluation research’ makes it seem homogeneous but in fact evaluation research draws on a range of theoretical perspectives and a wide variety of quantitative and qualitative methods. However, there are three things evaluation research usually does that set it apart from other kinds of research. It:

  1. asks what is working well and where and how improvements could be made;
  2. involves stakeholders; and
  3. offers practical recommendations for action.

The American Evaluation Association (AEA), with members from over 60 countries, has five ‘guiding principles’ which ‘reflect the core values of the AEA’ (2018):

Systematic inquiry: evaluators conduct data-based inquiries that are thorough, methodical, and contextually relevant.

Competence: evaluators provide skilled professional services to stakeholders.

Integrity: evaluators behave with honesty and transparency in order to ensure the integrity of the evaluation.

Respect for people: evaluators honour the dignity, well-being, and self-worth of individuals and acknowledge the influence of culture within and across groups.

Common good and equity: evaluators strive to contribute to the common good and advancement of an equitable and just society.

The question of how research ethics review processes should engage with evaluation research has not yet been definitively decided in many research institutions in Australia and New Zealand. Helen Kara’s article alerts us to the degree to which evaluation researchers encounter novel ethical issues. We shall explore some of the possible institutional approaches in a forthcoming Patreon resource.

The AEA document is unusual in being thorough – there is much more explanation in the document than the principles quoted above – and up to date. The Australasian Evaluation Society (AES) has Guidelines for the Ethical Conduct of Evaluations, which were last revised in 2013. This is a much more discursive document – 13 pages to the AEA’s four – which offers guidance to evaluation commissioners as well as evaluation researchers. The AES guidelines also refer to and include Indigenous ethical principles and priorities. In particular, reciprocity is highlighted as a specific principle to be followed. This is another difference from the AEA document, in which Indigenous evaluation and evaluators are not mentioned.

The United Nations Evaluation Group also specifies evaluation principles in its ethical guidelines (2008) but they are 10 years older than the AEA’s. Beyond these, there are few codes of ethics, or equivalent, readily available from national and international evaluation bodies. Also, evaluation research rarely comes within the purview of human research ethics committees unless it’s being conducted within a university or a health service. And books on evaluation research rarely mention ethics.

Recent research has shown that a proportion of evaluation researchers will assert that ethics does not apply to evaluation and that they have never encountered ethical difficulties in their work (Morris, 2015, p.32; Williams, 2016, p.545). This seems very odd to me, as I have been doing evaluation research for the last 20 years and I have encountered ethical difficulties in every project. It also seems worrying as I wonder whether the next generation of evaluation researchers are learning to believe that they do not need to think about ethics.

In my recent book, Research Ethics in the Real World (2018), I demonstrated that ethical issues exist at all stages of the research process, from the initial idea for a research question up to and including aftercare. This applies to evaluation research just as much as it does to any other kind of research. I also demonstrated that there are some ethical considerations at the macro level for evaluation research, such as funding, stakeholder involvement, and publishing.

Well-funded organisations or projects can allocate money for evaluation; poorly-funded ones cannot. This means that evaluation research is routinely done where funding is available rather than where evaluation is most needed. In the United Kingdom, where I am based, we have been undergoing an ideological programme of austerity involving massive cuts to public services over the last nine years. This has come from successive governments that have also prioritised evaluation research, funding expensive national ‘What Works’ centres on themes such as ageing, health, and childhood, right through the austerity years. Yet to the best of my knowledge there has been no evaluation of the impact of any service closure. This seems short-sighted at best – though it does illustrate my point that evaluation happens where money is being spent. Also, an explicit purpose of evaluation research is often to provide evidence to use in future funding negotiations, which means that results are effectively expected to be positive. This means that pressures associated with funding can introduce bias into evaluation research right from the start. Combine this with an evaluator who needs to be paid for their work in order to pay their own bills, and you have a situation that is well on its way to being a money-fuelled toxic mess.

Involving stakeholders is a key principle of evaluation research. The AEA define ‘stakeholders’ as ‘individuals, groups, or organizations served by, or with a legitimate interest in, an evaluation including those who might be affected by an evaluation’ and suggest that evaluators should communicate with stakeholders about all aspects of the evaluation (2018). Again, here, the use of a single word implies homogeneity when in fact evaluation stakeholders may range from Government ministers to some of the most marginalised people in society. This can make involving them difficult: some will be too busy to be involved, some will be impossible to find, and some will not want to be involved. Which leaves evaluators caught between an impractical principle and an unprincipled practice. There is some good practice in stakeholder involvement (Cartland, Ruch-Ross and Mason, 2012:171-177), but there is also a great deal of tokenism which is not ethical (Kara, 2018:63). Also, even when all groups of stakeholders are effectively engaged, this can bring new ethical problems. For example, their values and interests may be in conflict which can be challenging to manage, particularly alongside the inevitable power imbalances. Even if stakeholders work well together such that power imbalances are reduced within the evaluation, it is unlikely those reductions will carry over into the wider world.

Commissioners of evaluation are reluctant to publish reports unless they are overwhelmingly positive. I had an example of this some years ago when I evaluated an innovative pilot project tackling substance misuse. From the start my client said they were keen to publish the evaluation report. I worked with stakeholders to collect and analyse my data and made around 10 recommendations, all but one of which said words to the effect of ‘good job, carry on’. Just one recommendation offered constructive criticism of one aspect of the project and made suggestions for improvement. My client asked me to remove that recommendation; I thought about it carefully but in the end refused because it was fully supported by the evaluation data. We had two more meetings about it and in the end, my client decided that they would not publish the report. This was unfortunate because others could have learned from the evaluation findings and methods, and because failure to publish increases the risk of work being duplicated, which wastes public funds. Sadly, as a commissioned researcher, I had signed away my intellectual property so it was out of my hands. Everyone involved in evaluation research can tell these kinds of tales. However, it is too simplistic to suggest that publication should always be a requirement. In some cases, publication could be harmful, such as when a critical evaluation might lead to service closure as an economy measure, to the detriment of service users and staff, rather than to more resource-intensive improvements in policy and practice. But overall, unless there is a good reason to withhold a report, publication is the ethical route.

As the AEA principles suggest, evaluation researchers are in a good position to help increase social justice by influencing evaluation stakeholders to become more ethical. I would argue that there are several compelling reasons, outlined above, why all evaluation researchers should learn to think and act ethically.

References

American Evaluation Association (2018) Guiding Principles. Washington, DC: American Evaluation Association.

Australasian Evaluation Society (2013) Guidelines for the Ethical Conduct of Evaluations. www.aes.asn.au

Cartland, J., Ruch-Ross, H. and Mason, M. (2012) Engaging community researchers in evaluation: looking at the experiences of community partners in school-based projects in the US. In Goodson, L. and Phillimore, J. (eds) Community Research for Participation: From Theory to Method, pp 169-184. Bristol, UK: Policy Press.

Kara, H. (2018) Research Ethics in the Real World: Euro-Western and Indigenous Perspectives. Bristol, UK: Policy Press.

Morris, M. (2015) Research on evaluation ethics: reflections and an agenda. In Brandon, P. (ed) Research on evaluation: new directions for evaluation, 31–42. Hoboken, NJ: Wiley.

United Nations Evaluation Group (2008) UNEG Ethical Guidelines for Evaluation. http://www.unevaluation.org/document/detail/102

Williams, L. (2016) Ethics in international development evaluation and research: what is the problem, why does it matter and what can we do about it? Journal of Development Effectiveness 8(4) 535–52. DOI: 10.1080/19439342.2016.1244700.

Recommended reading

Morris, M. (ed) (2008) Evaluation Ethics for Best Practice: Cases and Commentaries. New York, NY: The Guilford Press.

Donaldson, S. and Picciotto, R. (eds) (2016) Evaluation for an Equitable Society. Charlotte, NC: Information Age Publishing, Inc.

Contributor
Helen Kara, Director, We Research It Ltd | Profile | helen@weresearchit.co.uk

This post may be cited as:
Kara, H. (26 January 2019) The Ethics of Evaluation Research. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/the-ethics-of-evaluation-research

REAlising a collegiate Research Ethics Adviser network

 

By
Dr Gary Allen | Senior Consultant AHRECS | gary.allen@ahrecs.com
Dr Mark Israel | Senior Consultant AHRECS | mark.israel@ahrecs.com

Our research ethics consultancy activity in recent years has involved us working with a broad range of research institutions. Despite diversity in size, budget, age, geographical reach and mission, in some respects institutions face similar challenges, frustrations and risks. In relation to research ethics, the recurrent themes that we have noticed include:

  1. There being insufficient time and capacity to conduct professional development activities, especially activities focussed on the needs and experiences of schools, departments, research centres and research offices.
  2. A legacy of an adversarial climate, and distrust, between researchers, research ethics reviewers and the research office (Israel et al., 2016).
  3. Serious budgetary constraints.
  4. Difficulty in recruiting new members of the research ethics committee, especially from areas that do not have a long-standing connection to human research ethics or have had difficult experiences with research ethics review. This may be compounded by university initiatives to reshape their workforce in a way that prioritises research income and outputs.
  5. Review feedback that needs to be detailed and long, but that often receives poor and aggressive responses.
  6. Difficulty in eliciting constructive, or sometimes any, response to internal or external consultations from some parts of the institution.

We have developed a strategy (Allen and Israel, 2018) that can form part of the response to these matters as part of a commitment to resourcing reflective practice. It draws on existing resources, fosters a better relationship between reviewers and researchers, helps target constructive feedback, builds the capacity of researchers to engage in ethical research, and prepares a new cohort of researchers to join the human research ethics committee.

SHORT BRIEFING PAPER ON REA NETWORKS

https://www.patreon.com/posts/24928731

Available to USD3/month patrons

A network of collegiate Research Ethics Advisers (REAs) enables a group of experienced researchers to act as a source of collegiate advice to other researchers in their area. Among the roles of a REA should be:

  1. Involvement in facilitating professional development workshops and other activities in their area. This might initially involve them introducing sessions run by the university on particular aspects of research ethics pertinent to specific disciplines, commenting on the issues raised and engaging in discussion. Eventually, the entire activity might be facilitated by the REA. This strategy distributes leadership of human research ethics and reinforces that it is important to quality research in their area, not ‘just’ a matter of complying with externally imposed rules.
  2. When applicants are sent complicated feedback, they might usefully be directed to consult their local REA before responding. This allows the review body’s long written explanations to be complemented by a more personal verbal explanation, and it should improve confidence that the applicant’s response will resolve the matter rather than requiring another round of feedback.
  3. The REA network can serve as a conduit for information between researchers and reviewers, providing early warning to an institution when clashes might arise over methodology or changes in regulation.

Having assisted a number of institutions to establish REA networks and to appoint, train and support their members, we have found the optimal appointment level to be the school/team/department level, with the number of REAs recruited from an area reflecting the number of researchers in that area who conduct human research.

In our Patreon area, we have included:

A briefing note about a standard operating procedure for a REA network (under the heading Basic Structure), which provides a plan for the establishment and operation of a collegiate network.

A subscription of USD 5/month will provide access to this material. A subscription of USD 15/month will provide access to all our Patreon materials. Contact us at Patreon@ahrecs.com to discuss.

AHRECS would of course be delighted to help you turn these shell documents into materials tailored to your institution’s needs. We are also able to assist in the establishment and professional development of a collegiate REA network. Contact us at REA@ahrecs.com to discuss.

Figure 1 – A version of this image, which is not watermarked, is available from https://www.patreon.com/ahrecs with a USD3/month subscription.

References:

Allen, G and Israel, M (2018) Moving beyond Regulatory Compliance: Building Institutional Support for Ethical Reflection in Research. In Iphofen, R and Tolich, M (eds) The SAGE Handbook of Qualitative Research Ethics. London: Sage. pp.276-288.

Israel, M, Allen, G and Thomson, C (2016) Australian Research Ethics Governance: Plotting the Demise of the Adversarial Culture. In van den Hoonaard, W and Hamilton, A (eds) The Ethics Rupture: Exploring Alternatives to Formal Research-Ethics Review. Toronto: University of Toronto Press. pp 285-316. http://www.utppublishing.com/The-Ethics-Rupture-Exploring-Alternatives-to-Formal-Research-Ethics-Review.html

This post may be cited as:
Allen, G. & Israel, M. (25 February 2019) REAlising a collegiate Research Ethics Adviser network. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/realising-a-collegiate-research-ethics-adviser-network

Consumer Co-design for End of Life Care Discharge Project

 

In this issue, we are publishing an account of an end-of-life project in whose design there are some features that add to its ethical interest. Many of us are familiar with institutional policies about consumer engagement in human research and have served on project reference groups, but perhaps have less experience with the successful – and ethical – implementation of these. This project may add some valuable understanding of these matters, including:

  • What insights do the design and information groups offer into the practice of research co-design?
  • Do those insights help to clarify the distinction between co-design and participatory action research?
  • Do those groups have advantages in demonstrating the project’s fulfilment of the ethical principles of beneficence, respect or justice?
  • Could those groups have a role in overseeing the ethical conduct of a project?
  • Given the subject of this research project, what sort of projects might make best use of groups such as those in this project?

We have invited the author and the research team to provide some follow-up reflection on issues such as these as the project progresses and is completed.

The End of Life Care Discharge Planning Project is led by Associate Professor Laurie Grealish from Griffith University. The project partners with consumers at all stages, allowing them to make a significant contribution. As part of the Queensland Health End of Life Care Strategy, Gold Coast Health is developing a process to support discharge for people near the end of life who would like to die at home. A Productivity Commission report in 2017 noted that although over 70% of Australians prefer to die at home, less than 10% do. This is attributed to the need for improvement in the transition between hospital and community care.

The outcomes of this study are expected to include: (1) an evidence-based discharge process and infrastructure to enhance the transition from hospital [medical wards] to home for end of life care; (2) end of life care information brochure for patients and their family carers; (3) stakeholder feedback to indicate that the process is feasible and satisfactory; and (4) a health service and non-government organisational partnership network to monitor the discharge process and enhance future integrated models of end of life care. Ethical approval has been granted by the Gold Coast Health Human Research Ethics Committee and Griffith University Human Research Ethics Committee.

For the research design stage, three groups were established: 1) Project reference group, 2) Project design group, and, 3) Project information group.

1. Project reference group – The aim of the project reference group is to consider the analysed data and reports from the sub-committees, provide advice on, as well as monitor, implementation strategies. This group is led by Associate Professor Laurie Grealish and has membership from a wide range of stakeholders including hospital clinicians and managers, researchers, community groups, non-government organisations and consumers.

2. Project design group – The purpose of this group is to design an evidence-based discharge process to enable people near the end of life to return home to die if this is their wish. Dr Kristen Ranse from Griffith University is the Chair of this group and the membership of the group includes representatives from Gold Coast Health, consumers, and non-government organisations.

3. Project information group – Led by Dr Joan Carlini from Griffith University, this group provides expert advice about what information people need as they consider dying at home. The group identified early on that there is an overwhelming amount of information available online and in brochures, leading to confusion. Since the group includes representatives of health care providers, non-government organisations, community groups and consumers, there has been healthy discussion. The consumers on the team led the way in selecting pertinent information and producing a draft document. This was then further modified by the committee, ensuring that the booklet is concise but also a thorough source of information for end of life care.

The next stage of the project runs from January to July 2019, with implementation, data collection and analysis, and dissemination of findings.

Contributor
Dr. Joan Carlini, Lecturer, Department of Marketing, Griffith University | Griffith University profile, LinkedIn profile (log in required), Twitter – @joancarlini |

This post may be cited as:
Carlini, J. (18 January 2018) Consumer Co-design for End of Life Care Discharge Project. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/consumer-co-design-for-end-of-life-care-discharge-project