Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS), ACN 101 321 555
The research use of online data/web 2.0 comments

 

Does it require research ethics review and specified consent?

Dr Gary Allen
AHRECS Senior Consultant

The internet is a rich source of information for researchers. On the Web 2.0 we see extensive commentary on numerous life matters, which may be of interest to researchers in a wide range of (sub)disciplines. Research interest in these matters frequently prompts the following questions: Can I use that in my project? Hasn’t that already been published? Is research ethics review required? Is it necessary to obtain express consent for the research use?

It’s important to recognise that these questions aren’t posed in isolation. Cases such as the OkCupid data release, the Ashley Madison hack, the ‘Emotional Contagion’ study and Cambridge Analytica provide a disturbing context. At a time when use of the internet and social media is startlingly high (Nielsen 2019; Australian Bureau of Statistics 2018; commentaries such as the WebAlive blog 2019), there is also significant distrust of the platforms people are using. Consequently, there are good reasons for researchers and research ethics reviewers to be cautious about the use of existing material for research, even if the terms and conditions of a site/platform specifically discuss research.

Like many ethics questions, there isn’t a single simple answer that is correct all the time. The use of some kinds of data for research may not meet the National Statement’s definition of human research. Other uses may meet that definition but be exempt from review, and so not require explicit consent. Uses that involve no more than low risk can be reviewed outside an HREC meeting, while others must be considered at a full HREC meeting.

AHRECS proposes a three-part test that can be applied to individual projects to determine whether a proposed use of internet data is human research, whether it needs ethics review, and whether explicit, project-specific consent is required. If this test is formally adopted by an institution and its research ethics committees, it would provide a transparent, consistent and predictable way to judge these matters.

You can find a word copy of the questions, as well as a png and pdf copy of the flow diagram in our subscribers’ area.

Part One of the test asks whether the content of a site or platform is publicly available. One component of this part is whether the researcher will use scraping, spoofing or hacking of the site/platform to obtain the information.

Part Two of the test relates to whether individuals have consented, whether they will be reasonably identifiable from the data and its proposed research use, and whether there are risks to those individuals. A component of this part is exploring whether a waiver of the consent requirement is needed and available (i.e. as provided for by paragraphs 2.3.9–2.3.12 of the National Statement), and whether the proposed use is lawful under any privacy regulation that applies.

Part Three of the test relates to how the proposed project relates to the national human research ethics guidelines – the National Statement – and whether there are any matters that must be considered by a human research ethics committee.  For example, Section 3 of the National Statement (2007 updated 2018) discusses some methodological matters and Section 4 some potential participant issues that must be considered by an HREC.

Any one of these parts, on its own, could determine that review and consent are required; a project can only be exempted from review if it satisfies all three.
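The structure of the three-part test can be sketched in code. The following is only an illustration of the logic described above, not an official AHRECS instrument: the predicate names are hypothetical, and each judgement they stand for requires careful human assessment in practice.

```python
def needs_ethics_review(publicly_available: bool,
                        obtained_by_scraping_or_hacking: bool,
                        individuals_identifiable: bool,
                        risk_to_individuals: bool,
                        national_statement_triggers: bool) -> bool:
    """Illustrative sketch of the three-part test. Returns True if
    ethics review (and project-specific consent) should be sought."""
    # Part One: the content must be genuinely public, and not obtained
    # by scraping, spoofing or hacking the site/platform.
    part_one = publicly_available and not obtained_by_scraping_or_hacking
    # Part Two: individuals must not be reasonably identifiable from the
    # data and its proposed use, nor exposed to risk by that use.
    part_two = not individuals_identifiable and not risk_to_individuals
    # Part Three: no Section 3/4 National Statement matters that must be
    # considered by an HREC.
    part_three = not national_statement_triggers
    # Exemption requires all three parts to be satisfied; failing any
    # one part means review is required.
    return not (part_one and part_two and part_three)
```

For example, a project scraping a platform, or one in which commenters are reasonably identifiable, would fail the test and require review.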

Even if the test indicates that review or consent is required, that doesn’t mean the research is ethically problematic, just that the project requires more considered attention.

The implication of this is that not all research based upon online comments or social media posts can be exempted from review but, conversely, not all such work must be ethically reviewed.  The approach that should be taken depends upon project-specific design matters.  A strong and justifiable institutional process will have nuanced criteria on these matters.  Failing to establish transparent and predictable policies would be a serious lapse in an important area of research.

Booklet 37 of the Griffith University Research Ethics Manual now incorporates this three-part test.

In the subscribers’ area you will find a suggested question set for the three-part test, as well as a graphic overview of the work flow for the questions.

It is recommended that institutions adopt their own version of the test, including policy positions on the use of hacked or scraped data and on the research use of material in a manner at odds with a site/platform’s rules.

References

Australian agency to probe Facebook after shocking revelation – The New Daily. Accessed 16/11/19 from https://thenewdaily.com.au/news/world/2018/04/05/facebook-data-leak-australia/

Australian Bureau of Statistics (2018) 8153.0 – Internet Activity, Australia, June 2018. Retrieved from https://www.abs.gov.au/ausstats/abs@.nsf/mf/8153.0/ (accessed 27 September 2019)

Chambers, C. (2014 01 July) Facebook fiasco: was Cornell’s study of ‘emotional contagion’ an ethics breach? The Guardian. Accessed 16/11/19 from http://www.theguardian.com/science/head-quarters/2014/jul/01/facebook-cornell-study-emotional-contagion-ethics-breach

Griffith University (Updated 2019) Griffith University Research Ethics Manual (GUREM). Accessed 16/11/19 from https://www.griffith.edu.au/research/research-services/research-ethics-integrity/human/gurem

McCook, A. (2016 16 May) Publicly available data on thousands of OKCupid users pulled over copyright claim.  Retraction Watch. Accessed 16/11/19 from http://retractionwatch.com/2016/05/16/publicly-available-data-on-thousands-of-okcupid-users-pulled-over-copyright-claim/

Nielsen (2019, 26 July) TOTAL CONSUMER REPORT 2019: Navigating the trust economy in CPG. Retrieved from https://www.nielsen.com/us/en/insights/report/2019/total-consumer-report-2019/ (accessed 27 September 2019)

NHMRC (2007 updated 2018) National Statement on Ethical Conduct in Human Research. Accessed 17/11/19 from https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2007-updated-2018

Satran, J. (2015 02 September) Ashley Madison Hack Creates Ethical Conundrum For Researchers. Huffington Post. Accessed 16/11/19 from http://www.huffingtonpost.com.au/entry/ashley-madison-hack-creates-ethical-conundrum-for-researchers_55e4ac43e4b0b7a96339dfe9?section=australia&adsSiteOverride=au

WebAlive (2019 24 June) The State of Australia’s Ecommerce in 2019 Retrieved from https://www.webalive.com.au/ecommerce-statistics-australia/ (accessed 27 September 2019).

Recommendations for further reading

Editorial (2018 12 March) Cambridge Analytica controversy must spur researchers to update data ethics. Nature. Accessed 16/11/19 from https://www.nature.com/articles/d41586-018-03856-4?utm_source=briefing-dy&utm_medium=email&utm_campaign=briefing&utm_content=20180329

Neuroskeptic (2018 14 July) The Ethics of Research on Leaked Data: Ashley Madison. Discover. Accessed 16/11/19 from http://blogs.discovermagazine.com/neuroskeptic/2018/07/14/ethics-research-leaked-ashley-madison/#.Xc97NC1L0RU

Newman, L. (2017 3 July) WikiLeaks Just Dumped a Mega-Trove of CIA Hacking Secrets. Wired Magazine. Accessed 16/11/19 from https://www.wired.com/2017/03/wikileaks-cia-hacks-dump/

Weaver, M (2018 25 April) Cambridge University rejected Facebook study over ‘deceptive’ privacy standards. TheGuardian. Accessed 16/11/19 from https://www.theguardian.com/technology/2018/apr/24/cambridge-university-rejected-facebook-study-over-deceptive-privacy-standards

Woodfield, K (ed.) (2017) The Ethics of Online Research. Emerald Publishing. https://doi.org/10.1108/S2398-601820180000002004

Zhang, S. (2016 20 May) Scientists are just as confused about the ethics of big-data research as you. Wired Magazine. Accessed 16/11/19 from http://www.wired.com/2016/05/scientists-just-confused-ethics-big-data-research/

Competing interests

Gary is the principal author of the Griffith University Research Ethics Manual (GUREM) and receives a proportion of license sales.

This post may be cited as:
Allen, G. (23 November 2019) The research use of online data/web 2.0 comments. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/the-research-use-of-online-data-web-2-0-comments

Ethics, Security and Privacy – the Bermuda Triangle of data management?

 

Malcolm Wolski and Andrew Bowness
Griffith University

 

To manage sensitive research data appropriately, ethics, security and privacy requirements need to be considered. Researchers are traditionally familiar with ethics, but often have not considered the privacy and security pieces of the puzzle. Our reasons for making this statement are:

  • IT products used in research change rapidly
  • Legislation changes rapidly, and there are jurisdictional issues
  • Most researchers are not legal or IT experts
  • Few researchers are taught enough of the basics to recognise risky behaviour

The recent revision to the Australian Code for the Responsible Conduct of Research (2018) on Management of Data and Information in Research highlights that it is not just the responsibility of a university to use best practice, but it is also the responsibility of the researcher. The responsible conduct of research includes within its scope the appropriate generation, collection, access, use, analysis, disclosure, storage, retention, disposal, sharing and re-use of data and information. Researchers have a responsibility to make themselves aware of the requirements of any relevant codes, legislation, regulatory, contractual or consent agreements, and to ensure they comply with them.

It’s a complex world

However, the environment researchers work in is becoming increasingly complex. First, privacy legislation depends on the jurisdiction of the participants. For example, a research project involving participants in Queensland is affected not only by the Australian Privacy Act but also by the Queensland version (Information Privacy Act 2009 (Qld)) and, if a participant or collaborator is an EU citizen, by the General Data Protection Regulation (GDPR).
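To make the jurisdictional layering concrete, the accumulation of regimes per participant can be sketched as a small function. This is illustrative only, not legal advice, and the attribute names are invented for the example:

```python
def applicable_privacy_regimes(participant: dict) -> list:
    """Return the privacy regimes an Australian project may need to
    consider for one participant. Illustrative sketch only: a real
    determination requires legal advice."""
    # Baseline for Australian research involving personal information.
    regimes = ["Privacy Act 1988 (Cth)"]
    # State legislation layers on top for participants in that state.
    if participant.get("state") == "QLD":
        regimes.append("Information Privacy Act 2009 (Qld)")
    # EU citizens bring the GDPR into scope regardless of location of
    # the research team.
    if participant.get("eu_citizen"):
        regimes.append("EU General Data Protection Regulation (GDPR)")
    return regimes
```

A Queensland participant who is also an EU citizen would attract all three regimes at once, which is exactly the layering described above.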

Secondly, cybersecurity and information security activities in universities have increased dramatically in recent times because of publicised data breaches and the impact of data breach legislation. If your research involves foreign citizens, you may also find foreign legislation impacting the type of response required.

Thirdly, funding agencies, such as government departments, are increasingly specifying security and privacy requirements in tender responses and contracts.

These developments are affecting research project governance and practices, particularly for projects where the researcher has identified that they are working with sensitive data. While the conversation typically focuses on data identified as sensitive under the privacy acts (e.g. personally identifiable information under the Australian Privacy Act), researchers handle a range of data they may wish to treat as sensitive, whether for contractual reasons (e.g. participant consent, data sharing agreements) or for other reasons (e.g. ethical or cultural).

We have noticed an increasing trend within institutions where researchers are being required to provide more information on how they manage data as specified in a proposal or in a data sharing agreement. This typically revolves around data privacy and security, which is different from the ethics requirements.

What do “security” and “privacy” mean to the practitioner?

IT security is largely about minimising attack points, through process or IT solutions, to prevent or reduce the impact of hostile acts or of misadventure (e.g. leaving a laptop on a bus). Data security sits more in the sphere of IT than of researchers. This is reflected in which software products, systems and storage are “certified” as safe for handling and managing data classified as sensitive. IT usually also provides the identity management systems used to share data.

We have also noticed researchers relying on software vendors’ website claims about security and privacy. This is problematic because much cloud software runs from offshore facilities that do not comply with Australian privacy legislation. Unless you are an expert in both Australian legislation and cybersecurity, you need to rely on the expertise of your institutional IT and cybersecurity teams to verify vendors’ claims.

In the current environment, data privacy is more about mandated steps and activities designed to enforce a minimum set of user behaviours, to prevent harm from successful attacks or accidental data breaches. It usually involves penalties to encourage good behaviour (e.g. the sanctions for late reporting under the data breach legislation). Data privacy is typically more the responsibility of the researcher. It usually involves governance processes (e.g. who has been given access to what data) or practices (e.g. what software products the team actually uses to share and store data).

What we should be worrying about

The Notifiable Data Breaches Statistics Report: 1 April to 30 June 2019 highlighted that, of 254 notifications, only 4% of breaches were due to system faults, while 34% were due to human error and 62% to malicious or criminal acts. On these statistics, the biggest risk of a data breach arises when data is in the hands of the end-user (i.e. the researcher), not within the IT systems themselves.

We argue the risks are also greater in research than in the general population because of factors such as the diversity of data held (e.g. data files, images, audio), the fluidity of team membership, teams often spanning departmental and institutional boundaries, the mobility of staff, offsite data collection activities, and the range of IT products needed in the research process.

For this discussion, the focus is on governance and practice within the research project team, and on how these relate back to the ethics requirements once a project has been identified as involving sensitive data.

Help!!

We have worked closely with researcher groups for many years and have noticed a common problem. Researchers are confronted with numerous legislative, regulatory, policy and contractual requirements, all written in terminology and language that bears little resemblance to what happens in practice. For example, to comply with legislation:

  • What does sending a data file “securely” over the internet actually look like in practice, and which IT products are “safe”?
  • Is your university-provided laptop, with the standard institutional image, certified as “safe” for data classified as private? How do you know?
  • Is your mobile phone a “safe” technology for recording interviews or images classified as private data? What is a “safe” technology for field work?

Within the university sector a range of institutional business units provide support services. For example, IT may provide advice assessing the security and privacy compliance of software, networked equipment or hardware infrastructure and the library may provide data management advice covering sensitive data. At our institution, Griffith University, the eResearch Services and the Library Research Services teams have been working closely with research groups to navigate their way through this minefield to develop standard practices fit for their purpose.

What we think is the best way forward

Our approach is to follow the Five Safes framework which has also been adopted by the Office of the National Data Commissioner. For example:

  • Safe People: Is the research team member appropriately authorised to access and use the specified data? That is, do you have a documented data access plan mapped to team roles, and a governance/induction process for gaining access to restricted data?
  • Safe Projects: Is the data to be used for an appropriate purpose? Do you have copies of the underlying data sharing/consent agreements, contracts, and documents outlining ownership and licensing rights?
  • Safe Settings: Does the access environment prevent unauthorised use? Do IT systems and processes support this, and are access levels checked regularly?
  • Safe Data: Has appropriate and sufficient protection been applied to the data? What is that protection, and is it commensurate with the level of risk involved?
  • Safe Outputs: Are the statistical results non-disclosive, and have you checked rights/licensing issues?
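As a practical illustration (our own shorthand, not part of the official Five Safes materials), a project team could track the five dimensions as a simple checklist and flag the ones still needing attention:

```python
# The five dimensions of the Five Safes framework, in shorthand.
FIVE_SAFES = ["people", "projects", "settings", "data", "outputs"]

def unresolved_safes(assessment: dict) -> list:
    """Given an assessment like {"people": True, "data": False, ...},
    return the dimensions not yet satisfied. A missing key is treated
    as unsatisfied, so nothing passes by default."""
    return [safe for safe in FIVE_SAFES if not assessment.get(safe, False)]
```

A team could run this at each project milestone; any dimension it returns still needs work before data is accessed or results are released.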

Expect to see a lot more of the Five Safes approach in the coming years.


Contributors

Malcolm Wolski, Director eResearch Services, Griffith University

Andrew Bowness, Manager, Support Services, eResearch Services, Griffith University

This post may be cited as:
Wolski, M. and Bowness, A. (29 September 2019) Ethics, Security and Privacy – the Bermuda Triangle of data management?. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/ethics-security-and-privacy-the-bermuda-triangle-of-data-management

Complainant anonymity in misconduct proceedings depends on the forum

 

Prof. Colin Thomson AM, Senior Consultant, AHRECS

This news item, while identifying the fact that the decision relates to court proceedings and not to university processes, leaves out some informative facts.

Two members of the La Trobe academic staff lodged complaints about bullying by Professor Keyzer, whom the university suspended. In turn, Professor Keyzer commenced proceedings against the university in the Federal Court to challenge the way it had handled the complaints. The complainants were not parties to these proceedings. However, they sought to intervene in the case (Keyzer v La Trobe University [2019] FCA 646) to request that the court order that their names not be published. The complainants were represented at the hearing; the university and Professor Keyzer were not.

The court needed to decide whether to allow the complainants to intervene in the case and, if they were allowed, whether there was a case to suppress their names in the court hearing and record.  The court allowed them to intervene but did not order suppression of their names.

The question of suppression of the complainants’ names raised, and was ultimately decided on, the fundamental difference between proceedings in institutional investigations and those in superior courts.  That difference is that publicity of court proceedings is seen to be central to the administration of justice in Australia, and characteristic of the English common law tradition that informs Australian court proceedings.

In concluding his comprehensive judgement, which contains a thorough account of the open justice principle at stake and the exceptions that have been permitted, Anastassiou J said:

I echo the sympathy expressed by Mahoney JA for the “great pain” that is often felt by those subjected to publicity surrounding court proceedings. However, the power conferred by s 37AF is constrained by the grounds under s 37AG and by the overlay of priority to be given to the public interest served by open justice pursuant to s 37AE. In my view, s 37AG(1)(a) makes clear that the public interest served by open justice may only be qualified where it is necessary in the strictest sense to prevent prejudice to the proper administration of justice. The legitimate personal interest of the interveners in maintaining their privacy in connection with the complaints process is not sufficient to conclude that the protection of their interests is necessary to prevent prejudice to the administration of justice.

The following is the text of sections referred to:

37AE  Safeguarding public interest in open justice

In deciding whether to make a suppression order or non-publication order, the Court must take into account that a primary objective of the administration of justice is to safeguard the public interest in open justice.

37AF  Power to make orders

(1)  The Court may, by making a suppression order or non-publication order, on grounds permitted by this Part, prohibit or restrict the publication or other disclosure of:

(a) information tending to reveal the identity of or otherwise concerning any party to or witness in a proceeding before the Court or any person who is related to or otherwise associated with any party to or witness in a proceeding before the Court; or

(b) information that relates to a proceeding before the Court and is:

(i)   information that comprises evidence or information about evidence; or

(ii)   information obtained by the process of discovery; or

(iii)   information produced under a subpoena; or

(iv)   information lodged with or filed in the Court.

(2)  The Court may make such orders as it thinks appropriate to give effect to an order under subsection (1).

37AG  Grounds for making an order

(1)  The Court may make a suppression order or non-publication order, on one or more of the following grounds:

(a)  the order is necessary to prevent prejudice to the proper administration of justice;

(b)  the order is necessary to prevent prejudice to the interests of the Commonwealth or a State or Territory in relation to national or international security;

(c)  the order is necessary to protect the safety of any person;

(d)  the order is necessary to avoid causing undue distress or embarrassment to a party to or witness in a criminal proceeding involving an offence of a sexual nature (including an act of indecency).

(2)  A suppression order or non-publication order must specify the ground or grounds on which the order is made.

Concluding observation

Within the scope of a university’s own investigation or disciplinary procedures, the assurance of confidentiality in internal procedures and policies can be relied upon. However, when proceedings in relation to staff misconduct are brought in an Australian superior court, such as the Federal Court or a State Supreme Court, the priorities among principles change. In those courts, the principle of preserving the “proper administration of justice” is fundamental and has priority over the principles that governed the conduct of the institutional proceedings. As this case illustrates, the grounds on which exceptions can be made to that principle, such as orders suppressing the name of a person, are few.

This post may be cited as:
Thomson, C. (24 May 2019) Complainant anonymity in misconduct proceedings depends on the forum. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/complainant-anonymity-in-misconduct-proceedings-depends-on-the-forum

The Ethics of Evaluation Research

 

Evaluation research is used to assess the value of such things as services, interventions, and policies. The term ‘evaluation research’ makes it seem homogeneous but in fact evaluation research draws on a range of theoretical perspectives and a wide variety of quantitative and qualitative methods. However, there are three things evaluation research usually does that set it apart from other kinds of research. It:

  1. asks what is working well and where and how improvements could be made;
  2. involves stakeholders; and
  3. offers practical recommendations for action.

The American Evaluation Association (AEA), with members from over 60 countries, has five ‘guiding principles’ which ‘reflect the core values of the AEA’ (2018):

Systematic inquiry: evaluators conduct data-based inquiries that are thorough, methodical, and contextually relevant.

Competence: evaluators provide skilled professional services to stakeholders.

Integrity: evaluators behave with honesty and transparency in order to ensure the integrity of the evaluation.

Respect for people: evaluators honour the dignity, well-being, and self-worth of individuals and acknowledge the influence of culture within and across groups.

Common good and equity: evaluators strive to contribute to the common good and advancement of an equitable and just society.

The question of how research ethics review processes should engage with evaluation research has not yet been definitively decided in many research institutions in Australia and New Zealand. Helen Kara’s article alerts us to the degree to which evaluation researchers encounter novel ethical issues. We shall explore some of the possible institutional approaches in a forthcoming Patreon resource.

The AEA document is unusual in being thorough – there is much more explanation in the document itself – and up to date. The Australasian Evaluation Society (AES) has Guidelines for the Ethical Conduct of Evaluations, last revised in 2013. This is a much more discursive document – 13 pages to the AEA’s four – which offers guidance to evaluation commissioners as well as evaluation researchers. The AES guidelines also refer to and include Indigenous ethical principles and priorities; in particular, reciprocity is highlighted as a specific principle to be followed. This is another difference from the AEA document, in which Indigenous evaluation and evaluators are not mentioned.

The United Nations Evaluation Group also specifies evaluation principles in its ethical guidelines (2008), but they are 10 years older than the AEA’s. Beyond these, there are few codes of ethics, or equivalent, readily available from national or international evaluation bodies. Evaluation research also rarely comes within the purview of human research ethics committees unless it is being conducted within a university or a health service. And books on evaluation research rarely mention ethics.

Recent research has shown that a proportion of evaluation researchers assert that ethics does not apply to evaluation and that they have never encountered ethical difficulties in their work (Morris, 2015, p.32; Williams, 2016, p.545). This seems very odd to me: I have been doing evaluation research for the last 20 years and have encountered ethical difficulties in every project. It is also worrying, as I wonder whether the next generation of evaluation researchers is learning to believe that they do not need to think about ethics.

In my recent book, Research Ethics in the Real World (2018), I demonstrated that ethical issues exist at all stages of the research process, from the initial idea for a research question up to and including aftercare. This applies to evaluation research just as much as it does to any other kind of research. I also demonstrated that there are some ethical considerations at the macro level for evaluation research, such as funding, stakeholder involvement, and publishing.

Well-funded organisations or projects can allocate money for evaluation; poorly-funded ones cannot. This means that evaluation research is routinely done where funding is available rather than where evaluation is most needed. In the United Kingdom, where I am based, we have been undergoing an ideological programme of austerity involving massive cuts to public services over the last nine years. This has come from successive governments that have also prioritised evaluation research, funding expensive national ‘What Works’ centres on themes such as ageing, health, and childhood, right through the austerity years. Yet to the best of my knowledge there has been no evaluation of the impact of any service closure. This seems short-sighted at best – though it does illustrate my point that evaluation happens where money is being spent. Also, an explicit purpose of evaluation research is often to provide evidence to use in future funding negotiations, which means that results are effectively expected to be positive. This means that pressures associated with funding can introduce bias into evaluation research right from the start. Combine this with an evaluator who needs to be paid for their work in order to pay their own bills, and you have a situation that is well on its way to being a money-fuelled toxic mess.

Involving stakeholders is a key principle of evaluation research. The AEA defines ‘stakeholders’ as ‘individuals, groups, or organizations served by, or with a legitimate interest in, an evaluation including those who might be affected by an evaluation’ and suggests that evaluators should communicate with stakeholders about all aspects of the evaluation (2018). Again, the use of a single word implies homogeneity when in fact evaluation stakeholders may range from Government ministers to some of the most marginalised people in society. This can make involving them difficult: some will be too busy to be involved, some will be impossible to find, and some will not want to be involved. This leaves evaluators caught between an impractical principle and an unprincipled practice. There is some good practice in stakeholder involvement (Cartland, Ruch-Ross and Mason, 2012: 171–177), but there is also a great deal of tokenism, which is not ethical (Kara, 2018: 63). Even when all groups of stakeholders are effectively engaged, this can bring new ethical problems. For example, their values and interests may conflict, which can be challenging to manage, particularly alongside the inevitable power imbalances. And even if stakeholders work so well together that power imbalances are reduced within the evaluation, it is unlikely those reductions will carry over into the wider world.

Commissioners of evaluation are often reluctant to publish reports unless they are overwhelmingly positive. I had an example of this some years ago when I evaluated an innovative pilot project tackling substance misuse. From the start my client said they were keen to publish the evaluation report. I worked with stakeholders to collect and analyse my data and made around ten recommendations, all but one of which said words to the effect of ‘good job, carry on’. Just one recommendation offered constructive criticism of one aspect of the project and made suggestions for improvement. My client asked me to remove that recommendation; I thought about it carefully but in the end refused, because it was fully supported by the evaluation data. We had two more meetings about it, and in the end my client decided not to publish the report. This was unfortunate, because others could have learned from the evaluation findings and methods, and because failure to publish increases the risk of work being duplicated, wasting public funds. Sadly, as a commissioned researcher, I had signed away my intellectual property, so it was out of my hands. Everyone involved in evaluation research can tell these kinds of tales. However, it is too simplistic to suggest that publication should always be a requirement. In some cases publication could be harmful, such as when a critical evaluation might lead to the economy of service closure rather than to more resource-intensive improvements in policy and practice, to the detriment of service users and staff. But overall, unless there is a good reason to withhold a report, publication is the ethical route.

As the AEA principles suggest, evaluation researchers are in a good position to help increase social justice by influencing evaluation stakeholders to become more ethical. I would argue that there are several compelling reasons, outlined above, why all evaluation researchers should learn to think and act ethically.

References

American Evaluation Association (2018) Guiding Principles. Washington, DC: American Evaluation Association.

Australasian Evaluation Society (2013) Guidelines for the Ethical Conduct of Evaluations. www.aes.asn.au

Cartland, J., Ruch-Ross, H. and Mason, M. (2012) Engaging community researchers in evaluation: looking at the experiences of community partners in school-based projects in the US. In Goodson, L. and Phillimore, J. (eds) Community Research for Participation: From Theory to Method, pp 169-184. Bristol, UK: Policy Press.

Kara, H. (2018) Research Ethics in the Real World: Euro-Western and Indigenous Perspectives. Bristol, UK: Policy Press.

Morris, M. (2015) Research on evaluation ethics: reflections and an agenda. In Brandon, P. (ed) Research on evaluation: new directions for evaluation, 31–42. Hoboken, NJ: Wiley.

United Nations Evaluation Group (2008) UNEG Ethical Guidelines for Evaluation. http://www.unevaluation.org/document/detail/102

Williams, L. (2016) Ethics in international development evaluation and research: what is the problem, why does it matter and what can we do about it? Journal of Development Effectiveness 8(4) 535–52. DOI: 10.1080/19439342.2016.1244700.

Recommended reading

Morris, M. (ed) (2008) Evaluation Ethics for Best Practice: Cases and Commentaries. New York, NY: The Guilford Press.

Donaldson, S. and Picciotto, R. (eds) (2016) Evaluation for an Equitable Society. Charlotte, NC: Information Age Publishing, Inc.

Contributor
Helen Kara, Director, We Research It Ltd | helen@weresearchit.co.uk

This post may be cited as:
Kara, H. (26 January 2019) The Ethics of Evaluation Research. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/the-ethics-of-evaluation-research
