ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)
Ethical use of visual social media content in research publications

 

At a research ethics workshop at the 2015 CSCW conference (Fiesler et al., 2015), researchers in our community respectfully disagreed about using public social media data for research without the consent of those who had posted the material. Some argued that researchers had no obligation to gain consent from each person whose data appeared in a public social media dataset. Others contended that, instead, people should have to explicitly opt in to having their data collected for research purposes. The issue of consent for social media data remains an ongoing debate among researchers. In this blog post, we tackle a much smaller piece of this puzzle, focusing on the research ethics but not the legal aspects of this issue: how should researchers approach consent when including screenshots of user-generated social media posts in research papers? Because analysis of visual social media content is a growing research area, it is important to identify research ethics guidelines.

We first discuss a few approaches to using user-generated social media images ethically in research papers. In a 2016 paper that we co-authored, we used screenshots from Instagram, Tumblr, and Twitter to exemplify our characterizations of eating disorder presentation online (Pater, Haimson, Andalibi, & Mynatt, 2016). Though these images were posted publicly, we felt uncomfortable using them in our research paper without consent from the posters. We used an opt-out strategy, in which we included content in the paper as long as people did not explicitly opt out. We contacted 17 people using the messaging systems on the social media site where the content appeared, gave them a brief description of the research project, and explained that they could opt out of their post being presented in the paper by responding to the message. We sent these messages in May 2015 and intended to remove people’s images from the paper if they responded before the paper’s final submission for publication five months later, in October 2015. Of the 17 people we contacted, three gave explicit permission to use their images in the paper, and the remaining 14 did not respond. Though this was sensitive content due to the eating disorder context, it did not include any identifiable pictures (e.g. a poster’s face) or usernames. While we were not entirely comfortable using content from the 14 people who did not give explicit permission, this seemed to be in line with ethical research practices within our research community (e.g. Chancellor, Lin, Goodman, Zerwas, & De Choudhury, 2016, who did not receive users’ consent to use images, but did blur any identifiable features). We ultimately decided that including the images did more good than harm, considering that our paper contributed an understanding of online self-presentation for a marginalized population, which could have important clinical and technological implications.
Another paper (Andalibi, Ozturk, & Forte, 2017) took a different approach to publishing user-generated visual content. Because the authors had no way of contacting posters, they instead created a few example posts themselves, which included features similar but not identical to the images in the dataset, to communicate the type of images they referenced in the paper. This is similar to what Markham (2012) calls “fabrication as ethical practice.”

This opt-out approach is only ethical in certain cases. For instance, it is not in line with the Australian National Statement on Ethical Conduct in Human Research (National Health and Medical Research Council, 2012), which we assume was not written with social media researchers as its primary audience. NHMRC’s Chapter 2.3 states that an opt-out approach is only ethical “if participants receive and read the information provided.” In a social media context, people may not necessarily receive and read information messaged to them. Additionally, researchers and ethics committees may not agree on whether or not these people are “participants” or whether such a study constitutes human subjects research. When using non-identifiable images, as we did in our study described above, and when the study’s benefit outweighs potential harm done to those who posted the social media content, we argue that an opt-out approach is appropriate. However, an opt-out approach becomes unethical when sensitive, personally-identifiable images are included in a research paper, as we discuss next.

While consent for using social media content in research papers remains a thorny, ongoing discussion, in certain instances we believe researchers’ decisions are more clear-cut. If social media content is identifiable – that is, if the poster’s face and/or name appears in the post – researchers should either get explicit consent from that person, de-identify the image (such as by blurring the photo and removing the name), or use ethical fabrication (Markham, 2012). In particular, we argue strongly that in sensitive contexts, such as stigmatized identities or health issues, a person’s face and name should not be used without permission. As an example, suppose a woman posts a picture of herself using the hashtag #IHadAnAbortion in a public Twitter post. A researcher may argue that this photo is publicly available and thus is also available to copy and paste into a research paper. However, this ignores the post’s contextual integrity (Nissenbaum, 2009): by taking the post out of its intended context (a particular hashtag on Twitter), the researcher fundamentally changes the presentation and the meaning of the post. Additionally, on Twitter, the poster has the agency to delete[1] the post at her discretion, a freedom that she loses once it is forever embedded in a research paper and all of the digital and physically distributed copies of that paper. Thus, we argue that researchers who wish to include identifiable social media content in a paper should be obligated to obtain explicit permission from the person who posted it.

[1] Though all tweets are archived by the Library of Congress and thus not fully deletable, they are not readily accessible to the public, or even to most researchers. Furthermore, Twitter’s Terms of Service require those who collect data to periodically check for and remove deleted tweets from their datasets, though it is not clear whether this applies to the Library of Congress (Twitter, n.d.).

References:

Andalibi, N., Ozturk, P., & Forte, A. (2017). Sensitive Self-disclosures, Responses, and Social Support on Instagram: The Case of #Depression. In Proceedings of the 20th ACM Conference on Computer-Supported Cooperative Work & Social Computing. New York, NY, USA: ACM. http://dx.doi.org/10.1145/2998181.2998243

Chancellor, S., Lin, Z., Goodman, E. L., Zerwas, S., & De Choudhury, M. (2016). Quantifying and Predicting Mental Illness Severity in Online Pro-Eating Disorder Communities. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 1171–1184). New York, NY, USA: ACM. https://doi.org/10.1145/2818048.2819973

Fiesler, C., Young, A., Peyton, T., Bruckman, A. S., Gray, M., Hancock, J., & Lutters, W. (2015). Ethics for Studying Online Sociotechnical Systems in a Big Data World. In Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing (pp. 289–292). New York, NY, USA: ACM. https://doi.org/10.1145/2685553.2685558

Markham, A. (2012). Fabrication as Ethical Practice. Information, Communication & Society, 15(3), 334–353. https://doi.org/10.1080/1369118X.2011.641993

National Health and Medical Research Council. (2012, February 10). Chapter 2.3: Qualifying or waiving conditions for consent. Retrieved December 13, 2016, from https://www.nhmrc.gov.au/book/national-statement-ethical-conduct-human-research-2007-updated-december-2013/chapter-2-3-qualif

Nissenbaum, H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press.

Pater, J. A., Haimson, O. L., Andalibi, N., & Mynatt, E. D. (2016). “Hunger Hurts but Starving Works”: Characterizing the Presentation of Eating Disorders Online. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 1185–1200). New York, NY, USA: ACM. https://doi.org/10.1145/2818048.2820030

Twitter. (n.d.). Developer Agreement & Policy — Twitter Developers. Retrieved December 13, 2016, from https://dev.twitter.com/overview/terms/agreement-and-policy

The contributors:
Oliver L. Haimson (University of California, Irvine)
Nazanin Andalibi (Drexel University)
Jessica Pater (Georgia Institute of Technology)

This post may be cited as:
Haimson O, Andalibi N and Pater J. (2016, 20 December) Ethical use of visual social media content in research publications. Research Ethics Monthly. Retrieved from:
https://ahrecs.com/uncategorized/ethical-use-visual-social-media-content-research-publications

Applying Place to Research Ethics and Cultural Competence Training

 

In the 1990s, I worked with many community groups and Native American/African-American communities on the difficult challenges of understanding environmental health risks from low-level radiation contamination. These place-based communities and cultural groups were downwind from nuclear weapons production facilities that had released massive amounts of radiation, both deliberately and accidentally, since operations began during and after World War II. In the health organizing work I had conducted, I was not aware of the potential of research ethics guidelines to bring more beneficence and protection to these populations and their geographic communities. Soon after formal ethical investigations found cultural ignorance and a lack of knowledge of research ethics among many researchers involved with human radiation experiments, I decided to pursue doctoral studies to promote ethical protections for place-based communities. After receiving my PhD and conducting extensive studies of bioethical principles and their potential application to groups/communities and place, I have published new studies and practices in this area. With support from the National Institutes of Health and the National Science Foundation and their grant programs on research ethics training, I have worked with several collaborators to promote research ethics training for graduate students in environmental health/sciences, natural resource sciences and engineering (Quigley et al 2015; see NEEP website http://www.brown.edu/research/research-ethics/neep).

In this blog, I discuss how human subjects protections can be extended to protect the spatial setting and the place-based identities and meanings of individual and group human subjects in their local communities. In a recent paper (Quigley 2016a), I argued for this protection both from recommendations that already exist in bioethical guidelines (those of the National Bioethics Advisory Commission (NBAC) and the Council for International Organizations of Medical Sciences (CIOMS)) and from field studies that demonstrate important lessons for the protection of place and place-based identities. The bioethical principles of beneficence, nonmaleficence, respect for persons/respect for communities, and justice are reviewed in that article, with detailed guidance about each principle as it relates to protecting place and place-based identities.

  • Regulatory guidance exists in terms of the need for researchers to provide benefits to researched populations, to reduce exploitation, particularly of racial/cultural and resource-poor groups who are vulnerable subjects, and to allow community consultation on the risks and benefits of research designs. Many resource-poor and politically powerless communities are directly dependent on the subsistence resources of their local spatial settings. Research interventions should not harm these conditions but instead produce beneficial change. Reasonably available benefits should be determined with local representatives (health care providers, community representatives, advocacy groups, scientists and government officials). Such consultation will help to reduce harms, which is particularly relevant to indigenous groups, for whom the social risks of research can include disrespect of cultural beliefs, traditions and world views, the violation of local protocols, social stigmatization, and discriminatory harms. For example, in studies of landscape planning, academic researchers collaborated with Native community leaders to adopt community-based designs on walking/bike paths, community gardens, mixed use and conservation with housing needs (Thering 2011). Dangles et al (2010) used community consultation to ensure that environmental monitoring for pest control in Andean potato farming, and for climate and soil conditions, was conducted with community members, particularly youth, who received training on monitoring technologies; this helped to improve youth training opportunities and reduce youth migration. With community collaboration, local community-based benefits can be identified and integrated into technical research plans to improve beneficence.

Elsewhere, I have described how research interventions with cultural groups require a deep study and practice of an “environmental” cultural competence by researchers, particularly for place-based identities, meanings and past conditions (Quigley 2016b).

There are abundant field studies on new participatory approaches to field research with local communities (see the bibliographies on the NEEP website), many of which incorporate collaborative learning about place-based meanings, which then leads to research designs that produce local benefits along with technical research activities (capacity-building, skills development, youth outreach, access to critical services, local knowledge guidance about local conditions/resources). These community-based and culturally-competent interventions help to promote the “justice” principle, achieving fair representation, recruitment and fair benefits/burdens for these place-based settings. IRBs are learning more about social risks and community-based protections to ensure fairer treatment and fairer benefits, and to reduce unintended harms to researched communities.

References

Quigley, D., Sonnenfeld, D., Brown, P., Silka, L., Tian, Q., & He, L. (2015). Research Ethics Training on Place-based Communities and Cultural Groups. Journal of Environmental Studies and Sciences. Published online March 29, 2015. https://doi.org/10.1007/s13412-015-0236-x

Quigley, D. (2016a). Applying Place to Research Ethics and Cultural Competence/Humility Training. Journal of Academic Ethics. Published online 13 January 2016. Springer.

Quigley, D. (2016b). Building Cultural Competence in Environmental Studies and Natural Resource Sciences. Society and Natural Resources, 29(6), 725–737.

Contributor
Dianne Quigley, PhD is an Adjunct Assistant Professor at Brown University’s Science and Technology Studies Program and can be contacted at Dianne_Quigley_1@brown.edu

This post may be cited as:
Quigley D. (2016, 22 August) Applying Place to Research Ethics and Cultural Competence Training. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/applying-place-research-ethics-cultural-competence-training

Technology research in sensitive settings: A workshop on ethical encounters in HCI

 

In May this year, a group of researchers gathered in San Jose, California, to attend a workshop on “Ethical Encounters in HCI”. HCI is human-computer interaction, an interdisciplinary field of research that covers a broad spectrum of activities, ranging from ethnographic research that aims to understand people to inform design, to lab-based studies that aim to develop and evaluate new technologies.

Why worry about ethics in HCI?

The field of human-computer interaction emerged in the 1980s, when personal computing was in its infancy. This was a time when computers sat on desktops, usually in the workplace. Initially, the aim of this nascent field of research was to create usable and efficient systems that supported people’s work activities. Much of the work in HCI at the time was conducted in laboratory settings or the workplace, with an emphasis on reducing errors and improving efficiency as people – or ‘users’ – learnt to perform tasks using computers.

Fast-forward 30-plus years and computing has moved off the desktop and expanded into every realm of our lives. HCI, too, has expanded. No longer confined to the office or laboratory, HCI research has moved into the home and beyond, into settings where doing “ethical research” means more than getting your participants to sign a consent form (Bruckman, 2014). It is not unusual now for HCI researchers to conduct fieldwork in places like hospitals, schools, and residential care facilities, and to work closely with participants who might be considered vulnerable, such as people experiencing homelessness, chronic illness, or recent bereavement. Research in these settings can be rewarding and valuable, but also fraught with concerns about how to ensure the research is conducted in an ethical manner. In these settings, we can’t always predict and plan for every contingency, and there is not always a clear right or wrong way to proceed when researchers encounter a dilemma (Munteanu et al, 2015). In addition, HCI research might involve not only working closely with people to understand their lives, but also designing and implementing new technologies. We cannot always predict the impact these technologies will have on people’s lives and we have to be especially mindful of the possibility of unexpected negative effects when working in sensitive settings (Waycott et al, 2015). Social media, too, has highlighted the complexity of ethics in HCI and technology research; many researchers are now using publicly available social media posts as research data, sometimes to explore sensitive topics.

Workshop outcomes

With these challenges in mind, we gathered in San Jose to discuss the common ethical issues people have faced when doing this research and to explore possible ways of addressing these issues in the future. The workshop, held as part of the ACM Conference on Human Factors in Computing Systems (CHI 2016), brought together HCI researchers working in sensitive and difficult settings who wanted to communally reflect on ethical issues they had encountered in their work.

Participants included a PhD student working on designing information systems for families of children in palliative care, researchers whose work aims to preserve the “voices from the Rwanda trial” in post-genocide Rwanda, and crisis informatics researchers who analysed Twitter posts to understand the role of social media during Hurricane Sandy. Prior to the workshop, participants submitted position papers describing their “ethical encounters”, available here: https://ethicalencountershci.wordpress.com/chi-2016/position-papers-chi-2016/

The workshop aimed to provide these researchers with an opportunity to discuss the challenges they have faced, and to brainstorm potential “solutions” and ideas that might help HCI researchers navigate ethical issues in the future. Challenges included:

  • tensions between meeting institutional ethics review requirements and managing situational ethical issues that emerge during fieldwork;
  • managing both participants’ and researchers’ vulnerability and wellbeing;
  • the temporal nature of consent (should consent be a one-off procedure, or something that we revisit throughout the research process?);
  • managing participant and stakeholder expectations about the technologies we design and introduce;
  • deciding what happens at the end of the project, and managing expectations around this;
  • working with stakeholders, gatekeepers, organizations, and being aware of inter-organizational politics;
  • deciding who gets to participate and who doesn’t; and
  • dealing with sensitive (yet public) data that can trigger difficult responses for researchers, participants, and others exposed to the research.

These challenges can occur in any research that involves fieldwork in sensitive settings; but they can be exacerbated in HCI because researchers in this field may not have been trained in dealing with these issues, and because designing and introducing technology into these settings adds a layer of complexity to the research.

The workshop participants identified a number of ways of providing support to HCI researchers in the future. Suggestions included looking to other disciplines (e.g., anthropology, sociology) to see what lessons we can take from them; gathering together resources and cases from previous projects (e.g., building a database of consent forms and other documents); and developing a professional advisory group to provide guidance and to promote consideration of research ethics within the HCI community. Some of these suggestions are already being achieved through initiatives like AHRECS.

References

Bruckman, A. (2014). Research Ethics and HCI. In J. S. Olson and W. A. Kellogg (Eds). Ways of Knowing in HCI. Springer

Munteanu, C., Molyneaux, H., Moncur, W., Romero, M., O’Donnell, S., & Vines, J. (2015). Situational ethics: Re-thinking approaches to formal ethics requirements for human-computer interaction. In Proceedings of CHI 2015 (pp. 105–114). ACM Press.

Waycott, J., Wadley, G., Schutt, S., Stabolidis, A., & Lederman, R. (2015). The challenge of technology research in ‘sensitive HCI’. Paper presented at OzCHI 2015, Melbourne, Australia.

Workshop information:

https://ethicalencountershci.wordpress.com/

Waycott, J., Munteanu, C., Davis, H., Thieme, A., Moncur, W., McNaney, R., . . . Branham, S. (2016). Ethical Encounters in Human-Computer Interaction. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM.

Contributor
Jenny Waycott is a Lecturer in the Department of Computing and Information Systems at the University of Melbourne. After completing her PhD at the Institute of Educational Technology, The Open University UK, Dr Waycott has worked on several projects in the fields of human-computer interaction and educational technology. Her research is broadly concerned with understanding the role technologies play in people’s learning, work, and social activities. Her recent work has focused on the design and use of social technologies for/with older adults, ethical issues in the design and use of new technologies in sensitive settings, creative uses of new technologies for social inclusion, and the use of social technologies in higher education. For more information see: http://www.jwaycott.com/
jwaycott@unimelb.edu.au

This post may be cited as:
Waycott J. (2016, 29 July) Technology research in sensitive settings: A workshop on ethical encounters in HCI. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/technology-research-sensitive-settings-workshop-ethical-encounters-hci

When is research not research?

 

Most institutions have processes for differentiating between Quality Assurance/Quality Improvement (QA/QI) activities and those that can be considered research. Unfortunately, much of the debate about which is which has been driven by regulatory needs, because a categorization of QA/QI means a project does not require ethics committee review – a preference for many, given that even the low-risk review pathway is considered burdensome. Avoiding ethics review for bureaucratic reasons, though, is a less than satisfactory motive.

In large-scale genomics projects, a vast amount of the work being done is in the enabling technologies: the sequencing itself, as well as the computational methodologies at the heart of the bioinformatics that makes sense of the vast quantities of raw data generated. To develop robust and reliable informatics approaches, one can run simulations, but ultimately they must be tested on real data to ensure they are fit for purpose. The question then arises: is using the data generated from a person’s cancer, as well as their normal DNA sequence, for the purpose of establishing valid computational tools research? On this topic, Joly et al. (2016) provide a perspective with regard to the International Cancer Genome Consortium (ICGC), which has sequenced the cancers of more than 10,000 patients across 17 jurisdictions. The authors of the paper, of whom I am one, are members of the ICGC Ethics and Policy Committee (EPC), which provides advice to ICGC member jurisdictions on matters of ethics relating to the program.

Using two activities, both of which are effectively a means to benchmark how variants and mutations are identified in the genome, we explored how a variety of international jurisdictions viewed the activity, and whether their approaches were helpful in determining whether it was a QA/QI activity or one more properly regarded as research. Both activities were identified as carrying potential risks to confidentiality, and in both cases the investigators wished to publish their findings. For these reasons they ended up being called ‘research’ and underwent appropriate review. However, recognizing that this may create hurdles for such work that are disproportionate to the true risk of the activity, we reviewed jurisdictional approaches to this topic, as well as the literature, to see if a more helpful framework could be established to guide appropriate review.

The exercise proved particularly useful, as it shone a critical light on some of the more widely used criteria, such as generalizability, which, whilst used by many organizations and jurisdictions as a key distinction between research and QA/QI, is in fact a flawed criterion if not applied carefully. In contrast, risk to a participant stands up as an important factor that must be evaluated in all activities. Four other criteria (novelty of comparison, speed of implementation, methodology, and scope of involvement) were also reviewed for their utility in developing a useful algorithm for triaging an appropriate review pathway.

The paper proposes a two-step process in which the six identified criteria are first used to determine whether a project is more QA/QI, research, or has elements of both, followed by a risk-based assessment to determine which review pathway is used. Expedited review, or exemption from review, are options for very low-risk projects but, as the paper highlighted from a review of the pathways in four ICGC member countries (UK, USA, Canada and Australia), there is no consensus on how to apply this. The challenge therefore remains to establish more uniformity between jurisdictions in the policies that apply to risk-based evaluation of research. Nevertheless, simple categorization into QA/QI vs research is not particularly useful, and a greater emphasis on evaluation based on criteria that define risk of harm to participants is the way forward.

Further reading

Joly, Y., So, D., Osien, G., Crimi, L., Bobrow, M., Chalmers, D., Wallace, S. E., Zeps, N., & Knoppers, B. (2016). A decision tool to guide the ethics review of a challenging breed of emerging genomic projects. European Journal of Human Genetics. Advance online publication. doi:10.1038/ejhg.2015.279
Publisher: http://www.nature.com/ejhg/journal/vaop/ncurrent/full/ejhg2015279a.html
ResearchGate: https://www.researchgate.net/publication/291341753_A_decision_tool_to_guide…

NHMRC (2014) Ethical Considerations in Quality Assurance and Evaluation Activities http://www.nhmrc.gov.au/guidelines-publications/e111

Contributor
Dr. Nik Zeps
Dr. Zeps is Director of Research at St John of God Subiaco, Murdoch and Midland Hospitals. He was a member of the Australian Health Ethics Committee from 2006 to 2012 and of the Research Committee of the National Health and Medical Research Council (NHMRC) from 2009 to 2015. He is a board member of the Australian Clinical Trials Alliance and co-chair of the International Cancer Genome Consortium communication committee. His objective as Director of Research is to integrate clinical research and teaching into routine healthcare delivery to improve the lives of patients and their families.
Nikolajs.Zeps@sjog.org.au

This post may be cited as: Zeps N. (2016, 30 June) When is research not research? Research Ethics Monthly. Retrieved from:
https://ahrecs.com/human-research-ethics/research-not-research
