ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


Ethical use of visual social media content in research publications


At a research ethics workshop at the 2015 CSCW conference (Fiesler et al., 2015), researchers in our community respectfully disagreed about using public social media data for research without the consent of those who had posted the material. Some argued that researchers had no obligation to gain consent from each person whose data appeared in a public social media dataset. Others contended that people should instead have to explicitly opt in to having their data collected for research purposes. Consent for social media data remains an ongoing debate among researchers. In this blog post, we tackle a much smaller piece of this puzzle, focusing on the research ethics, rather than the legal aspects, of one question: how should researchers approach consent when including screenshots of user-generated social media posts in research papers? Because analysis of visual social media content is a growing research area, it is important to identify research ethics guidelines for this work.

We first discuss a few approaches to using user-generated social media images ethically in research papers. In a 2016 paper that we co-authored, we used screenshots from Instagram, Tumblr, and Twitter to exemplify our characterizations of eating disorder presentation online (Pater, Haimson, Andalibi, & Mynatt, 2016). Though these images were posted publicly, we felt uncomfortable using them in our research paper without consent from the posters. We used an opt-out strategy, in which we included content in the paper as long as people did not explicitly opt out. We contacted 17 people using the messaging systems on the social media site where the content appeared, gave them a brief description of the research project, and explained that they could opt out of their post being presented in the paper by responding to the message. We sent these messages in May 2015, and intended to remove people’s images from the paper if they responded before the paper’s final submission for publication five months later, in October 2015. Of the 17 people we contacted, three gave explicit permission to use their images in the paper, and the remaining 14 did not respond. Though this was sensitive content due to the eating disorder context, it did not include any identifiable pictures (e.g. a poster’s face) or usernames. While we were not entirely comfortable using content from the 14 people who did not give explicit permission, this seemed to be in line with ethical research practices within our research community (e.g. Chancellor, Lin, Goodman, Zerwas, & De Choudhury (2016), who did not obtain users’ consent to use images, but did blur any identifiable features). We ultimately decided that including the images did more good than harm, considering that our paper contributed an understanding of online self-presentation for a marginalized population, which could have important clinical and technological implications.
Another paper (Andalibi, Ozturk, & Forte, 2017) took a different approach to publishing user-generated visual content. Because the authors had no way of contacting posters, they instead created a few example posts themselves, which included features similar but not identical to the images in the dataset, to communicate the type of images they referenced in the paper. This is similar to what Markham (2012) calls “fabrication as ethical practice.”

This opt-out approach is only ethical in certain cases. For instance, it is not in line with the Australian National Statement on Ethical Conduct in Human Research (National Health and Medical Research Council, 2012), which we assume was not written with social media researchers as its primary audience. NHMRC’s Chapter 2.3 states that an opt-out approach is only ethical “if participants receive and read the information provided.” In a social media context, people may not necessarily receive and read information messaged to them. Additionally, researchers and ethics committees may not agree on whether or not these people are “participants” or whether such a study constitutes human subjects research. When using non-identifiable images, as we did in our study described above, and when the study’s benefit outweighs potential harm done to those who posted the social media content, we argue that an opt-out approach is appropriate. However, an opt-out approach becomes unethical when sensitive, personally-identifiable images are included in a research paper, as we discuss next.

While consent for using social media content in research papers remains a thorny, ongoing discussion, in certain instances we believe researchers’ decisions are more clear-cut. If social media content is identifiable – that is, if the poster’s face and/or name appears in the post – researchers should either get explicit consent from that person, de-identify the image (such as by blurring the photo and removing the name), or use ethical fabrication (Markham, 2012). In particular, we argue strongly that in sensitive contexts, such as stigmatized identities or health issues, a person’s face and name should not be used without permission. As an example, let’s say that a woman posts a picture of herself using the hashtag #IHadAnAbortion in a public Twitter post. A researcher may argue that this photo is publicly available and thus is also available to copy and paste into a research paper. However, this ignores the post’s contextual integrity (Nissenbaum, 2009): by taking the post out of its intended context (a particular hashtag on Twitter), the researcher fundamentally changes the presentation and the meaning of the post. Additionally, on Twitter, the poster has the agency to delete[1] the post at her discretion, a freedom that she loses when it becomes forever embedded in a research paper and all of the digital and physically distributed copies of that paper. Thus, we argue that researchers who wish to include identifiable social media content in a paper should be obligated to obtain explicit permission from the person who posted it.

[1] Though all tweets are archived by the Library of Congress and thus not fully deletable, they are not readily accessible by the public, and even by most researchers. Furthermore, Twitter’s Terms of Service require those who collect data to periodically check for and remove deleted tweets from their datasets, though it is not clear whether this applies to the Library of Congress (Twitter, n.d.).

References:

Andalibi, N., Ozturk, P., & Forte, A. (2017). Sensitive Self-disclosures, Responses, and Social Support on Instagram: The Case of #Depression. In Proceedings of the 20th ACM Conference on Computer-Supported Cooperative Work & Social Computing. New York, NY, USA: ACM. http://dx.doi.org/10.1145/2998181.2998243

Chancellor, S., Lin, Z., Goodman, E. L., Zerwas, S., & De Choudhury, M. (2016). Quantifying and Predicting Mental Illness Severity in Online Pro-Eating Disorder Communities. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 1171–1184). New York, NY, USA: ACM. https://doi.org/10.1145/2818048.2819973

Fiesler, C., Young, A., Peyton, T., Bruckman, A. S., Gray, M., Hancock, J., & Lutters, W. (2015). Ethics for Studying Online Sociotechnical Systems in a Big Data World. In Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing (pp. 289–292). New York, NY, USA: ACM. https://doi.org/10.1145/2685553.2685558

Markham, A. (2012). Fabrication as Ethical Practice. Information, Communication & Society, 15(3), 334–353. https://doi.org/10.1080/1369118X.2011.641993

National Health and Medical Research Council. (2012, February 10). Chapter 2.3: Qualifying or waiving conditions for consent. Retrieved December 13, 2016, from https://www.nhmrc.gov.au/book/national-statement-ethical-conduct-human-research-2007-updated-december-2013/chapter-2-3-qualif

Nissenbaum, H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press.

Pater, J. A., Haimson, O. L., Andalibi, N., & Mynatt, E. D. (2016). “Hunger Hurts but Starving Works”: Characterizing the Presentation of Eating Disorders Online. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 1185–1200). New York, NY, USA: ACM. https://doi.org/10.1145/2818048.2820030

Twitter. (n.d.). Developer Agreement & Policy — Twitter Developers. Retrieved December 13, 2016, from https://dev.twitter.com/overview/terms/agreement-and-policy

The contributors:
Oliver L. Haimson (University of California, Irvine)
Nazanin Andalibi (Drexel University)
Jessica Pater (Georgia Institute of Technology)

This post may be cited as:
Haimson O, Andalibi N and Pater J. (2016, 20 December) Ethical use of visual social media content in research publications. Research Ethics Monthly. Retrieved from:
https://ahrecs.com/uncategorized/ethical-use-visual-social-media-content-research-publications

Technology research in sensitive settings: A workshop on ethical encounters in HCI


In May this year, a group of researchers gathered in San Jose, California, to attend a workshop on “Ethical Encounters in HCI”. HCI is human-computer interaction, an interdisciplinary field of research that covers a broad spectrum of activities, ranging from ethnographic research that aims to understand people to inform design, to lab-based studies that aim to develop and evaluate new technologies.

Why worry about ethics in HCI?

The field of human-computer interaction emerged in the 1980s, when personal computing was in its infancy. This was a time when computers sat on desktops, usually in the workplace. Initially, the aim of this nascent field of research was to create usable and efficient systems that supported people’s work activities. Much of the work in HCI at the time was conducted in laboratory settings or the workplace, with an emphasis on reducing errors and improving efficiency as people – or ‘users’ – learnt to perform tasks using computers.

Fast-forward 30-plus years, and computing has moved off the desktop and expanded into every realm of our lives. HCI, too, has expanded. No longer confined to the office or laboratory, HCI research has moved into the home and beyond, into settings where doing “ethical research” means more than getting your participants to sign a consent form (Bruckman, 2014). It is not unusual now for HCI researchers to conduct fieldwork in places like hospitals, schools, and residential care facilities, and to work closely with participants who might be considered vulnerable, such as people experiencing homelessness, chronic illness, or recent bereavement. Research in these settings can be rewarding and valuable, but also fraught with concerns about how to ensure the research is conducted in an ethical manner. In these settings, we can’t always predict and plan for every contingency, and there is not always a clear right or wrong way to proceed when researchers encounter a dilemma (Munteanu et al, 2015). In addition, HCI research might involve not only working closely with people to understand their lives, but also designing and implementing new technologies. We cannot always predict the impact these technologies will have on people’s lives, and we have to be especially mindful of the possibility of unexpected negative effects when working in sensitive settings (Waycott et al, 2015). Social media, too, has highlighted the complexity of ethics in HCI and technology research; many researchers are now using publicly available social media posts as research data, sometimes to explore sensitive topics.

Workshop outcomes

With these challenges in mind, we gathered in San Jose to discuss the common ethical issues people have faced when doing this research and to explore possible ways of addressing these issues in the future. The workshop, held as part of the International Conference on Human Factors in Computing Systems (CHI 2016), brought together HCI researchers working in sensitive and difficult settings who wanted to communally reflect on ethical issues they had encountered in their work.

Participants included a PhD student working on designing information systems for families of children in palliative care, researchers whose work aims to preserve the “voices from the Rwanda trial” in post-genocide Rwanda, and crisis informatics researchers who analysed Twitter posts to understand the role of social media during Hurricane Sandy. Prior to the workshop, participants submitted position papers describing their “ethical encounters”, available here: https://ethicalencountershci.wordpress.com/chi-2016/position-papers-chi-2016/

The workshop aimed to provide these researchers with an opportunity to discuss the challenges they have faced, and to brainstorm potential “solutions” and ideas that might help HCI researchers navigate ethical issues in the future. Challenges included:

  • tensions between meeting institutional ethics review requirements and managing situational ethical issues that emerge during fieldwork;
  • managing both participants’ and researchers’ vulnerability and wellbeing;
  • the temporal nature of consent (should consent be a one-off procedure, or something that we revisit throughout the research process?);
  • managing participant and stakeholder expectations about the technologies we design and introduce;
  • deciding what happens at the end of the project, and managing expectations around this;
  • working with stakeholders, gatekeepers, organizations, and being aware of inter-organizational politics;
  • deciding who gets to participate and who doesn’t; and
  • dealing with sensitive (yet public) data that can trigger difficult responses for researchers, participants, and others exposed to the research.

These challenges can occur in any research that involves fieldwork in sensitive settings; but they can be exacerbated in HCI because researchers in this field may not have been trained in dealing with these issues, and because designing and introducing technology into these settings adds a layer of complexity to the research.

The workshop participants identified a number of ways of providing support to HCI researchers in the future. Suggestions included looking to other disciplines (e.g., anthropology, sociology) to see what lessons we can take from them; gathering together resources and cases from previous projects (e.g., building a database of consent forms and other documents); and developing a professional advisory group to provide guidance and to promote consideration of research ethics within the HCI community. Some of these suggestions are already being achieved through initiatives like AHRECS.

References

Bruckman, A. (2014). Research Ethics and HCI. In J. S. Olson and W. A. Kellogg (Eds). Ways of Knowing in HCI. Springer

Munteanu, C., Molyneaux, H., Moncur, W., Romero, M., O’Donnell, S., & Vines, J. (2015). Situational ethics: Re-thinking approaches to formal ethics requirements for human-computer interaction. In Proceedings of CHI 2015 (pp. 105–114). ACM Press.

Waycott, J., Wadley, G., Schutt, S., Stabolidis, A., & Lederman, R. (2015). The challenge of technology research in ‘sensitive HCI’. Paper presented at the OzCHI 2015, Melbourne, Australia.

Workshop information:

https://ethicalencountershci.wordpress.com/

Waycott, J., Munteanu, C., Davis, H., Thieme, A., Moncur, W., McNaney, R., . . . Branham, S. (2016). Ethical Encounters in Human-Computer Interaction. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM.

Contributor
Jenny Waycott is a Lecturer in the Department of Computing and Information Systems at the University of Melbourne. After completing her PhD at the Institute of Educational Technology, The Open University UK, Dr Waycott has worked on several projects in the fields of human-computer interaction and educational technology. Her research is broadly concerned with understanding the role technologies play in people’s learning, work, and social activities. Her recent work has focused on the design and use of social technologies for/with older adults, ethical issues in the design and use of new technologies in sensitive settings, creative uses of new technologies for social inclusion, and the use of social technologies in higher education. For more information see: http://www.jwaycott.com/
jwaycott@unimelb.edu.au

This post may be cited as:
Waycott J. (2016, 29 July) Technology research in sensitive settings: A workshop on ethical encounters in HCI. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/technology-research-sensitive-settings-workshop-ethical-encounters-hci

We would all benefit from more research integrity research


Paul M Taylor1 and Daniel P Barr2

1Director, Research Integrity, Governance and Systems
Research and Innovation, RMIT University (paul.taylor@rmit.edu.au)

2Acting Director, Office for Research Ethics and Integrity
Research, Innovation and Commercialisation, The University of Melbourne (dpbarr@unimelb.edu.au)

We need more research into research integrity, research misconduct and peer review. This is not a controversial statement, and few would argue against it. So, this is a short blog post then…

It’s worth thinking about why we think that more research into these areas is important and needed. The research that has been reported in the literature is valuable to us and has produced some fascinating insights. We see differences in attitudes in different countries and career stages, and evidence about the impacts of research misconduct. Like all good research, the material already in the literature prompts us to ask more questions than it answers.

But, do we think that the same surveys about the incidence of research misconduct or attitudes to research integrity would reveal the same results for humanities and social science researchers as those in STEM disciplines? Are biomedical researchers in Australia or the UK as likely or more likely to commit research misconduct? Do RCR training packages help prevent misconduct? Is this even what we want RCR training to do? How do we best design and implement research integrity policies? Are principles really better than rules in this context? There’s a handful of grant applications right there!

Perhaps a research integrity ecosystem view would help. What are the challenges that some of the key stakeholders in research integrity are facing and how could research help?

We can start close to home by thinking about the role of institutions in research integrity. The most obvious role of institutions in this area is in responding to allegations of research misconduct. This role is entirely reasonable because of the nature of the relationship between researchers and their workplaces – employment contracts can compel people to provide evidence, and institutions may have better access to data and records that can make the difference in allegations being properly resolved. Certainly, compared to other players, institutions are in the best position to consider concerns about the integrity of research. We know, though, that there is not uniformity in the way institutions respond. Our friends at COPE have talked about the difficulty that publishers face in sometimes even identifying a place to direct concerns. What’s the opportunity for research here? Analysis of institutions to identify traits that are found in ‘good responders’ would help those institutions trying to improve their operations in this area. How critical is the role of senior leadership? What are the impacts, at an institutional level, of a high-profile or public misconduct case? How does this impact differ for highly-ranked, ‘too big to fail’ institutions compared with younger organisations? What are the factors that people see that make them think an institution produces responsible and trustworthy research (if the institution plays that much of a role at all)?

This leads to a second and equally important role for institutions: promoting the importance of responsible and ethical research. This extends way beyond compliance (although compliance is obviously important). The products of research, as many and varied as they are, must be trustworthy because of the positive impacts that we all hope research will have. So, if an institution decided it wanted to revamp its research governance framework or Code of Conduct for Research, what should it focus on? What evidence do we have, in the research context, to support the idea of Codes of Conduct? Are high-level, principles-based documents that cover most research disciplines useful, or are more discipline-focussed, rules-based governance structures more effective? How do institutions best engender a strong culture of research integrity?

The role of training here is intuitive and probably right, but can we show that it makes a difference and results in more trustworthy, higher-quality research, or does it just make us feel better? Publishers and funders, too, could benefit from the added insights that research would reveal. Perhaps, for both of these players, a better understanding of the pitfalls of peer review, or the development of rigorous alternative models? Research into peer review is already happening, but there could and should be more. What is the best way to distribute generally decreasing pools of funds among highly competitive funding applicants? How consistent is the decision-making of grant review panels or journal editors? How influential are locations, institutions, and ‘big names’ in manuscript or grant review processes, and should all reviews be double-blind? Decisions based on peer review are intrinsic and integral to the research process. We should thoroughly understand how these processes are working and what we should do to try and make them work better.

The final group to talk about here are the researchers themselves, perhaps the most important part of the research integrity ecosystem. Given an opportunity, most researchers enjoy talking about the way research works and about their own research practice. Listening to conversations between microbiologists and historians about publication rates and funding challenges, data generation and curation, and team research or sole-trader models is fascinating. Research about attitudes towards research integrity, and how it fits (or doesn’t fit) the way researchers do their research, would be valuable. Fundamentally, researchers critically assess new or existing information to find new ideas or solutions. It should come as no surprise when the same critical assessment is applied to proposals that they reconsider the way they do their research. ‘Research integrity research’ would help to support changes in behaviour that increase the trustworthiness and quality of research. This, really, is the goal of research integrity.

There’s no shortage of questions to answer. There’s growing awareness of research integrity as a discipline in its own right (perhaps it is the ultimate interdisciplinary research area). There are new places for this research to be published (like Research Integrity and Peer Review). The benefits are compelling and clear. What are we waiting for?

*Paul is a member of the Editorial Board of Research Integrity and Peer Review. Aside from that, neither Paul nor Dan has any conflicts of interest to disclose, but they hope to in the near future.

This blog may be cited as:
Taylor P and Barr DP. (2016, 10 May) We would all benefit from more research integrity research. Research Ethics Monthly. Retrieved from https://ahrecs.com/research-integrity/benefit-research-integrity-research

A Note on the Importance of Sensitising the Novice Researcher to the Realities of Ethics in Practice


Discussions of research ethics have begun to centre increasingly on how research guidelines translate into ethical practice during the research process. In the paper which prompted the invitation to contribute to this blog (McEvoy, Enright & MacPhail, 2015), my experiences as a novice researcher conducting focus group interviews with a group of young people are illustrated and discussed. The consequence of a limited experiential base in research and not having previously read deeply on the topic of research ethics was that I encountered difficulties in recognising or determining the best course of action when faced with what Guillemin and Gillam (2004, p. 263) refer to as ‘ethically important moments’ in the research situation.

It is clear that unless researchers are sensitised to how research practices such as confidentiality and informed consent manifest in research encounters, on-the-spot decisions can undermine a research project’s ethical promises. I am certainly not suggesting that experienced researchers hold the monopoly on research ethics, or that it is not possible for novice researchers to behave ethically. Rather, due to the immediacy of ethically important moments, it is often a researcher’s instincts or reflexes that are operative. Therefore, just as certain elements become automatic with experience when we learn any skill, it is important that researchers starting out on their careers are given every opportunity to develop and challenge their ethical practice, in a way that ensures that those elements of their practice which become ingrained have the best chance of being ethically sound.

In reflecting upon the ethically important moments I encountered, and in reading the associated literature, I certainly improved my ethical sensitivity and understanding of how ethics are enacted in practice. However, from the perspective of the research participants in the given project, it was far from ideal that my learning was the product of ethical difficulties in the field. So how might novice researchers hone their skills and reflexes without exposing research participants to the possibility of ethical breaches borne of inexperience? We may certainly begin by providing research students with a wealth of examples of ethical dilemmas, discussing our research encounters with them, what we did or didn’t do, said or didn’t say, and prompting them to question what they would do or say in the given situation. Further, we can ensure that we educate novice researchers regarding the deeper thinking behind the principles of research ethics and the various ethical stances that abound (e.g. virtue ethics, relational ethics, feminist ethics, situational ethics, etc.) so that when faced with a less clear-cut ethical dilemma they will have the resources to adapt to the context by upholding the spirit of a given principle. The immediacy of the research situation requires instant decisions but that same immediacy results in the likelihood that such decisions are in fact the result of that which comes before the research situation itself. It is perhaps in the preparation that ethics is won or lost.

References:

Guillemin, M., & Gillam, L. (2004). Ethics, reflexivity, and “ethically important moments” in research. Qualitative Inquiry, 10(2), 261–280.

McEvoy, E., Enright, E., & MacPhail, A. (2015). Negotiating ‘ethically important moments’ in research with young people: Reflections of a novice researcher. Leisure Studies. https://doi.org/10.1080/02614367.2015.1119877

Eileen McEvoy
Eileen McEvoy is a PhD student at the University of Jyväskylä, Finland, and also works as a research co-ordinator in Ireland. She has co-ordinated research projects at the Physical Education, Physical Activity and Youth Sport (PEPAYS Ireland) Research Centre, as well as at various other Irish educational institutions.
epmcevoy@gmail.com

This blog may be cited as:
McEvoy, E. (2016, 22 April) A Note on the Importance of Sensitising the Novice Researcher to the Realities of Ethics in Practice. Research Ethics Monthly. Retrieved from https://ahrecs.com/human-research-ethics/note-importance-sensitising-novice-researcher-realities-ethics-practice
