ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


Ethical Self-Assessment: Excellence in Reflexivity or Corporatisation Gone Mad?

Research ethics and integrity have always been at the forefront of my work, not only because the issues which I explore (self-injury, disability, gender and sexuality) are personal, sensitive and often stigmatised topics, but also because as a disabled, feminist researcher I have first-hand experience of the ways in which power, inequality and appropriation are often enmeshed in research methods and outputs. Conventional ethical protocols which originate in medical guidelines struggle to fully grasp and incorporate such ethical issues, as well as the dilemmas which emerge from social research more broadly. Ethical protocols rarely prompt a researcher to critically examine how issues such as power and marginalisation play out in social research, or even how to address specific issues emerging from their own project, such as how to respond to requests for specific information, as in Ann Oakley’s (1981) now-famous research with first-time mothers. Ethical review more often consists of tick-box protocols, which ultimately function to restrict who and what can be researched rather than to promote ethical skills, competencies and practices (see Inckle, 2015).

This mismatch between my own ethical sensibilities and the conventions of research ethics was so vast that, during my PhD research, I struggled to conceive how any research could ever be fully ethical and I became stymied with anxiety and doubt (see Inckle, 2007). Happily, since then, I have joined a research ethics committee, taught research methods and ethics, conducted, supervised and even participated in social research. As a result, I have become more reconciled with (although no less sensitive to) the possibilities of research being both an ethical and positive experience for all those involved – albeit when based on reflexive ethical sensibilities rather than rigid, pre-defined protocols.

Nonetheless, when I joined my current institution and discovered that ethical review operated on a self-assessment basis (http://www.lse.ac.uk/intranet/researchAndDevelopment/researchDivision/policyAndEthics/ethicsGuidanceAndForms.aspx), my first response was to laugh, a lot. Isn’t the whole point of ethical review, I chortled, to provide oversight and accountability via external reviewer/s? How does simply completing a self-assessment form ensure ethical competency? Isn’t this just another example of the corporatised university gone mad, where academics take on more and more administrative duties in a role of ever-increasing responsibilities and ever-diminishing autonomy?

However, with time, reflection and some experience – all of which are important ethical competencies! – my perspective on ‘ethical self-assessment’ has radically shifted. Firstly, self-assessment is not really a full description of this ethical review process. Student researchers require formal ethical validation from their supervisor, who acts as a proxy for the institution in granting approval and, in the case of staff research projects, the line-manager takes on this role. Furthermore, in certain situations, such as when required by an external funder or participating body, the researcher is compelled to present their work before a university ethics committee proper.

Secondly, while the ethical ‘self-assessment’ form requires the respondent to answer a number of fairly standard questions about their research project – including whether deception will be used, whether the participants are ‘vulnerable’, and whether sensitive/personal issues will be explored – the process nonetheless allows for nuanced and discipline-specific accountability. For example, rather than a ‘yes’ to any of these questions rendering the research unethical and in need of redesign, the researcher is invited to complete another section of the form providing further information which contextualises the project and outlines protective protocols. What is most important is that these justifications and protections are reviewed in a discipline-specific context, thus moving the entire process away from universalised assumptions and locating it within the specific field of the researcher. For example, in a medicalised context a non-clinician interviewing those who are defined as ‘vulnerable’ by virtue of their experience of disability and/or self-injury would be considered highly problematic. Similarly, an insider-researcher who shares such a ‘health’ or disability experience would be considered compromised in their role and unable to ‘objectively’ and reliably conduct the research. However, from a social sciences (and rights-based) perspective, using these kinds of labels to position certain individuals as compromised and/or inadequate researchers is in itself unethical and discriminatory.

Indeed, ethical ‘self-assessment’ has proven beneficial for my current research regarding the health, identity and social impacts of cycling for people with physical disabilities, including its impacts on their experience of themselves as able/disabled. In a standardised context it is likely that a number of ethical problems would be highlighted with this project: exploring sensitive issues amongst a ‘vulnerable’ group; an insider-researcher (I am a disabled cyclist); and quite possibly the assumption that the topic is so anomalous as to not justify the research at all – it is a commonplace assumption (especially among medical professionals) that people with physical disabilities cannot cycle, despite it being significantly easier than walking or wheelchair propulsion for many disabled people (http://www.wheelsforwellbeing.org.uk/). However, ethical ‘self-assessment’ enabled me to position myself, my research participants and the value of the research within a critical social science and rights-based perspective which locates disability as a social identity rather than an individual vulnerability. This does not mean, however, that I have avoided thinking clearly and carefully about the ethical protocols. I have taken time to consider the research and its potential impacts at the individual, social and policy levels, and to work to ensure that it is a positive and empowering experience for all those involved (including me). I have also developed my information, consent and researcher commitment forms in line with best practice in feminist and sensitive research (Byrne, 2000; Inckle, 2007; 2015).

Overall then, my experience suggests that my initial incredulous laughter at the thought of ethical self-assessment was misplaced. In an era of increasingly regimented ethical protocols which unilaterally apply limited, discipline-specific assumptions across the entire research community, and thereby curb the possibilities of who can conduct research, about which topics and with whom, discipline-specific ethical self-assessment provides a new opportunity for contextualised ethical review. This kind of approach, coupled with the nuanced, reflexive development of ethical competencies, could offer a significant way forward for ethical review in the social sciences.

References

Byrne, A (2000) Researching One An-Other, pp.140-166 in A Byrne and R Lentin (eds) (Re)Searching Women: Feminist Research Methods in the Social Sciences in Ireland. Dublin: Institute of Public Administration.

Inckle, K (2015) Promises, Promises… Lessons in Research Ethics from the Belfast Project and ‘The Rape Tape’ Case, Sociological Research Online 20(1): 6 http://www.socresonline.org.uk/20/1/6.html

Inckle, K (2007) Writing on the Body? Thinking Through Gendered Embodiment and Marked Flesh. Newcastle-upon-Tyne: Cambridge Scholars Publishing.

Oakley, A (1981) Interviewing Women: A Contradiction in Terms, pp.30-61 in H Roberts (ed) Doing Feminist Research. London: Routledge.

Contributor
Dr Kay Inckle
Course Convener in Sociology
LSE
Blog/Bio | K.A.Inckle@lse.ac.uk

This post may be cited as:
Inckle K. (2017, 24 April) Ethical Self-Assessment: Excellence in Reflexivity or Corporatisation Gone Mad? Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/ethical-self-assessment-excellence-reflexivity-corporatisation-gone-mad

Ethical use of visual social media content in research publications

At a research ethics workshop at the 2015 CSCW conference (Fiesler et al., 2015), researchers in our community respectfully disagreed about using public social media data for research without the consent of those who had posted the material. Some argued that researchers had no obligation to gain consent from each person whose data appeared in a public social media dataset. Others contended that, instead, people should have to explicitly opt in to having their data collected for research purposes. The issue of consent for social media data remains an ongoing debate among researchers. In this blog post, we tackle a much smaller piece of this puzzle, focusing on the research ethics but not the legal aspects of this issue: how should researchers approach consent when including screenshots of user-generated social media posts in research papers? Because analysis of visual social media content is a growing research area, it is important to identify research ethics guidelines.

We first discuss a few approaches to using user-generated social media images ethically in research papers. In a 2016 paper that we co-authored, we used screenshots from Instagram, Tumblr, and Twitter to exemplify our characterizations of eating disorder presentation online (Pater, Haimson, Andalibi, & Mynatt, 2016). Though these images were posted publicly, we felt uncomfortable using them in our research paper without consent from the posters. We used an opt-out strategy, in which we included content in the paper as long as people did not explicitly opt out. We contacted 17 people using the messaging systems on the social media site where the content appeared, gave them a brief description of the research project, and explained that they could opt out of their post being presented in the paper by responding to the message. We sent these messages in May 2015, and intended to remove people’s images from the paper if they responded before the paper’s final submission for publication five months later in October 2015. Out of the 17 people whom we contacted, three gave explicit permission to use their images in the paper, and the remaining 14 did not respond. Though this was sensitive content due to the eating disorder context, it did not include any identifiable pictures (e.g. a poster’s face) or usernames. While we were not entirely comfortable using content from the 14 people who did not give explicit permission, this seemed to be in line with ethical research practices within our research community (e.g. Chancellor, Lin, Goodman, Zerwas, & De Choudhury, 2016, who did not receive users’ consent to use images, but did blur any identifiable features). We ultimately decided that including the images did more good than harm, considering that our paper contributed an understanding of online self-presentation for a marginalized population, which could have important clinical and technological implications.

Another paper (Andalibi, Ozturk, & Forte, 2017) took a different approach to publishing user-generated visual content. Because the authors had no way of contacting posters, they instead created a few example posts themselves, which included features similar but not identical to the images in the dataset, to communicate the type of images they referenced in the paper. This is similar to what Markham (2012) calls “fabrication as ethical practice.”

This opt-out approach is only ethical in certain cases. For instance, it is not in line with the Australian National Statement on Ethical Conduct in Human Research (National Health and Medical Research Council, 2012), which we assume was not written with social media researchers as its primary audience. NHMRC’s Chapter 2.3 states that an opt-out approach is only ethical “if participants receive and read the information provided.” In a social media context, people may not necessarily receive and read information messaged to them. Additionally, researchers and ethics committees may not agree on whether or not these people are “participants” or whether such a study constitutes human subjects research. When using non-identifiable images, as we did in our study described above, and when the study’s benefit outweighs potential harm done to those who posted the social media content, we argue that an opt-out approach is appropriate. However, an opt-out approach becomes unethical when sensitive, personally-identifiable images are included in a research paper, as we discuss next.

While issues of consent when using social media content in research papers remain a thorny ongoing discussion, in certain instances we believe researchers’ decisions are more clear-cut. If social media content is identifiable – that is, if the poster’s face and/or name appears in the post – researchers should either get explicit consent from that person, de-identify the image (such as by blurring the photo and removing the name), or use ethical fabrication (Markham, 2012). In particular, we strongly argue that when dealing with sensitive contexts, such as stigmatized identities or health issues, a person’s face and name should not be used without permission. As an example, let’s say that a woman posts a picture of herself using the hashtag #IHadAnAbortion in a public Twitter post. A researcher may argue that this photo is publicly available and thus is also available to copy and paste into a research paper. However, this ignores the post’s contextual integrity (Nissenbaum, 2009): when taking the post out of its intended context (a particular hashtag on Twitter), the researcher fundamentally changes the presentation and the meaning of the post. Additionally, on Twitter, the poster has the agency to delete[1] the post at her discretion, a freedom that she loses when it becomes forever embedded into a research paper and all of the digital and physically distributed copies of that paper. Thus, we argue that when including identifiable social media data in papers, researchers should be obligated to receive explicit permission from the person who posted that content.

[1] Though all tweets are archived by the Library of Congress and thus not fully deletable, they are not readily accessible by the public, or even by most researchers. Furthermore, Twitter’s Terms of Service require those who collect data to periodically check for and remove deleted tweets from their datasets, though it is not clear whether this applies to the Library of Congress (Twitter, n.d.).

References:

Andalibi, N., Ozturk, P., & Forte, A. (2017). Sensitive Self-disclosures, Responses, and Social Support on Instagram: The Case of #Depression. In Proceedings of the 20th ACM Conference on Computer-Supported Cooperative Work & Social Computing. New York, NY, USA: ACM. http://dx.doi.org/10.1145/2998181.2998243

Chancellor, S., Lin, Z., Goodman, E. L., Zerwas, S., & De Choudhury, M. (2016). Quantifying and Predicting Mental Illness Severity in Online Pro-Eating Disorder Communities. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 1171–1184). New York, NY, USA: ACM. https://doi.org/10.1145/2818048.2819973

Fiesler, C., Young, A., Peyton, T., Bruckman, A. S., Gray, M., Hancock, J., & Lutters, W. (2015). Ethics for Studying Online Sociotechnical Systems in a Big Data World. In Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing (pp. 289–292). New York, NY, USA: ACM. https://doi.org/10.1145/2685553.2685558

Markham, A. (2012). Fabrication as Ethical Practice. Information, Communication & Society, 15(3), 334–353. https://doi.org/10.1080/1369118X.2011.641993

National Health and Medical Research Council. (2012, February 10). Chapter 2.3: Qualifying or waiving conditions for consent. Retrieved December 13, 2016, from https://www.nhmrc.gov.au/book/national-statement-ethical-conduct-human-research-2007-updated-december-2013/chapter-2-3-qualif

Nissenbaum, H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press.

Pater, J. A., Haimson, O. L., Andalibi, N., & Mynatt, E. D. (2016). “Hunger Hurts but Starving Works”: Characterizing the Presentation of Eating Disorders Online. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 1185–1200). New York, NY, USA: ACM. https://doi.org/10.1145/2818048.2820030

Twitter. (n.d.). Developer Agreement & Policy — Twitter Developers. Retrieved December 13, 2016, from https://dev.twitter.com/overview/terms/agreement-and-policy

The contributors:
Oliver L. Haimson (University of California, Irvine) – Email | Bio
Nazanin Andalibi (Drexel University) – Bio
Jessica Pater (Georgia Institute of Technology) – Bio

This post may be cited as:
Haimson O, Andalibi N and Pater J. (2016, 20 December) Ethical use of visual social media content in research publications. Research Ethics Monthly. Retrieved from:
https://ahrecs.com/uncategorized/ethical-use-visual-social-media-content-research-publications

We don’t need a definition of research misconduct

Responsibilities for ensuring the integrity of the research record rest with a number of players – funding agencies, governments, publishers, journal editors, institutions that conduct research and the researchers themselves. Our responsibilities for providing research that is honest and trustworthy exist from the very beginning of a research project and are ever present thereafter. If one of the players in the research ecosystem finds that research isn’t honest or can’t or shouldn’t be trusted then we have to take steps to remove it from the research record or stop it from getting there. We don’t need a definition of research misconduct in order to do that.

In fact, there isn’t a single, agreed definition of research misconduct, and this is part of the problem. Resnik et al. describe this in their 2015 paper that reviewed and categorized misconduct definitions from 22 out of the top 40 research and development funding countries. They claim that the variation in research misconduct definitions might make it harder for potential complainants to raise a concern because they can’t work out whether something might be misconduct in any particular jurisdiction. Similar research by Resnik et al. also looked at research misconduct definitions in US universities, and found that the majority go beyond the definition provided in US law, perhaps indicating that these universities recognise that there is more than falsification, fabrication and plagiarism that can impact on the honesty and trustworthiness of the research record. A ‘back of the envelope’ review of Australian research misconduct policies paints a similar picture with two broad clades – one that centres on research misconduct as a serious deviation from accepted practice and the other that requires misrepresentation. All of this means that saying Professor Y committed research misconduct doesn’t really mean much, and doesn’t tell us how the research is dishonest or untrustworthy. It stops us from making our own assessment of the trustworthiness of the research.

Many definitions also require that it can be shown that the researcher responding to the allegation committed the act of research misconduct, however defined, deliberately or intentionally or with recklessness or negligence. This ‘mental fault’ element is used to distinguish those lapses in responsible research that are honest mistakes or accidental from deliberate, mischievous attempts to deceive the users of the research output, whether that is a journal article, lab meeting presentation or grant application. The inclusion of this mental fault element also focusses the attention of those considering complaints or serving on investigation panels on the minds of the ‘accused’ – the investigations very much become concentrated on whether Professor Y was really trying to be evil and not whether the research should be trusted and allowed to have impact.

We believe that this is the fundamental question a research integrity investigation should be considering – can we trust the research and would we be happy for it to have impact?

Consideration of mental fault (mens rea if you’re a lawyer) is important when considering what disciplinary action to take, but is best not part of the rubric when considering trustworthiness, accuracy or honesty of research.

Research conduct occurs on a spectrum – from excellent research conduct at one end to research misconduct at the other. It is not only those deliberate or grossly negligent acts that cause us to question the honesty or trustworthiness of research. There are a range of behaviours that impact on the integrity of research and many of these are neither deliberate nor FFP (falsification, fabrication or plagiarism). Some of these are described in the seminal paper by Martinson et al. that reports on results of a survey of biomedical researchers. The most frequent ‘questionable research practices’ described in this paper include inadequate record keeping related to research projects (27.5% of researchers), ‘dropping observations or data based on gut feeling’ (15.3%) and ‘using inadequate or inappropriate research designs’ (13.5%). It is clear that these three QRPs will impact on the trustworthiness and accuracy of research findings, and the incidence of these QRPs is much greater than the 0.3% reported for ‘falsifying or cooking research data’. These and other QRPs fall outside of many definitions of research misconduct, and so can be overlooked by institutions that are forced, or who choose, to focus on research misconduct as defined. This leaves a broad range of activities potentially unchecked, and research on the record that perhaps really shouldn’t be.

Removing the definition of research misconduct simplifies the landscape. Investigations won’t need to consider the motivation for a departure from accepted practice or breach, but only whether the research can be trusted and should be allowed to have impact. Disciplinary action can still happen through other misconduct-related processes and this is where deliberation and intent can and should be considered. A system like this already exists. The Canadian Tri-agency Framework for Responsible Conduct of Research does not define research misconduct but instead sets out very clearly articulated principles for research integrity. A breach of these principles can trigger an investigation, and consideration of deliberation or intent is not part of the framework. The absence of a definition has not stopped Canadian funding agencies taking appropriate action. Recently, the first disclosure of an investigation was made. It names the researcher responsible and provides detail about the nature of the breach and the action taken by the funding agency involved.

Research misconduct is not a well-defined term, but a better definition is not needed and is not the solution. We need to take action to protect the integrity of the research record and stop untrustworthy or dishonest research from reaching it. We can do that just as well or even better without narrowing the scope of these considerations.

References

David B. Resnik J.D.,Ph.D., Lisa M. Rasmussen Ph.D. & Grace E. Kissling Ph.D. (2015) An International Study of Research Misconduct Policies, Accountability in Research, 22:5, 249-266, DOI: 10.1080/08989621.2014.958218

David B. Resnik J.D., Ph.D., Talicia Neal M.A., Austin Raymond B.A. & Grace E. Kissling Ph.D. (2015) Research Misconduct Definitions Adopted by U.S. Research Institutions, Accountability in Research, 22:1, 14-21, DOI: 10.1080/08989621.2014.891943

Martinson, B. C., Anderson, M. S. & de Vries, R. (2005) Scientists Behaving Badly, Nature 435, 737-738 (9 June 2005). DOI: 10.1038/435737a

Contributors
Paul M Taylor, RMIT University (bio) – paul.taylor@rmit.edu.au
Daniel P Barr, University of Melbourne (bio)- dpbarr@unimelb.edu.au

This post may be cited as:
Taylor P and Barr DP. (2016, 25 October) We don’t need a definition of research misconduct. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/dont-need-definition-research-misconduct

Abuse of prisoners in the United States

Mike Adorjan and Rose Ricciardelli’s edited collection, Engaging with Ethics in International Criminological Research, was recently published by Routledge. Of course, the book examines the usual suspects – ethical practices in relation to studies of policing, imprisonment and vulnerable populations. However, there are more unusual pieces on illuminating the Dark Net, carceral tours, and working in Hong Kong and China. My own contribution (Israel, 2016) examined the sad history of abuse of consent in research involving prisoners and prisons in the United States. It is an account of the exploitation of prisoners and a failure of criminologists to have any impact on the regulation and review of prison-based research.

Consent procedures have been created by research ethics regulators to protect research participants from abuse. In the United States, prisoners have been particularly vulnerable to the exploitative practices of researchers. However, contemporary consent procedures also stop researchers from uncovering institutional practices that exploit non-autonomous individuals. In doing so, research ethics regulation forms part of a broader strategy of self-protection established by public and private correctional services. Some scholars outside the United States have used covert research to evade prison protectionism. However, few have sought to link criminology’s understanding of state and state-corporate violence to the abuse of prisoners by researchers or extend their critique of protectionism to the work of research ethics regulators… I explore how requirements to obtain consent have been systematically evaded within prison-based research in the United States to the detriment of prisoners, but also how responses to scandal have led to the overprotection of institutions at the expense of prisoners’ ability to exercise autonomy, access justice, and benefit from the research process. Sadly, this chapter also demonstrates the apparent irrelevance of criminologists to the reform of regulation of research ethics in American prisons.

References

Israel, M (2016) A Short History of Coercive Practices: the Abuse of Consent in Research involving Prisoners and Prisons in the United States, in Adorjan, M and Ricciardelli, R (eds) Engaging with Ethics in International Criminological Research. London: Routledge. pp69-86. https://www.routledge.com/products/9781138938403

Contributor
Mark Israel is a senior consultant with AHRECS, adjunct professor of law and criminology at Flinders University and a visiting academic at The University of Western Australia.

This post may be cited as:
Israel M. (2016, 19 September) Abuse of prisoners in the United States. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/abuse-prisoners-united-states
