ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


Research Ethics Monthly (ISSN 2206-2483)



Cracking the Code: Is the Revised Australian Code likely to ensure Responsible Conduct of Research?

 

The Australian Code for the Responsible Conduct of Research is presently under review. Issued jointly in 2007 by the National Health and Medical Research Council, the Australian Research Council and Universities Australia, the current code is a 41-page document divided into two parts. Part A, comprising some 22 pages, sets out the responsibilities of institutions and researchers for conducting sponsored research responsibly. Part B, comprising approximately 11 pages, provides advice on procedures for identifying and investigating instances of the conduct of research in which those responsibilities have not been fulfilled.

The current proposal is to replace this document with a five-page statement of eight principles of responsible research conduct and two lists of responsibilities, again of institutions and researchers, together with a 25-page guidance document (the first of several) of preferred procedures for the identification and investigation of research conduct that has not adhered to the responsibilities set out in the five-page code.

Among the innovations in these changes, other than a significant reduction in the size of the document, is the proposal that the expression ‘research misconduct’ not be used in the guide on identification and investigation but be replaced by the expression ‘breach’. An important reason given for this proposal is the avoidance of conflict with the requirements of institutional enterprise bargaining agreements (EBAs).

The scale of the proposed changes is likely to generate extensive debate, much of which will have emerged during the consultation period that closed earlier this year. The consultation process conformed to the minimal requirements of sections 12 and 13 of the NHMRC Act: a process that publicises, by formal means, drafts of proposed changes to which responses are sought. Current practice is to prefer provision of responses by electronic means and to require responders to answer questions determined by the Council. The passivity and formality of the process tend to attract and privilege better-resourced interests. In some of the published debate during the consultation period, much attention was paid to the change in scale and to the proposal to refer not to research misconduct but only to breach. This level of discussion risks ignoring several underlying systemic questions, or assuming the answers to them. The purpose of this brief opinion is to tease out those questions.

The key premise of these remarks on the existing Code and any revision is that the Code constitutes a form of regulation of research conduct. With this premise comes a centrally important question: what are the aims of the regulation of this activity?

The apparent aims of the revision are to define responsible research conduct and the relevant responsibilities, and to provide for the identification, disclosure and investigation of failures to conduct research responsibly.

Underlying these aims lie some broader and deeper considerations. These include whether the purpose to be served by regulation of research is to:

  • protect the reputation of research;
  • prevent the waste and misguided work that can follow from relying on irresponsible and inaccurate research;
  • protect the reputation of research institutions;
  • prevent the waste, or even the risk of waste, of public research funds;
  • penalise those who fail to fulfil their research responsibilities, whether the failures are on the part of institutions or individual researchers;
  • protect the public interest in research by promoting productive use of public research funds, and by rewarding responsible researchers and institutions.

It is a regulatory situation not unlike that which faced environmental protection from the 1990s onwards, and other areas such as oil rig and building safety in the UK. One lesson from these experiences is that where the aims of regulation are the protection of the environment or the safety of buildings or oil rigs, those aims are more likely to be achieved by giving those who conduct the relevant activities the opportunity to devise their own methods of meeting them, methods that are then assessed by a responsible authority against a set of standards. The shift from tight prescription of safety standards to a well-defined and supervised form of self-regulation appears to have been successful in achieving regulatory aims.

The choice among the above purposes will have a direct and profound effect on the methods to be used. For example, if the purpose were the protection of the reputation of research institutions, it would not be surprising to extend a significant degree of autonomy to institutions to set up their own procedures and methods for promoting responsible conduct and so establishing their good reputation. However, there would then be an incentive for institutions not to disclose instances of irresponsible research publicly but to manage them internally. Reliance on the need to conform to enterprise bargaining agreements might be used to justify such non-disclosure.

If the purpose were to penalise those institutions or researchers who fail to fulfil relevant responsibilities for responsible research conduct, the system would need to define those responsibilities with some precision, so that the definitions could be made enforceable, and to establish an agency with appropriate investigation powers and sanctioning authority to identify, investigate and reach decisions as to whether relevant responsibilities had or had not been fulfilled.

A relevant regulatory model may not be that of criminal prosecution but rather of corruption investigation. There is a public interest that motivates the establishment and operation of anti-corruption agencies. The outcomes of their enquiries can lay the foundation for individual punishment of those found ‘guilty’ of corrupt behaviour, and those proceedings are then taken up by different state agencies. Research integrity policy can be seen to have similar aims: first, to protect the public interest by empowering an independent agency to uncover corrupt conduct and, second, following such a finding, to prosecute individuals by a separate process. A research integrity agency could be given the task of investigating and making findings of research misconduct, leaving to the employers of those individuals the responsibility to impose consequences. Although remaining autonomous in following their own procedures, and so conforming to EBAs, institutions would find it difficult to conceal the process, because they would be implementing a public finding of research misconduct.

The debate so far appears either to have left most of these underlying questions unanswered or to have assumed answers to them. Because this has not been explicit, those answers are unlikely to be consistent. The chosen terminology discloses some of these assumptions. The responsibilities described in the five-page code are in such general form that they would present considerable difficulties if used to determine whether they had been fulfilled. For example, what evidence would constitute a failure on the part of an institution to fulfil the obligation to develop and maintain the currency and ready availability of a suite of policies and procedures which ensure that institutional practices are consistent with the principles and responsibilities of the Code? Or, what evidence would constitute a failure on the part of a researcher to fulfil the obligation to foster, promote and maintain an approach to the development, conduct and reporting of research based on honesty and integrity? The very breadth and generality of the language used in these statements suggest that the purpose is not their enforcement.

A further example is the proposal not to use the expression research misconduct in the document, but to refer to breaches of the Code. The language of breach is better applied to duties, rules or standards that are drafted with the intent of enforcement, so that it can be clear when evidence discloses a breach and when it does not. Casting the substantive document in the form of responsibilities makes this difficult. In common language, responsibilities are either fulfilled or they are not and, where they are not, it is usual to speak of a failure to fulfil the responsibility rather than of a breach. This use of language betrays a confusion of underlying purposes.

The advocates of an enforcement approach have argued for a national research integrity agency, like that in some other Western nations. There may, however, be a simpler, more politically and fiscally feasible model available.

If the underlying purposes are to protect the reputation of research as a public interest, to prevent waste and misguided work that can follow from relying on irresponsible and inaccurate research and to prevent waste or the risk of waste of public research funds, then the mode of regulation would be more likely to resource the training of researchers, the guidance of institutions in establishing appropriate research environments and the public promotion of responsible and effective research. The response to irresponsible research conduct would be directed at the withdrawal from the public arena of unsupported and inaccurate results, appropriate disclosure of these (e.g. to journal editors and research funding agencies) and appropriate apologies from responsible institutions and researchers supported with undertakings for reform of faulty procedures and practices.

In implementing these purposes, it would not be surprising for the system to give significant authority to the two public research funding agencies, the NHMRC and the ARC. This could include, for instance, authority to ensure that institutions seeking access to their funds establish appropriate procedures to ensure responsible research conduct, including sufficient and sustained training of researchers, adequate resources and research facilities, and appropriate auditing and reporting of research conduct. Agency authority could also include an entitlement to establish not only that researchers who seek or have access to research funding have research records free of irresponsibility, but also that eligible institutions do not have current employees with such records.

Access to research funding has been a potent motivator in the institutional establishment of human research ethics committees, both in the United Kingdom, as Adam Hedgecoe (2009) has shown, and in Australia where the NHMRC’s 1985 decision required institutions to establish institutional ethics committees if they wanted access to research funds with which to conduct human research. In both cases, the decisions were followed by a notable increase in the number of institutional research ethics committees.

An approach that actively promotes responsible research practice may be more likely to achieve wide conformity with good practice standards than a focus on identifying, investigating and punishing failures to meet those standards. If so, the first better practice guide would be one on how to promote the responsible conduct of research, not one on how to identify, investigate and respond to poor research conduct. Indeed, responsible institutions could pre-empt any such requirements by unilaterally setting up programs to instruct researchers in responsible conduct, training and embedding research practice advisers in strategic research disciplines, rewarding examples of responsible research that enhance both researcher and institutional reputations, and establishing a reliable and comprehensive system for keeping research records. This is an argument that Allen and Israel (in press) make in relation to research ethics.

Australia has an opportunity to adopt a constructive and nationally consistent approach to the active promotion of good research practice. It would be more likely to achieve this with a code that was neither constrained by institutional self-interest nor confined by a punitive focus.

References

Allen, G and Israel, M (in press, 2017) Moving beyond Regulatory Compliance: Building Institutional Support for Ethical Reflection in Research. In Iphofen, R and Tolich, M (eds) The SAGE Handbook of Qualitative Research Ethics. London: Sage.

Hedgecoe, A (2009) A Form of Practical Machinery: The Origins of Research Ethics Committees in the UK, 1967–1972. Medical History, 53: 331–350.

Contributor
Prof Colin Thomson is one of the Senior Consultants at AHRECS. You can view his biography and contact him at colin.thomson@ahrecs.com.

This post may be cited as:
Thomson C. (2017, 22 May) Cracking the Code: Is the Revised Australian Code likely to ensure Responsible Conduct of Research? Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/cracking-code-revised-australian-code-likely-ensure-responsible-conduct-research

‘Except as required by law’: Australian researchers’ legal rights and obligations regarding participant confidentiality

 

Anna Olsen, Research School of Population Health, ANU, and Julie Mooney-Somers, Centre for Values, Ethics and the Law in Medicine, University of Sydney
*Neither of us is a lawyer and, as such, our interpretations are those of social scientists and HREC members. Interested lawyers and legal scholars are encouraged to contribute!

Researchers’ promises of confidentiality are often easily and genuinely made. However, our experience in research ethics review (Julie through an NGO-run ethics review committee; Anna through formally constituted university and hospital human research ethics committees), in qualitative research and in teaching qualitative research ethics has led us to think about the limits of these promises.

Australian researchers generally rely on the National Statement (National Health and Medical Research Council, 2015) and Human Research Ethics Committees (HRECs) for guidance around ethical and legal conduct in research. For example, Chapter 4.6 in the National Statement notes that researchers may discover illegal activity and guides researchers and HRECs to consider what researchers might be obliged to disclose in a legal situation and how to best protect (and inform) participants of this threat to confidentiality.

The National Statement is currently under revision (National Health and Medical Research Council, 2016), and the draft released for public consultation in late 2016 contains a proposal to include additional information on “Disclosure to third parties of findings or results” in Section 3 of the National Statement. Here the NHMRC explicitly states that: “There can be situations where researchers have a legal, contractual or professional obligation to divulge findings or results to third parties”. That is, researchers should concern themselves not only with the legal implications of revealing potential illegal activity, but with any instance in which they may be asked to break participant confidentiality.

The recent review of the National Statement extends the NHMRC recommendations around potential data disclosure in a number of ways: it makes much more explicit that researchers (as opposed to HRECs or institutions) are responsible for understanding the risks to patient confidentiality: “researchers should be aware of situations where a court, law enforcement agency or regulator may seek to compel the release of findings or results”. Researchers are expected to anticipate legal risks to participant confidentiality by: identifying “(a) whether, to whom and under what circumstances the findings or results will be disclosed; (b) whether potential participants will be forewarned that there may be such a disclosure; (c) the risks associated with such a disclosure and how they will be managed; and (d) the rationale for communicating and/or withholding the findings or results and the relative benefits and/or risks to participants of disclosure/non-disclosure”. And, researchers should advise participants on legal risks to confidentiality and how they will be handled: “(a) have a strategy in place to address this possibility; (b) advise participants of the potential for this to occur; and (c) advise participants as to how the situation will be managed”.

For many researchers in health, legal risks are a vague reality and legal intervention a remote threat. They may feel confident that their research does not and will not uncover illegal activity, or that their data would simply be irrelevant to a legal case. Or they may feel confident that, by following guidelines, they have taken sufficient steps to protect their participants’ confidentiality (researchers working on illicit drug use, for example).

Many Australian HRECs give effect to the NHMRC guidelines on the legal risks of disclosure to third parties by requiring that researchers inform participants that any data collected during research will be kept confidential, “except as required by law”. In keeping with the ethical concept of informed consent, participants are thereby warned that researchers are not able to offer unconditional confidentiality. It has become clear to us that the intention of this phrase, to flag the legal limits of confidentiality, is not well understood by researchers (Olsen & Mooney-Somers, 2014).

The National Statement details some aspects of human research that are subject to specific statutory regulation, but stresses that compliance with legal obligations is not within its scope: “It is the responsibility of institutions and researchers to be aware of both general and specific legal requirements, wherever relevant”. Moreover, the document directs that it is not the role of a HREC to provide legal advice. It is relatively rare for Australian HRECs to provide explicit guidance on researchers’ legal obligations, including how these differ across jurisdictions, what protective strategies researchers could employ to better protect participant confidentiality, or how best to inform participants about the risks of legal action (some useful HREC-produced resources are Alfred Hospital Ethics Committee, 2010; QUT Office of Research Ethics and Integrity, 2016). Criminology scholars have (unsurprisingly) considered these issues in their own field (Chalmers & Israel, 2005; Israel, 2004; Israel & Gelsthorpe, 2017; Palys & Lowman, 2014).

We believe there are real risks to participants, researchers and research institutions.

Recent international cases in which research dealing with illegal activity became subject to legal action include The Belfast Project/The Boston Tapes (BBC News, 2014; Emmerich, 2016; Israel, 2014) and Bradley Garrett’s ethnographic work with urban explorers (Fish, 2014; Times Higher Education, 2014; see also Israel & Gelsthorpe, 2017). On the whole, legal action was foreseeable in these cases, as they involved illicit activities and the legal action was driven by law enforcement interest. In some instances, researchers took extensive steps to protect participant confidentiality. In others, the promise of absolute confidentiality seems a little naïve (and, in our opinion, perhaps negligent).

Perhaps of more concern are cases in which legal action was instigated by interested others, not law enforcement. Of particular interest to us are recent cases of tobacco companies using Freedom of Information laws in Australia to obtain research data from Cancer Council Victoria on young people’s attitudes to and use of tobacco, and an earlier attempt to seek data on adults from Cancer Council NSW (McKenzie & Baker, 2015; Schetzer & Medew, 2015). As these cases do not involve illegal activity, it is much less likely that researchers could have anticipated the specific legal actions that undermined participant confidentiality. The tobacco industry has taken similar actions in other countries (Hastings, 2015; McMurtrie, 2002).

Our point here is that the promise of confidentiality should never be casually made. Researchers have an ethical obligation to think through what “except as required by law” may mean for each particular research project. Although it has been argued elsewhere that as professionals, researchers should be provided the same participant confidentiality rights as doctors and lawyers (Emmerich, 2016), the current state of affairs is that research data is not (necessarily) safe from legal, contractual or professional obligation to divulge findings or results to third parties.

References:

Alfred Hospital Ethics Committee. (2010, Updated September 2016). Alfred Hospital ethics committee guidelines: Research that potentially involves legal risks for participants and researchers. Retrieved from https://www.alfredhealth.org.au/contents/resources/research/Research-involving-legal-risks.pdf

BBC News. (1 May 2014). What are the Boston tapes? Retrieved from http://www.bbc.com/news/uk-northern-ireland-27238797

Chalmers, R., & Israel, M. (2005). Caring for Data: Law, Professional Codes and the Negotiation of Confidentiality in Australian Criminological Research. Retrieved from http://crg.aic.gov.au/reports/200304-09.pdf

Emmerich, N. (9 December 2016). Why researchers should get the same client confidentiality as doctors. Retrieved from https://theconversation.com/why-researchers-should-get-the-same-client-confidentiality-as-doctors-69839

Fish, A. (23 May 2014). Urban geographer’s brush with the law risks sending cold chill through social science. Retrieved from https://theconversation.com/urban-geographers-brush-with-the-law-risks-sending-cold-chill-through-social-science-25961

Hastings, G. (31 August 2015). We got an FOI request from Big Tobacco – here’s how it went. Retrieved from https://theconversation.com/we-got-an-foi-request-from-big-tobacco-heres-how-it-went-46457

Israel, M. (2004). Strictly confidential? Integrity and the disclosure of criminological and socio-legal research. British Journal of Criminology, 44(5), 715-740.

Israel, M. (6 May 2014). Gerry Adams arrest: when is it right for academics to hand over information to the courts? Retrieved from https://theconversation.com/gerry-adams-arrest-when-is-it-right-for-academics-to-hand-over-information-to-the-courts-26209

Israel, M., & Gelsthorpe, L. (2017). Ethics in Criminological Research: A Powerful Force, or a Force for the Powerful? In M. Cowburn, L. Gelsthorpe, & A. Wahidin (Eds.), Research Ethics in Criminology and Criminal Justice: Politics, Dilemmas, Issues and Solutions. London: Routledge.

McKenzie, N., & Baker, R. (15 August 2015). Tobacco company wants schools survey for insights into children and teens. The Age. Retrieved from http://www.theage.com.au/national/tobacco-company-wants-schools-survey-for-insights-into-children-and-teens-20150819-gj2vto.html

McMurtrie, B. (8 February 2002). Tobacco companies seek university documents. Chronicle of Higher Education. Retrieved from http://www.chronicle.com/article/Tobacco-Companies-Seek/6959

National Health and Medical Research Council. (2015). National Statement on Ethical Conduct in Human Research (2007). Retrieved from https://www.nhmrc.gov.au/printpdf/book/export/html/51613

National Health and Medical Research Council. (2016). Public consultation on Section 3 (chapters 3.1 & 3.5), Glossary and Revisions to Section 5: National Statement on Ethical Conduct in Human Research (2007). Retrieved from https://consultations.nhmrc.gov.au/files/consultations/drafts/ns-section3-public-consultation.pdf

Olsen, A., & Mooney-Somers, J. (2014). Is there a problem with the status quo? Debating the need for standalone ethical guidelines for research with people who use alcohol and other drugs. Drug Alcohol Rev, 33(6), 637-642. doi:10.1111/dar.12140

Palys, T., & Lowman, J. (2014). Protecting research confidentiality: What happens when law and ethics collide. Toronto: Lorimer.

QUT Office of Research Ethics and Integrity. (10 November 2016). Participants and illegal activities. Retrieved from http://www.orei.qut.edu.au/human/guidance/illegal.jsp

Schetzer, A., & Medew, J. (20 August 2015). Cancer Council spends thousands fighting big tobacco over children’s survey data. The Sydney Morning Herald. Retrieved from http://www.smh.com.au/national/cancer-council-spends-thousands-fighting-big-tobacco-over-childrens-survey-data-20150820-gj3nh7.html

Times Higher Education. (5 June 2014). Place-hacker Bradley Garrett: research at the edge of the law. Retrieved from https://www.timeshighereducation.com/features/place-hacker-bradley-garrett-research-at-the-edge-of-the-law/2013717.article

Contributors

Anna Olsen is a Senior Lecturer at the Research School of Population Health, Australian National University. She leads a number of qualitative and mixed methods public health research projects, teaches qualitative research methods and supervises post-graduate students. Dr Olsen is an experienced member of formally constituted university and hospital human research ethics committees. https://researchers.anu.edu.au/researchers/olsen-phd-am

Julie Mooney-Somers is a Senior Lecturer in Qualitative Research in the Centre for Values, Ethics and the Law in Medicine, University of Sydney. She is the director of the Masters of Qualitative Health Research at the University of Sydney. An experienced qualitative researcher, teacher and supervisor, she has taught qualitative research ethics and sat on a NGO-run ethics review committee for six years. http://sydney.edu.au/medicine/people/academics/profiles/julie.mooneysomers.php and http://www.juliemooneysomers.com

This post may be cited as:
Olsen A, and Mooney-Somers J. (2017, 24 February) ‘Except as required by law’: Australian researchers’ legal rights and obligations regarding participant confidentiality. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/except-required-law-australian-researchers-legal-rights-obligations-regarding-participant-confidentiality

Ethical use of visual social media content in research publications

 

At a research ethics workshop at the 2015 CSCW conference (Fiesler et al., 2015), researchers in our community respectfully disagreed about using public social media data for research without the consent of those who had posted the material. Some argued that researchers had no obligation to gain consent from each person whose data appeared in a public social media dataset. Others contended that, instead, people should have to explicitly opt in to having their data collected for research purposes. The issue of consent for social media data remains an ongoing debate among researchers. In this blog post, we tackle a much smaller piece of this puzzle, focusing on the research ethics but not the legal aspects of this issue: how should researchers approach consent when including screenshots of user-generated social media posts in research papers? Because analysis of visual social media content is a growing research area, it is important to identify research ethics guidelines.

We first discuss a few approaches to using user-generated social media images ethically in research papers. In a 2016 paper that we co-authored, we used screenshots from Instagram, Tumblr, and Twitter to exemplify our characterizations of eating disorder presentation online (Pater, Haimson, Andalibi, & Mynatt, 2016). Though these images were posted publicly, we felt uncomfortable using them in our research paper without consent from the posters. We used an opt-out strategy, in which we included content in the paper as long as people did not explicitly opt out. We contacted 17 people using the messaging systems on the social media site where the content appeared, gave them a brief description of the research project, and explained that they could opt out of their post being presented in the paper by responding to the message. We sent these messages in May 2015, and intended to remove people’s images from the paper if they responded before the paper’s final submission for publication five months later in October 2015. Of the 17 people we contacted, three gave explicit permission to use their images in the paper, and the remaining 14 did not respond. Though this was sensitive content due to the eating disorder context, it did not include any identifiable pictures (e.g. a poster’s face) or usernames. While we were not entirely comfortable using content from the 14 people who did not give explicit permission, this seemed to be in line with ethical research practices within our research community (e.g. Chancellor, Lin, Goodman, Zerwas, & De Choudhury, 2016, who did not receive users’ consent to use images but did blur any identifiable features). We ultimately decided that including the images did more good than harm, considering that our paper contributed an understanding of online self-presentation for a marginalized population, which could have important clinical and technological implications.
Another paper (Andalibi, Ozturk, & Forte, 2017) took a different approach to publishing user-generated visual content. Because the authors had no way of contacting posters, they instead created a few example posts themselves, which included features similar but not identical to the images in the dataset, to communicate the type of images they referenced in the paper. This is similar to what Markham (2012) calls “fabrication as ethical practice.”

This opt-out approach is only ethical in certain cases. For instance, it is not in line with the Australian National Statement on Ethical Conduct in Human Research (National Health and Medical Research Council, 2012), which we assume was not written with social media researchers as its primary audience. NHMRC’s Chapter 2.3 states that an opt-out approach is only ethical “if participants receive and read the information provided.” In a social media context, people may not necessarily receive and read information messaged to them. Additionally, researchers and ethics committees may not agree on whether or not these people are “participants” or whether such a study constitutes human subjects research. When using non-identifiable images, as we did in our study described above, and when the study’s benefit outweighs potential harm done to those who posted the social media content, we argue that an opt-out approach is appropriate. However, an opt-out approach becomes unethical when sensitive, personally-identifiable images are included in a research paper, as we discuss next.

While consent for the use of social media content in research papers remains a thorny ongoing discussion, in certain instances we believe researchers’ decisions are more clear-cut. If social media content is identifiable – that is, if the poster’s face and/or name appears in the post – researchers should either get explicit consent from that person, de-identify the image (such as by blurring the photo and removing the name), or use ethical fabrication (Markham, 2012). In particular, we strongly argue that in sensitive contexts, such as stigmatized identities or health issues, a person’s face and name should not be used without permission. As an example, let’s say that a woman posts a picture of herself using the hashtag #IHadAnAbortion in a public Twitter post. A researcher may argue that this photo is publicly available and thus is also available to copy and paste into a research paper. However, this ignores the post’s contextual integrity (Nissenbaum, 2009): by taking the post out of its intended context (a particular hashtag on Twitter), the researcher fundamentally changes the presentation and the meaning of the post. Additionally, on Twitter, the poster has the agency to delete[1] the post at her discretion, a freedom she loses when it becomes forever embedded in a research paper and all the digital and physically distributed copies of that paper. Thus, we argue that when including identifiable social media data in papers, researchers should be obligated to receive explicit permission from the person who posted that content.

[1] Though all tweets are archived by the Library of Congress and thus not fully deletable, they are not readily accessible by the public, and even by most researchers. Furthermore, Twitter’s Terms of Service require those who collect data to periodically check for and remove deleted tweets from their datasets, though it is not clear whether this applies to the Library of Congress (Twitter, n.d.).

References:

Andalibi, N., Ozturk, P., & Forte, A. (2017). Sensitive Self-disclosures, Responses, and Social Support on Instagram: The Case of #Depression. In Proceedings of the 20th ACM Conference on Computer-Supported Cooperative Work & Social Computing. New York, NY, USA: ACM. http://dx.doi.org/10.1145/2998181.2998243

Chancellor, S., Lin, Z., Goodman, E. L., Zerwas, S., & De Choudhury, M. (2016). Quantifying and Predicting Mental Illness Severity in Online Pro-Eating Disorder Communities. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 1171–1184). New York, NY, USA: ACM. https://doi.org/10.1145/2818048.2819973

Fiesler, C., Young, A., Peyton, T., Bruckman, A. S., Gray, M., Hancock, J., & Lutters, W. (2015). Ethics for Studying Online Sociotechnical Systems in a Big Data World. In Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing (pp. 289–292). New York, NY, USA: ACM. https://doi.org/10.1145/2685553.2685558

Markham, A. (2012). Fabrication as Ethical Practice. Information, Communication & Society, 15(3), 334–353. https://doi.org/10.1080/1369118X.2011.641993

National Health and Medical Research Council. (2012, February 10). Chapter 2.3: Qualifying or waiving conditions for consent. Retrieved December 13, 2016, from https://www.nhmrc.gov.au/book/national-statement-ethical-conduct-human-research-2007-updated-december-2013/chapter-2-3-qualif

Nissenbaum, H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press.

Pater, J. A., Haimson, O. L., Andalibi, N., & Mynatt, E. D. (2016). “Hunger Hurts but Starving Works”: Characterizing the Presentation of Eating Disorders Online. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 1185–1200). New York, NY, USA: ACM. https://doi.org/10.1145/2818048.2820030

Twitter. (n.d.). Developer Agreement & Policy — Twitter Developers. Retrieved December 13, 2016, from https://dev.twitter.com/overview/terms/agreement-and-policy

The contributors:
Oliver L. Haimson (University of California, Irvine)
Nazanin Andalibi (Drexel University)
Jessica Pater (Georgia Institute of Technology)

This post may be cited as:
Haimson O, Andalibi N and Pater J. (2016, 20 December) Ethical use of visual social media content in research publications. Research Ethics Monthly. Retrieved from:
https://ahrecs.com/uncategorized/ethical-use-visual-social-media-content-research-publications

Comparing research integrity responses in Australia and The Netherlands

 

Last year, I was invited by Tracey Bretag to contribute a chapter to the Handbook of Academic Integrity. The invitation was a little unusual – find another author and work with him or her to compare how research integrity has been dealt with in your two jurisdictions. It could have gone horribly wrong. It didn’t, for two reasons. First, I was delighted to be partnered with Pieter Drenth, whose career happens to include a period as Vice-Chancellor of VU University Amsterdam and President of both the Royal Netherlands Academy of Arts and Sciences and ALLEA (All European Academies). Second, the exercise of comparing research integrity strategies in Australia and The Netherlands actually proved to be highly productive, helped me understand some of the strengths and weaknesses of the Australian response, and was a timely reminder of the value of assessing critically what might be happening in research integrity beyond the borders of our own countries.

The chapter has just been published (Israel and Drenth, 2016). In it, we argue…

In Australia and the Netherlands, research institutions and their funders, as well as academics, state integrity agencies, judges, governments, and journalists, have contributed to the development of rules and procedures that might help prevent, investigate, and respond to research fraud and misconduct. Both countries have experienced scandals and have ended up with codes, investigatory committees, and national research integrity committees.

National policy has created a series of expectations for research institutions. However, in both countries, the primary responsibility for research integrity remains with the institutions under whose auspices the research is carried out, as well as with the researchers themselves. Research institutions have to decide how to respond to misconduct, albeit in ways that are open to scrutiny by national advisory committees, the media, courts, and state accountability mechanisms. As a result, many institutions have amended and sharpened their own codes and regulations; refined their mechanisms for advising staff, reporting and investigating suspected misconduct, and responding to findings of misconduct; improved their protection rules for whistleblowers; regulated data storing and archiving; and sought to foster greater transparency in both their research and research integrity procedures. However, while researchers have been encouraged to embed awareness and acknowledgment of these principles through teaching, supervision, and mentoring of students and junior staff, less effort has been placed on resourcing good practice, tracing and understanding the causes of misconduct, and on fostering and entrenching a research culture invested with the values of professional responsibility and integrity.

Other chapters in the research integrity part of the Handbook compare combinations of Austria, Canada, the United States, Argentina, Brazil, China and Korea, and there are plenty of other contributors to the volume from Australia, including Brian Martin (with one chapter on plagiarism and another on the relationship between integrity and fraud – a topical issue, given the recent sentencing of Bruce Murdoch) and David Vaux (scientific misconduct).

Reference
Israel, M. & Drenth, P. J. D. (2016). Research Integrity in Australia and the Netherlands. In Bretag, T. (Ed.), Handbook of Academic Integrity. Springer. ISBN 978-981-287-097-1. http://www.springer.com/gp/book/9789812870971

Contributor
Mark Israel is professor of law and criminology at The University of Western Australia.

This post may be cited as: Israel M. (2016, 17 May) Comparing research integrity responses in Australia and The Netherlands. Research Ethics Monthly. Retrieved from:
https://ahrecs.com/research-integrity/comparing-research-integrity-responses-australia-netherlands
