The Ethics and Politics of Qualitative Data Sharing

 

Mark Israel (AHRECS and Murdoch University) and Farida Fozdar (The University of Western Australia).

There is considerable momentum behind the argument that public data is a national asset and should be made more easily available for research purposes. In introducing the Data Sharing and Release Legislative Reforms Discussion Paper in September 2019, the Australian Commonwealth Minister for Government Services argued that proposed changes to data use in the public sector would mean that

Australia’s research sector will be able to use public data to improve the development of solutions to public problems and to test which programs are delivering as intended—and which ones are not.

Data reuse is seen as a cost-efficient use of public funds, reducing the burden on participants and communities. And the argument is not restricted to government. Journals, universities and funding agencies are increasingly requiring social scientists to make their data available to other researchers, and even to the public, in the interests of scientific inquiry, accountability, innovation and progress. For example, the Research Councils United Kingdom (RCUK) takes the benefits associated with data sharing for granted:

Publicly-funded research data are a public good, produced in the public interest; Publicly-funded research data should be openly available to the maximum extent possible.

In Australia, both the National Health and Medical Research Council (NHMRC) and the Australian Research Council (ARC) have adopted open access policies that apply to research funded by those councils. While the ARC policy only refers to research outputs and excludes research data and research data outputs, the NHMRC strongly encourages open access to research data.

And yet, several social researchers have argued that data sharing requirements, developed in the context of medical research using quantitative data, may be inappropriate for qualitative research. Their arguments rest on a mix of ethical, practical and legal grounds.

In an article entitled ‘Whose Data Are They Anyway?’, Parry and Mauthner (2004) recognised unique issues associated with archiving qualitative data. The main considerations are around confidentiality (is it possible to anonymise the data by changing details without losing validity?) and informed consent (can participants know and consent to all potential future uses of their data at a single point in time, and, alternatively, what extra burden do repeated requests for consent place on participants?).

There is also the more philosophical issue of the reconfiguration of the relationship between researchers and participants, including moral responsibilities and commitments, potential violations of trust, and the risk of data misrepresentation. There are deeper epistemological issues, including the joint construction of qualitative data and the reflexivity involved in preparing data for secondary analysis. As a result, Mauthner (2016) critiqued ‘regulation creep’ whereby regulators in the United Kingdom have made data sharing a moral responsibility associated with ethical research, when in fact it may be more ethical not to share data.

In addition, there is a growing movement to recognise the rights of some communities to control their own data. Based on the fundamental principle of self-determination, some Indigenous peoples have claimed sovereignty over their own data: ‘The concept of data sovereignty, … is linked with indigenous peoples’ right to maintain, control, protect and develop their cultural heritage, traditional knowledge and traditional cultural expressions, as well as their right to maintain, control, protect and develop their intellectual property over these.’ (Tauli-Corpuz, in Kukutai and Taylor, 2016:xxii). The goal is that the use of such data should enhance self-determination and development.

To be fair to both the Commonwealth Minister and the RCUK, each recognises that data sharing should only occur prudently and safely and acknowledges that the benefits of sharing need to be balanced against rights to privacy (the balance proposed in earlier Australian legislative proposals has already been subjected to academic critique). The challenge is to ensure that our understanding of how these competing claims should be assessed is informed by an understanding of the nature of qualitative as well as quantitative data, of how data might be co-constructed or owned, of the cultural sensitivity that might be required to interpret and present it, and of the damage that might be done as a result of misuse or misrepresentation.

Acknowledgements
This article draws on material drafted for Fozdar and Israel (under review).

References:

Fozdar, F. and Israel, M. (under review) Sociological ethics. In Mackay, D. and Iltis, A. (eds) The Oxford Handbook of Research Ethics. Oxford: Oxford University Press.

Kukutai, T. and Taylor, J. (Eds.) (2016) Indigenous data sovereignty: Toward an agenda (Vol. 38). Canberra: ANU Press.

Mauthner, N.S. (2016) Should data sharing be regulated? In van den Hoonaard, W. and Hamilton, A. (eds) The Ethics Rupture: Exploring alternatives to formal research-ethics review. Toronto: University of Toronto Press. pp.206-229.

Parry, O. and Mauthner, N.S. (2004) Whose data are they anyway? Practical, legal and ethical issues in archiving qualitative research data. Sociology, 38(1), 139-152.

This post may be cited as:
Israel, M. & Fozdar, F. (5 February 2020) The Ethics and Politics of Qualitative Data Sharing. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/the-ethics-and-politics-of-qualitative-data-sharing

The research use of online data/web 2.0 comments

 

Does it require research ethics review and specified consent?

Dr Gary Allen
AHRECS Senior Consultant

The internet is a rich source of information for researchers. On Web 2.0 platforms we see extensive commentary on numerous life matters, which may be of interest to researchers in a wide range of (sub)disciplines. Research interest in these matters frequently prompts the following questions: Can I use that in my project? Hasn’t that already been published? Is research ethics review required? Is it necessary to obtain express consent for the research use?

It’s important to recognise that these questions aren’t posed in isolation. Cases like the OkCupid data scraping scandal, the Ashley Madison hack, the Emotional Contagion study, Cambridge Analytica and others provide a disturbing context. At a time when use of the internet and social media is startlingly high (Nielsen 2019; Australian Bureau of Statistics 2018; commentaries such as the WebAlive blog 2019), there is also significant distrust of the platforms people are using. Consequently, there are good reasons for researchers and research ethics reviewers to be cautious about the use of existing material for research, even if the terms and conditions of a site/platform specifically discuss research.

As with many ethics questions, there isn’t a single simple answer that is correct all the time. The use of some kinds of data for research may not meet the National Statement’s definition of human research. Other uses may meet that definition but be exempt from review and so not require explicit consent. Uses that involve no more than low risk can be reviewed outside an HREC meeting, while others will have to be considered at a full HREC meeting.

AHRECS proposes a three-part test that can be applied to individual projects to determine whether a proposed use of internet data is human research and needs ethics review; the test also guides whether explicit, project-specific consent is required. If the test is formally adopted by an institution and its research ethics committees, it provides a transparent, consistent and predictable way to judge these matters.

You can find a Word copy of the questions, as well as PNG and PDF copies of the flow diagram, in our subscribers’ area.
For institutions: https://ahrecs.vip/flow… ($350/year)

For individuals: https://www.patreon.com/posts/flow… (USD10/month)

For any questions, email enquiry@ahrecs.com

Part One of the test is whether the content of a site or platform is publicly available. One component of this part is whether the researcher will use scraping, spoofing or hacking of the site/platform to obtain the information.

Part Two of the test relates to whether individuals have consented, whether they will be reasonably identifiable from the data and its proposed research use, and whether there are risks to those individuals. A component of this part is exploring whether a waiver of the consent requirement is justified (i.e. as provided for by paragraphs 2.3.9–2.3.12 of the National Statement) and lawful under any privacy regulation that applies.

Part Three of the test relates to how the proposed project sits against the national human research ethics guidelines – the National Statement – and whether there are any matters that must be considered by a human research ethics committee. For example, Section 3 of the National Statement (2007, updated 2018) discusses some methodological matters, and Section 4 some potential participant issues, that must be considered by an HREC.

Individually, any one of these parts could determine that review and consent are required; all three parts must be satisfied before a project can be exempted from review.
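
To illustrate how the three parts combine, here is a minimal sketch of the decision logic in Python. It is not the AHRECS question set (that is available in the subscribers’ area); the field names, and the way each part is condensed into one or two yes/no answers, are assumptions made purely for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch only. The field names and the reduction of each part of
# the test to one or two yes/no answers are assumptions for this example;
# institutions would substitute their own adopted question set and policies.

@dataclass
class ProposedUse:
    # Part One: is the material publicly available, and how would it be obtained?
    content_publicly_available: bool
    uses_scraping_spoofing_or_hacking: bool
    # Part Two: consent, identifiability and risk
    consented_or_waiver_justified: bool        # e.g. NS 2.3.9-2.3.12 and privacy law
    individuals_identifiable_and_at_risk: bool
    # Part Three: National Statement matters an HREC must consider
    raises_ns_section_3_or_4_matters: bool

def needs_review_or_consent(use: ProposedUse) -> bool:
    """True if ethics review (and possibly project-specific consent) is needed;
    False only when all three parts of the test are satisfied."""
    part_one = use.content_publicly_available and not use.uses_scraping_spoofing_or_hacking
    part_two = use.consented_or_waiver_justified and not use.individuals_identifiable_and_at_risk
    part_three = not use.raises_ns_section_3_or_4_matters
    # Any single failing part is enough to require review; exemption needs all three.
    return not (part_one and part_two and part_three)

# Example: public commentary, no scraping, consent waiver justified, no identifiable
# individuals at risk, no Section 3/4 matters -> exempt from review.
print(needs_review_or_consent(ProposedUse(True, False, True, False, False)))  # False
```

Institutions adopting the test would replace these condensed flags with the full question set and their own policy positions.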

Even if the test indicates that review or consent is required, that doesn’t mean the research is ethically problematic, just that the project requires more careful consideration.

The implication of this is that not all research based upon online comments or social media posts can be exempted from review but, conversely, not all such work must be ethically reviewed.  The approach that should be taken depends upon project-specific design matters.  A strong and justifiable institutional process will have nuanced criteria on these matters.  Failing to establish transparent and predictable policies would be a serious lapse in an important area of research.

Booklet 37 of the Griffith University Research Ethics Manual now incorporates this three-part test.

In the subscribers’ area you will find a suggested question set for the three-part test, as well as a graphic overview of the work flow for the questions.

It is recommended that institutions adopt their own version of the test, including policy positions on the use of hacked or scraped data and on the research use of material in a manner at odds with a site/platform’s rules.

References

The New Daily (2018, 5 April) Australian agency to probe Facebook after shocking revelation. Accessed 16/11/19 from https://thenewdaily.com.au/news/world/2018/04/05/facebook-data-leak-australia/

Australian Bureau of Statistics (2018) 8153.0 – Internet Activity, Australia, June 2018. Retrieved from https://www.abs.gov.au/ausstats/abs@.nsf/mf/8153.0/ (accessed 27 September 2019)

Chambers, C. (2014, 1 July) Facebook fiasco: was Cornell’s study of ‘emotional contagion’ an ethics breach? The Guardian. Accessed 16/11/19 from http://www.theguardian.com/science/head-quarters/2014/jul/01/facebook-cornell-study-emotional-contagion-ethics-breach

Griffith University (Updated 2019) Griffith University Research Ethics Manual (GUREM). Accessed 16/11/19 from https://www.griffith.edu.au/research/research-services/research-ethics-integrity/human/gurem

McCook, A. (2016 16 May) Publicly available data on thousands of OKCupid users pulled over copyright claim.  Retraction Watch. Accessed 16/11/19 from http://retractionwatch.com/2016/05/16/publicly-available-data-on-thousands-of-okcupid-users-pulled-over-copyright-claim/

Nielsen (2019, 26 July) TOTAL CONSUMER REPORT 2019: Navigating the trust economy in CPG. Retrieved from https://www.nielsen.com/us/en/insights/report/2019/total-consumer-report-2019/ (accessed 27 September 2019)

NHMRC (2007 updated 2018) National Statement on Ethical Conduct in Human Research. Accessed 17/11/19 from https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2007-updated-2018

Satran, J. (2015 02 September) Ashley Madison Hack Creates Ethical Conundrum For Researchers. Huffington Post. Accessed 16/11/19 from http://www.huffingtonpost.com.au/entry/ashley-madison-hack-creates-ethical-conundrum-for-researchers_55e4ac43e4b0b7a96339dfe9?section=australia&adsSiteOverride=au

WebAlive (2019, 24 June) The State of Australia’s Ecommerce in 2019. Retrieved from https://www.webalive.com.au/ecommerce-statistics-australia/ (accessed 27 September 2019).

Recommendations for further reading

Editorial (2018 12 March) Cambridge Analytica controversy must spur researchers to update data ethics. Nature. Accessed 16/11/19 from https://www.nature.com/articles/d41586-018-03856-4?utm_source=briefing-dy&utm_medium=email&utm_campaign=briefing&utm_content=20180329

Neuroskeptic (2018 14 July) The Ethics of Research on Leaked Data: Ashley Madison. Discover. Accessed 16/11/19 from http://blogs.discovermagazine.com/neuroskeptic/2018/07/14/ethics-research-leaked-ashley-madison/#.Xc97NC1L0RU

Newman, L. (2017, 7 March) WikiLeaks Just Dumped a Mega-Trove of CIA Hacking Secrets. Wired. Accessed 16/11/19 from https://www.wired.com/2017/03/wikileaks-cia-hacks-dump/

Weaver, M. (2018, 25 April) Cambridge University rejected Facebook study over ‘deceptive’ privacy standards. The Guardian. Accessed 16/11/19 from https://www.theguardian.com/technology/2018/apr/24/cambridge-university-rejected-facebook-study-over-deceptive-privacy-standards

Woodfield, K (ed.) (2017) The Ethics of Online Research. Emerald Publishing. https://doi.org/10.1108/S2398-601820180000002004

Zhang, S. (2016, 20 May) Scientists are just as confused about the ethics of big-data research as you. Wired. Accessed 16/11/19 from http://www.wired.com/2016/05/scientists-just-confused-ethics-big-data-research/

Competing interests

Gary is the principal author of the Griffith University Research Ethics Manual (GUREM) and receives a proportion of license sales.

This post may be cited as:
Allen, G. (23 November 2019) The research use of online data/web 2.0 comments. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/the-research-use-of-online-data-web-2-0-comments

The F-word, or how to fight fires in the research literature

 

Professor Jennifer Byrne | University of Sydney Medical School and Children’s Hospital at Westmead

 

At home, I am constantly fighting the F-word. Channelling my mother, I find myself saying things like ‘don’t use that word’, ‘not here’, ‘not in this house’. As you can probably gather, it’s a losing battle.

Research has its own F-words – ‘falsification’, ‘fabrication’, different colours of the overarching F-word, ‘fraud’. Unlike the regular F-word, most researchers assume that there’s not much need to use the research versions. Research fraud is considered comfortably rare, the actions of a few outliers. This is the ‘bad apple’ view of research fraud – that fraudsters are different, and born, not made. These rare individuals produce papers that eventually act as spot fires, damaging their fields, or even burning them to the ground. However, as most researchers are not affected, the research enterprise tends to just shrug its collective shoulders, and carry on.

But, of course, there’s a second explanation for research fraud – the so-called ‘bad barrel’ hypothesis – that research fraud can be provoked by poorly regulated, extreme pressure environments. This is a less comfortable idea, because this implies that regular people might be tempted to cheat if subjected to the right (or wrong) conditions. Such environments could result in more affected papers, about more topics, published in more journals. This would give rise to more fires within the literature, and more scientific casualties. But again, these types of environments are not considered to be common, or widespread.

But what if the pressure to publish becomes more widely and acutely applied? The use of publication quotas has been described in different settings as being associated with an uptick in numbers of questionable publications (Hvistendahl 2013; Djuric 2015; Tian et al. 2016). When publication expectations harden into quotas, more researchers may feel forced to choose between their principles and their (next) positions.

This issue has been recently discussed in the context of China (Hvistendahl 2013; Tian et al. 2016), a population juggernaut with scientific ambitions to match. China’s research output has risen dramatically over recent years, and at the same time, reports of research integrity problems have also filtered into the literature. In biomedicine, these issues again have been linked with publication quotas in both academia and clinical medicine (Tian et al. 2016). A form of contract cheating has been alleged to exist in the form of paper mills, or for-profit organisations that provide research content for publications (Hvistendahl 2013; Liu and Chen 2018). Paper mill services allegedly extend to providing completed manuscripts to which authors or teams can add their names (Hvistendahl 2013; Liu and Chen 2018).

I fell into thinking about paper mills by accident, as a result of comparing five very similar papers that were found to contain serious errors, questioning whether some of the reported experiments could have been performed (Byrne and Labbé 2017). My colleague Dr Cyril Labbé and I are now knee-deep in analysing papers with similar errors (Byrne and Labbé 2017; Labbé et al. 2019), suggesting that a worrying number of papers may have been produced with some kind of undeclared help.

It is said that to catch a thief, you need to learn to think like one. So if I were running a paper mill, and wanted to hide many questionable papers in the biomedical literature, what would I do? The answer would be to publish papers on many low-profile topics, using many authors, across many low-impact journals, over many years.

In terms of available topics, we believe that the paper mills may have struck gold by mining the contents of the human genome (Byrne et al. 2019). Humans carry 40,000 different genes of two main types, the so-called coding and non-coding genes. Most human genes have not been studied in any detail, so they provide many publication opportunities in fields where there are few experts to pay attention.

Human genes can also be linked to cancer, allowing individual genes to be examined in different cancer types, multiplying the number of papers that can be produced for each gene (Byrne and Labbé 2017). Non-coding genes are known to regulate coding genes, so non-coding and coding genes can also be combined, again in different cancer types.
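
A purely illustrative back-of-the-envelope calculation shows the scale of this multiplication; the gene and cancer-type counts below are assumptions chosen for the example, not figures from the cited papers.

```python
# Illustrative only: the counts below are assumptions, not data from Byrne et al.
understudied_genes = 10_000   # assumed pool of little-studied human genes
cancer_types = 20             # assumed number of cancer types used in such papers

# One "gene X in cancer Y" manuscript per combination
single_gene_topics = understudied_genes * cancer_types
print(f"{single_gene_topics:,} possible single-gene-in-cancer topics")    # 200,000

# Pairing a non-coding regulator with a coding target multiplies the space again
noncoding_genes = coding_genes = 5_000   # assumed split of the same pool
paired_topics = noncoding_genes * coding_genes * cancer_types
print(f"{paired_topics:,} possible regulator-target-in-cancer topics")    # 500,000,000
```

Even under far more conservative assumptions, the space of plausible manuscript topics dwarfs the number of content experts available to scrutinise them.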

The resulting repetitive manuscripts can be distributed between many research groups, and then diluted across the many journals that publish papers examining gene function in cancer (Byrne et al. 2019). The lack of content experts for these genes, or poor reviewing standards, may help these manuscripts to pass into the literature (Byrne et al. 2019). And as long as these papers are not detected, and demand continues, such manuscripts can be produced over many years. So rather than having a few isolated fires, we could be witnessing a situation where many parts of the biomedical literature are silently, solidly burning.

When dealing with fires, I have learned a few things from years of mandatory fire training. In the event of a laboratory fire, we are taught to ‘remove’, ‘alert’, ‘contain’, and ‘extinguish’. I believe that these approaches are also needed to fight fires in the research literature.

We can start by ‘alerting’ the research and publishing communities to manuscript and publication features of concern. If manuscripts are produced to a pattern, they should show similarities in terms of formatting, experimental techniques, language and/or figure appearance (Byrne and Labbé 2017). Furthermore, if manuscripts are produced in large numbers, they could appear simplistic, with thin justifications for studying individual genes and almost non-existent links between genes and diseases (Byrne et al. 2019). But most importantly, manuscripts produced en masse will likely contain mistakes, and these may constitute an Achilles heel that enables their detection (Labbé et al. 2019).
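
As a rough illustration of the kind of automated screening such mistakes make possible, the sketch below uses a simple pattern match to pull claimed nucleotide sequence reagents out of manuscript text so that they can be verified. It is a toy example under assumed conventions only; the actual Seek & Blastn tool described by Labbé et al. (2019) is considerably more sophisticated and also checks each extracted sequence against public databases.

```python
import re

# Toy sketch: find runs of nucleotide code (A, C, G, T/U) long enough that they are
# unlikely to occur by chance in ordinary prose. The real Seek & Blastn pipeline
# (Labbé et al. 2019) goes much further, verifying each claimed reagent
# (e.g. primers, shRNA targeting sequences) against sequence databases.
NUCLEOTIDE_RUN = re.compile(r"\b[ACGTU]{15,}\b")

def extract_reagent_sequences(manuscript_text: str) -> list[str]:
    """Return putative nucleotide sequence reagents found in the text."""
    return NUCLEOTIDE_RUN.findall(manuscript_text.upper())

example = ("Knockdown was achieved using the targeting sequence "
           "GCACCUUCAAGCGGAGUAUTT cloned into the shRNA vector.")
print(extract_reagent_sequences(example))   # ['GCACCUUCAAGCGGAGUAUTT']
```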

Acting on reports of unusual shared features and errors will help to ‘contain’ the numbers and influence of these publications. Detailed, effective screening by publishers and journals may detect more problematic manuscripts before they are published. Dedicated funding would encourage active surveillance of the literature by researchers, leading to more reports of publications of concern. Where these concerns are upheld, individual publications can be contained through published expressions of concern, and/or ‘extinguished’ through retraction.

At the same time, we must identify and ‘remove’ the fuels that drive systematic research fraud. Institutions should remove both unrealistic publication requirements, and monetary incentives to publish. Similarly, research communities and funding bodies need to ask whether neglected fields are being targeted for low value, questionable research. Supporting functional studies of under-studied genes could help to remove this particular type of fuel (Byrne et al. 2019).

And while removing, alerting, containing and extinguishing, we should not shy away from thinking about and using any necessary F-words. Thinking that research fraud shouldn’t be discussed will only help this to continue (Byrne 2019).

The alternative could be using the other F-word in ways that I don’t want to think about.

References

Byrne JA (2019). We need to talk about systematic fraud. Nature. 566: 9.

Byrne JA, Grima N, Capes-Davis A, Labbé C (2019). The possibility of systematic research fraud targeting under-studied human genes: causes, consequences and potential solutions. Biomarker Insights. 14: 1-12.

Byrne JA, Labbé C (2017). Striking similarities between publications from China describing single gene knockdown experiments in human cancer cell lines. Scientometrics. 110: 1471-93.

Djuric D (2015). Penetrating the omerta of predatory publishing: The Romanian connection. Sci Eng Ethics. 21: 183–202.

Hvistendahl M (2013). China’s publication bazaar. Science. 342: 1035–1039.

Labbé C, Grima N, Gautier T, Favier B, Byrne JA (2019). Semi-automated fact-checking of nucleotide sequence reagents in biomedical research publications: the Seek & Blastn tool. PLOS ONE. 14: e0213266.

Liu X, Chen X (2018). Journal retractions: some unique features of research misconduct in China. J Scholar Pub. 49: 305–319.

Tian M, Su Y, Ru X (2016). Perish or publish in China: Pressures on young Chinese scholars to publish in internationally indexed journals. Publications. 4: 9.

This post may be cited as:
Byrne, J. (18 July 2019) The F-word, or how to fight fires in the research literature. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/the-f-word-or-how-to-fight-fires-in-the-research-literature

Consumer Co-design for End of Life Care Discharge Project

 

In this issue, we are publishing an account of an end-of-life care project whose design has some features that add to its ethical interest. Many of us are familiar with institutional policies about consumer engagement in human research and have served on project reference groups, but perhaps have less experience with the successful – and ethical – implementation of these. This project may add some valuable understanding of these matters, including:

  • What insights do the design and information groups offer into the practice of research co-design?
  • Do those insights help to clarify the distinction between co-design and participatory action research?
  • Do those groups have advantages in demonstrating the project’s fulfilment of the ethical principles of beneficence, respect or justice?
  • Could those groups have a role in overseeing the ethical conduct of a project?
  • Given the subject of this research project, what sort of projects might make best use of groups such as those in this project?

We have invited the author and the research team to provide some follow-up reflection on issues such as these as the project progresses and is completed.

The End of Life Care Discharge Planning Project is led by Associate Professor Laurie Grealish from Griffith University. This research project partners with consumers at all stages, allowing them to make a significant contribution. As part of the Queensland Health End of Life Care Strategy, Gold Coast Health is developing a process to support discharge for people near the end of life who would like to die at home. A Productivity Commission report in 2017 noted that although over 70% of Australians prefer to die at home, fewer than 10% do. This is attributed to the need for improvement in the transition between hospital and community care.

The outcomes of this study are expected to include: (1) an evidence-based discharge process and infrastructure to enhance the transition from hospital [medical wards] to home for end of life care; (2) end of life care information brochure for patients and their family carers; (3) stakeholder feedback to indicate that the process is feasible and satisfactory; and (4) a health service and non-government organisational partnership network to monitor the discharge process and enhance future integrated models of end of life care. Ethical approval has been granted by the Gold Coast Health Human Research Ethics Committee and Griffith University Human Research Ethics Committee.

For the research design stage, three groups were established: 1) Project reference group, 2) Project design group, and 3) Project information group.

1. Project reference group – The aim of the project reference group is to consider the analysed data and reports from the sub-committees, provide advice on, as well as monitor, implementation strategies. This group is led by Associate Professor Laurie Grealish and has membership from a wide range of stakeholders including hospital clinicians and managers, researchers, community groups, non-government organisations and consumers.

2. Project design group – The purpose of this group is to design an evidence-based discharge process to enable people near the end of life to return home to die if this is their wish. Dr Kristen Ranse from Griffith University is the Chair of this group and the membership of the group includes representatives from Gold Coast Health, consumers, and non-government organisations.

3. Project information group – Led by Dr Joan Carlini from Griffith University, this group provides expert advice about what information people need as they consider dying at home. The group identified early on that there is an overwhelming amount of information available online and in brochures, leading to confusion. Because the group includes stakeholders from health care providers, non-government organisations, community groups and consumers, discussion has been healthy and wide-ranging. The consumers on the team led the way in selecting pertinent information and producing a draft document, which was then further modified by the committee to ensure that the booklet is concise but also a thorough source of information for end-of-life care.

The next stage of the project runs from January to July 2019, with implementation, data collection and analysis, and dissemination of findings.

Contributor
Dr. Joan Carlini, Lecturer, Department of Marketing, Griffith University | Griffith University profile, LinkedIn profile (log in required), Twitter – @joancarlini |

This post may be cited as:
Carlini, J. (18 January 2018) Consumer Co-design for End of Life Care Discharge Project. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/consumer-co-design-for-end-of-life-care-discharge-project
