ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


It’s the hand you’re dealt: Copyright card games and publishing board games are in!

Posted by Admin in Research Integrity on December 21, 2019
 

Nerida Quatermass | University Copyright Officer | Project Manager, Creative Commons Australia at Queensland University of Technology

As a university copyright officer, I provide copyright information for research and scholarly communication – from ethics applications to publication.

What’s up, Doc?

Copyright questions are often a manifestation of a larger issue than copyright alone. For example, a question about mining or using Twitter posts involves third-party copyright, but it is also a matter of contract – what use of content is allowed under the platform’s terms. Alternatively, the question might be about copyright, but one where the law doesn’t provide the answer – does the scope of the fair dealing for research exception extend to publication? These types of enquiries illustrate that researchers need to understand copyright and a range of related issues relevant to research and communication.

Myth-busting

Couple these uncertainties with the fact that copyright laws are not harmonised between jurisdictions in a global research and communication community, and there are sure to be some persistent copyright myths to debunk in order to understand what is allowed. For example, the concept of “fair use” of copyright is well known globally, and researchers in Australia often ask if the use they want to make of third-party copyright is a “fair use”. They are not aware that they cannot rely on it in Australia, and are generally unaware of the “fair dealing” provisions that are available to them. Misinformation combined with limited confident knowledge about re-use rights leaves researchers confused and anxious about copyright matters.

Back to basics

The savvy 21st-century researcher needs some basic copyright knowledge to feel confident managing their own copyright, their use of third-party copyright, and related publication matters. Researchers have always been required by traditional publishers to manage copyright, but today funder and institutional requirements for Open Access demand a level of knowledge about open licensing and the effect of a Creative Commons licence on communication and reuse.

Out with the old

Copyright is a pretty dry topic. At Queensland University of Technology, the Research Support Team I am a member of offers a wide range of copyright guidance, including self-help resources, workshops and direct enquiry. When we “teach” in traditional workshops, I am not confident that transferable learning occurs in a way that will enable future decision-making. In part, I put this down to a lack of engagement in traditionally delivered workshops and seminars.

Making a game of it

Game play has benefits for adult learning, and this is a direction copyright education has taken. The UK Copyright Literacy organisation’s mantra is “decoding copyright and bringing you enlightenment”. Jane Secker and Chris Morrison (2016) have led the way by creating games which are played in workshops. They have found that the interactivity of a game situation not only engages learners in training but is also a drawcard to attend. Chris and Jane have created two games: Copyright the Card Game, which teaches the basics of copyright law and its application; and The Publishing Trap, which facilitates informed decision-making across the research lifecycle, including IP and copyright. Inspired by this, Tohatoha – Aotearoa New Zealand’s peak open advocacy body – has released Creative Commons Release ‘Em Poker, a poker-style card game about Creative Commons licensing. This game is correct for all jurisdictions because CC licences are global.

Back to the thorn that is jurisdictional copyright: this year I worked with the Australian Libraries Copyright Committee and a number of librarians to localise Copyright the Card Game to Australian copyright law. The resulting Copyright the Card Game: Australian Edition is correct for Australian law, and it has an Australian look and feel.

The proof is in the pudding

This year, Australian librarians and copyright officers have played Copyright the Card Game: Australian Edition in workshops, professional development programs and at conferences. The feedback has been very positive. The interactive, scenario-based play makes a positive contribution to learning and has made the copyright workshop a much more enjoyable prospect for teachers and learners.

If you are interested in playing, ask your librarian or copyright officer if they can organise it. Alternatively, all the resources including the card deck and workshop presentation are available online.

A beautiful deck of Creative Commons Release ‘Em Poker cards can be purchased online. Copyright the Card Game and The Publishing Trap resources can be printed from the websites below.

Copyright the Card Game: Australian Edition

Creative Commons Release ‘Em Poker

The Publishing Trap

Reference

Secker, Jane and Morrison, Chris (2016) Copyright education and training. In: Copyright and E-learning: a Guide for Practitioners. Facet Publishing, London, UK, pp. 211-238. http://eprints.lse.ac.uk/67926/1/Secker_Copyright%20education_2016.pdf

This post may be cited as:

Quatermass, N. (21 December 2019) It’s the hand you’re dealt: Copyright card games and publishing board games are in! Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/its-the-hand-youre-dealt-copyright-card-games-and-publishing-board-games-are-in

Inclusion of Culturally and Linguistically Diverse populations in Clinical Trials

 

Nik Zeps
AHRECS Consultant

Clinical trials have enormous value to society, as they provide the most robust means of working out whether particular treatments used to improve the health of our population actually work. Governments have a stated objective to increase participation in clinical trials, based upon a series of assumptions that extend beyond their utility as a means to derive the highest level of reliable evidence about the efficacy and safety of interventions. One of these is that the people who are included derive a tangible benefit from doing so. Whilst this may not be true in all cases (after all, up to 50% of participants may, by definition, receive an inferior treatment), there is the potential for people to derive individual benefit, and it is often stated that those involved in a trial receive a higher standard of care than those not included. Certainly, the additional testing and closer scrutiny of people on a trial may equate in some instances to better care, but this should not be seen as a major driver, as it could be argued that equitable care should be available as a universal right. A less discussed benefit is the connectedness and satisfaction that people may derive from making a tangible contribution to society through participation in clinical research. Furthermore, there may be indeterminate peer-group benefits even if an individual does not benefit.

In an Australian study, Smith et al. (1) found that CALD people whose preferred language was not English (PLNE) had the lowest participation rates in clinical trials. Whilst CALD people whose preferred language was English (PLE) had greater levels of enrolment than the PLNE group, they were still underrepresented relative to their share of the population. This has been described across the world and is identified as a pressing concern (2). Understanding why this is the case is important for a number of reasons. In multiculturally diverse countries like Australia, testing interventions where a significant proportion of the population is not included could produce evidence that is not applicable to those people. This spans biological differences, which may be relevant to drug efficacy or toxicity, through to interventions such as screening that may fail to be useful in those populations. Where there is evidence that participation in a clinical trial may present specific advantages, there is also the issue of injustice through exclusion of a particular group or groups of persons. Certainly, from an implementation perspective, not including a diverse group of participants and analysing for cultural and behavioural acceptability may mean that even an intervention with merit fails to be taken up.

The reasons for non-inclusion are likely more complex than language barriers alone, although having protocols for clinical trials that specifically exclude people who do not have higher levels of proficiency in English does not help. It would seem that the language barrier could be solved by providing greater resources for translation services, particularly in areas with a clear need for them. Certainly, multi-national trials already have PICFs in multiple languages, and these could be readily deployed through innovative technologies, including eConsent processes.[1] Funders of clinical trials could make such inclusivity a requirement and back it up with specific funding in any grants they award. Legal means to enforce this, whilst possible, are unlikely to drive systemic change and could have the unintended consequence of making it harder to do any trials at all in an environment already subject to extreme financial pressures.

However, a major reason for low levels of participation in clinical trials may be equity of access to clinical services in the first place. It is hard to recruit people from the general population into clinical trials, but even harder if specific members of the population don’t come to the health service at all. There is relatively little research on this topic, and it would seem logical to pursue it as a priority, in parallel with examining why people fail to participate in clinical trials due to language barriers. Perhaps clinical trials are simply the canary alerting us to broader inequities that need greater research and investment. Research into solutions to these inequities is accordingly a priority and may improve clinical trial participation rates as a consequence.

References

  1. Smith A, Agar M, Delaney G, Descallar J, Dobell-Brown K, Grand M, et al. Lower trial participation by culturally and linguistically diverse (CALD) cancer patients is largely due to language barriers. Asia Pac J Clin Oncol. 2018;14(1):52-60.
  2. Clark LT, Watkins L, Pina IL, Elmer M, Akinboboye O, Gorham M, et al. Increasing Diversity in Clinical Trials: Overcoming Critical Barriers. Curr Probl Cardiol. 2019;44(5):148-72.

Nik Zeps participated in the CCV forum at the COSA ASM. A full report of the workshop and research by the CCV and McCabe Centre is forthcoming.

[1] https://ctiq.com.au/wp-content/uploads/eConsent-in-Clinical-Trials-compressed.pdf

This post may be cited as:

Zeps, N. (4 December 2019) Inclusion of Culturally and Linguistically Diverse populations in Clinical Trials. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/inclusion-of-culturally-and-linguistically-diverse-populations-in-clinical-trials

The research use of online data/web 2.0 comments

 

Does it require research ethics review and specified consent?

Dr Gary Allen
AHRECS Senior Consultant

The internet is a rich source of information for researchers. On Web 2.0 we see extensive commentary on numerous life matters, which may be of interest to researchers in a wide range of (sub)disciplines. Research interest in these matters frequently prompts the following questions: Can I use that in my project? Hasn’t that already been published? Is research ethics review required? Is it necessary to obtain express consent for the research use?

It’s important to recognise that these questions aren’t posed in isolation. Cases like the OkCupid data scraping scandal, the Ashley Madison hack, Emotional Contagion, Cambridge Analytica and others provide a disturbing context. At a time when use of the internet and social media is startlingly high (Nielsen 2019; Australian Bureau of Statistics 2018; commentaries such as the WebAlive blog 2019), there is also significant distrust of the platforms people are using. Consequently, there are good reasons for researchers and research ethics reviewers to be cautious about the use of existing material for research, even if the terms and conditions of a site/platform specifically discuss research.

Like many ethics questions, there isn’t a single simple answer that is correct all the time. The use of some kinds of data for research may not meet the National Statement’s definition of human research. Use of other kinds of data may meet that definition but be exempt from review, and so not require explicit consent. Uses of data that involve no more than low risk can be reviewed outside an HREC meeting, while others will have to be considered at a full HREC meeting.

AHRECS proposes a three-part test that can be applied to individual projects to determine whether a proposed use of internet data is human research and needs ethics review; it will also guide whether explicit, project-specific consent is required. If this test is formally adopted by an institution and by its research ethics committees, it provides a transparent, consistent and predictable way to judge these matters.

You can find a Word copy of the questions, as well as PNG and PDF copies of the flow diagram, in our subscribers’ area.

For institutions
https://ahrecs.vip/flow…
$350/year

For individuals
https://www.patreon.com/posts/flow…
USD10/month

 

For any questions email enquiry@ahrecs.com

Part One of this test is whether the content of a site or platform is publicly available. One component of this test is whether the researcher will be using scraping, spoofing or hacking of the site/platform to obtain information.

Part Two of the test relates to whether individuals have consented, whether they will be reasonably identifiable from the data and its proposed research use, and whether there are risks to those individuals. A component of this test is exploring whether an exemption from the consent requirement is available (i.e. as provided for by paragraphs 2.3.9–12 of the National Statement) and lawful under any privacy regulation that applies.

Part Three of the test relates to how the proposed project relates to the national human research ethics guidelines – the National Statement – and whether there are any matters that must be considered by a human research ethics committee.  For example, Section 3 of the National Statement (2007 updated 2018) discusses some methodological matters and Section 4 some potential participant issues that must be considered by an HREC.

Individually, any one of these parts could determine that review and consent are required; all three parts of the test must be satisfied before a project can be exempted from review.
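The combination logic of the three parts can be sketched as a small decision function. This is a hypothetical illustration only: the parameter names and the yes/no simplification of each part are assumptions made for clarity, not AHRECS’s actual question set (which is available in the subscribers’ area).

```python
# Minimal sketch of the three-part test's combination logic.
# The parameter names and boolean simplification are illustrative
# assumptions, not AHRECS's official question set.

def requires_ethics_review(part_one_public: bool,
                           part_two_consent_low_risk: bool,
                           part_three_no_ns_triggers: bool) -> bool:
    """Return True when ethics review (and project-specific consent)
    is required.

    part_one_public: the content is publicly available and was not
        obtained by scraping, spoofing or hacking the site/platform.
    part_two_consent_low_risk: individuals have consented (or a lawful
        exemption applies), are not reasonably identifiable, and face
        no more than low risk.
    part_three_no_ns_triggers: no National Statement Section 3 or 4
        matters require consideration by an HREC.

    A project is exempt only when ALL three parts are satisfied;
    failing any single part means review/consent is required.
    """
    return not (part_one_public
                and part_two_consent_low_risk
                and part_three_no_ns_triggers)
```

For example, a project relying on scraped data would fail Part One and so require review regardless of how the other two parts are answered.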

Even if the tests indicate review/consent is required, that doesn’t mean the research is ethically problematic, just that the project requires more careful consideration.

The implication of this is that not all research based upon online comments or social media posts can be exempted from review but, conversely, not all such work must be ethically reviewed.  The approach that should be taken depends upon project-specific design matters.  A strong and justifiable institutional process will have nuanced criteria on these matters.  Failing to establish transparent and predictable policies would be a serious lapse in an important area of research.

Booklet 37 of the Griffith University Research Ethics Manual now incorporates this three-part test.

In the subscribers’ area you will find a suggested question set for the three-part test, as well as a graphic overview of the work flow for the questions.

It is recommended institutions adopt their own version of the test, including policy positions with regard to the use of hacked or scraped data, or the research use of material in a manner at odds with a site/platform’s rules.

References

The New Daily (2018) Australian agency to probe Facebook after shocking revelation. Accessed 16/11/19 from https://thenewdaily.com.au/news/world/2018/04/05/facebook-data-leak-australia/

Australian Bureau of Statistics (2018) 8153.0 – Internet Activity, Australia, June 2018. Retrieved from https://www.abs.gov.au/ausstats/abs@.nsf/mf/8153.0/ (accessed 27 September 2019)

Chambers, C. (2014 01 July) Facebook fiasco: was Cornell’s study of ‘emotional contagion’ an ethics breach? The Guardian. Accessed 16/11/19 from http://www.theguardian.com/science/head-quarters/2014/jul/01/facebook-cornell-study-emotional-contagion-ethics-breach

Griffith University (Updated 2019) Griffith University Research Ethics Manual (GUREM). Accessed 16/11/19 from https://www.griffith.edu.au/research/research-services/research-ethics-integrity/human/gurem

McCook, A. (2016 16 May) Publicly available data on thousands of OKCupid users pulled over copyright claim.  Retraction Watch. Accessed 16/11/19 from http://retractionwatch.com/2016/05/16/publicly-available-data-on-thousands-of-okcupid-users-pulled-over-copyright-claim/

Nielsen (2019, 26 July) Total Consumer Report 2019: Navigating the trust economy in CPG. Retrieved from https://www.nielsen.com/us/en/insights/report/2019/total-consumer-report-2019/ (accessed 27 September 2019)

NHMRC (2007 updated 2018) National Statement on Ethical Conduct in Human Research. Accessed 17/11/19 from https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2007-updated-2018

Satran, J. (2015 02 September) Ashley Madison Hack Creates Ethical Conundrum For Researchers. Huffington Post. Accessed 16/11/19 from http://www.huffingtonpost.com.au/entry/ashley-madison-hack-creates-ethical-conundrum-for-researchers_55e4ac43e4b0b7a96339dfe9?section=australia&adsSiteOverride=au

WebAlive (2019 24 June) The State of Australia’s Ecommerce in 2019 Retrieved from https://www.webalive.com.au/ecommerce-statistics-australia/ (accessed 27 September 2019).

Recommendations for further reading

Editorial (2018 12 March) Cambridge Analytica controversy must spur researchers to update data ethics. Nature. Accessed 16/11/19 from https://www.nature.com/articles/d41586-018-03856-4?utm_source=briefing-dy&utm_medium=email&utm_campaign=briefing&utm_content=20180329

Neuroskeptic (2018 14 July) The Ethics of Research on Leaked Data: Ashley Madison. Discover. Accessed 16/11/19 from http://blogs.discovermagazine.com/neuroskeptic/2018/07/14/ethics-research-leaked-ashley-madison/#.Xc97NC1L0RU

Newman, L. (2017 3 July) WikiLeaks Just Dumped a Mega-Trove of CIA Hacking Secrets. Wired Magazine. Accessed 16/11/19 from https://www.wired.com/2017/03/wikileaks-cia-hacks-dump/

Weaver, M. (2018 25 April) Cambridge University rejected Facebook study over ‘deceptive’ privacy standards. The Guardian. Accessed 16/11/19 from https://www.theguardian.com/technology/2018/apr/24/cambridge-university-rejected-facebook-study-over-deceptive-privacy-standards

Woodfield, K (ed.) (2017) The Ethics of Online Research. Emerald Publishing. https://doi.org/10.1108/S2398-601820180000002004

Zhang, S. (2016 20 May) Scientists are just as confused about the ethics of big-data research as you. Wired Magazine. Accessed 16/11/19 from http://www.wired.com/2016/05/scientists-just-confused-ethics-big-data-research/

Competing interests

Gary is the principal author of the Griffith University Research Ethics Manual (GUREM) and receives a proportion of license sales.

This post may be cited as:
Allen, G. (23 November 2019) The research use of online data/web 2.0 comments. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/the-research-use-of-online-data-web-2-0-comments

Pondering on whether to submit your research output to a journal?

 

The significance of how we talk and think about the pachyderm elephant mammoth in the room.

Dr Gary Allen
AHRECS Senior Consultant

The names we give things matter. The Bard may have been willing to allow a rose to stand in place for any noun, but he hadn’t encountered unscrupulous publishers.

Thanks to Beall’s List, over the last few years we may have been ready to declare that an unscrupulous journal was predatory. Prior to early 2017, many of us defaulted to Beall’s List to label a journal and its publisher as being naughty or nice.

In general, a predatory journal is one that claims an editorial board, impact factor and quality assurance process it doesn’t actually have, and is far more focussed on fast profits than on a meaningful contribution to scholarly wisdom. Often predatory journals come within suites belonging to a predatory publisher. Other dubious behaviours of these unscrupulous publishers include:

  1. Listing eminent/influential editors who don’t actually have any involvement or association with the publication (and refusing to remove names when challenged).
  2. Styling their website after a reputable publisher or using a very similar journal title in the hopes of tricking the unwary.
  3. Offering to add undeserving co-authors to a publication… for a price.

Chances are our professional development workshops during this time were loaded with tips on how to spot a predatory journal, warnings to be suspicious of unsolicited emails from publishers, and reminders that publishing with a predatory publisher could be a costly mistake (Eve and Priego 2017).

But credible voices started to ask whether we should pay heed to blacklists at all (Neylon 2017), and noted that Beall’s List hadn’t been without its problems (Swauger 2017). The difficulty is that blacklists tend to be conservative and can privilege established ways of doing business. There are quality open access publishers using non-traditional editorial and author-pays models, and ‘traditional’ publishers whose business practices may not be that friendly to good academic practice.

After Beall’s List disappeared, we were all given good reason to reflect upon where not to publish. I was co-author of an earlier post on this topic (Israel and Allen 2017).

Over the last few years, it has become clear that the relationship between questionable publishers and researchers is more complex than a predator/prey dichotomy in which hapless authors are tricked by unscrupulous publishers (submitting a paper because they were fooled by false claims of peer review and editorial processes).

In this context, we saw: commentary pointing out that publishing with predatory publishers is not limited to the global South (Oransky 2017); educational materials produced by the Committee on Publication Ethics; peak funding bodies urging grant recipients to stay away from illegitimate publishers (Lauer 2017); the growth of predatory conferences (Cress 2017); and institutions treating the use of such publications in applications as fraud (Campanile & Golding 2017). I wrote about this shift in an earlier post in the Research Ethics Monthly (Are we missing the true picture? Stop calling a moneybox a fishing hook).

Recently we have been noting how ‘junk science’ disseminated by questionable publishers is hurting research (Gillis 2019), undermining public trust in research (Marcus 2019), underpinning claims by climate change denialists and the anti-vaccine movement based on ‘alternative facts’, and is something selection committees should be aware of (Flaherty 2019). The toxic effects of dodgy publications have been described as citation pollution (Hinchliffe & Clarke 2019; Beach 2019).

AHRECS recommends professional development efforts be updated again. The content discussed above should be retained but added to it should be a call for us all to safeguard the integrity and trustworthiness of science by creating an environment within which the incentive for our colleagues to use dodgy publication outlets is diminished.

In the subscribers’ areas you will find a short PowerPoint template about this topic (which you can modify and use) and an AHRECS-branded version with embedded audio by Professor Mark Israel. To access the subscribers’ area, go to https://www.ahrecs.vip for institutions or https://www.patreon.com/ahrecs for individuals.

References

Allen, G. (26 October 2018) Are we missing the true picture? Stop calling a moneybox a fishing hook. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/are-we-missing-the-true-picture-stop-calling-a-moneybox-a-fishing-hook

Beach, R. (2019 28 October) Citation Contamination: References to Predatory Journals in the Legitimate Scientific Literature. Scholarly Kitchen. Retrieved from: https://scholarlykitchen.sspnet.org/2019/10/28/citation-contamination-references-to-predatory-journals-in-the-legitimate-scientific-literature

Eve, P. M. & Priego E. (2017) Who is Actually Harmed by Predatory Publishers? Journal for a Global Sustainable Information Society. 15(2)
Publisher (Open access): http://www.triple-c.at/index.php/tripleC/article/view/867/1042

Gillis, A. (2019 09 July) The Rise of Junk Science. The Walrus. Retrieved from: https://thewalrus.ca/the-rise-of-junk-science/

Hinchliffe, L. J. & Clarke, M. (2019 25 September) Fighting Citation Pollution — The Challenge of Detecting Fraudulent Journals in Works Cited. Scholarly Kitchen. Retrieved from: https://scholarlykitchen.sspnet.org/2019/09/25/fighting-citation-pollution/

Israel M. & Allen G. (2017 26 July) In a world of hijacked, clone and zombie publishing, where shouldn’t I publish? Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/world-hijacked-clone-zombie-publishing-shouldnt-publish

Lauer, M. (2017) Continuing Steps to Ensuring Credibility of NIH Research: Selecting Journals with Credible Practices. Extramural Nexus. Retrieved from: https://nexus.od.nih.gov/all/2017/11/08/continuing-steps-to-ensuring-credibility-of-nih-research-selecting-journals-with-credible-practices/

Marcus, A (2019 09 January) Oft-quoted paper on spread of fake news turns out to be…fake news. Retraction Watch. Retrieved from: https://retractionwatch.com/2019/01/09/oft-quoted-paper-on-spread-of-fake-news-turns-out-to-befake-news/

Neylon, C. (2017 29 January) Blacklists are technically infeasible, practically unreliable and unethical. Period. LSE Impact Blog. Retrieved from: https://cameronneylon.net/blog/blacklists-are-technically-infeasible-practically-unreliable-and-unethical-period/

Oransky, I. (2017) Predatory journals: Not just a problem in developing world countries, says new Nature paper. Retraction Watch. Retrieved from: http://retractionwatch.com/2017/09/06/predatory-journals-not-just-developing-world-countries-says-new-nature-paper/

Swauger, S. (2017) Open access, power, and privilege. College & Research Libraries News. 78(11)
Publisher (Open Access): http://crln.acrl.org/index.php/crlnews/article/view/16837/18434

This post may be cited as:
Allen, G. (31 October 2019) Pondering on whether to submit your research output to a journal? Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/pondering-on-whether-to-submit-your-research-output-to-a-journal
