ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)
Towards a code of conduct for ethical post-disaster research

 

JC Gaillard
School of Environment, The University of Auckland, New Zealand
Unit for Environmental Sciences and Management, North-West University, South Africa
Profile | jc.gaillard@auckland.ac.nz

Lori Peek
Department of Sociology and Natural Hazards Center, University of Colorado Boulder, USA
Profile | Lori.Peek@colorado.edu

We recently called for a code of conduct in disaster research. This call is rooted in our respect for the research process itself and our care for affected people and the researchers who do this work. To be clear, we are calling for a cross-disciplinary conversation to advance a shared set of moral and ethical principles to help guide what we study, who we study, how we conduct studies, and who is involved in the research process itself. We are not arguing for another layer of bureaucratic or regulatory oversight such as that required in some countries by institutional review boards and ethics committees. Our hope is that such a discussion will launch first within focused academic and policy meetings, before being scaled up to the regional and eventually international levels.

Our intent is to prompt further reflection and conversation around the following three possibilities for ensuring that disaster scholarship is relevant, fair, and ethically sound.

First, it is essential that research has a clear purpose that is rooted in present knowledge gaps and emergent context-specific research priorities in the disaster aftermath. The collaborative work that happens before disaster and in the immediate aftermath can help clarify the focus of research studies and ensure that the knowledge generated is locally-relevant and hence more likely to effectively inform response, recovery and future disaster risk reduction efforts.

Second, ensuring that research is filling relevant knowledge gaps requires that local voices be put at the forefront of the research effort. Local voices may include a range of perspectives, including those of local researchers and those who hail from elsewhere but hold deep knowledge of the places and people affected by disasters. They also comprise the voices of the survivors whose ability to deal with the event and contribute to the recovery effort is central to rebuilding damaged physical infrastructure as well as people’s lives and livelihoods. Putting local researchers and survivors in the driving seat does not exclude outside researchers, who may contribute when invited by local colleagues. In many instances, outside scholars have access to a wide range of resources (e.g., equipment, funding, time) that may be unavailable locally in times of collective hardship. Crucial, though, is that local researchers have the opportunity to lead efforts associated with research design, data collection and analysis, and ultimately the sharing of findings.

Third, it is crucial that research agendas and projects launched in the disaster aftermath be ethically coordinated and involve locals and outsiders. This means that local researchers need to be identified quickly after disaster—the National Science Foundation-supported Extreme Events Research and Reconnaissance networks have already jump-started these efforts. There are many other organizations and networks globally that have advanced new methods for identifying researchers and communicating creatively in the disaster aftermath through virtual forums and virtual reconnaissance efforts that allow for a wider range of researchers to connect, communicate, and ultimately collaborate.

Engaging with the three aforementioned areas of possibility is crucial given the rising number of disasters and disaster studies. It is clear that disasters stir the interest of researchers, as evidenced by the growing number of academic publications on the topic. Most of these researchers are driven by a genuine desire to contribute to reducing suffering, but researching disasters can be difficult and there is not a clear ethical playbook for how to proceed.

This becomes especially pressing because researching disasters entails navigating a complex and sensitive environment where survivors may struggle with both the consequences of the event and the task of recovering. Meanwhile, local and outside responders try to support the relief and recovery effort. To fully grasp the complexity of the situation, researchers need to be equipped with an appropriate ethical toolkit that goes beyond the requirements of the research ethics committees of universities and other research institutions. Such a toolkit entails a nuanced understanding of the cultural, social, economic and political context wherein disasters unfold. For scholars who choose to work in new contexts following disasters, this sort of competence is difficult to acquire ad hoc in a short span of time.

With these challenges in mind, it remains a dominant pattern after major disasters that outside researchers converge and lead studies conducted in locations beyond their familiar cultural environment. In fact, disaster studies are often driven by scholars located in North America, Europe, East Asia, and Australasia. A review of publications on disasters over the past four decades shows that there are fewer researchers publishing studies from Africa, South and Southeast Asia, the Pacific, and Latin America, even though these are the regions where disasters claim the most lives and occur most frequently.

Such unequal power relationships in terms of who leads, conducts, and communicates research on disasters influence how disaster scholarship is framed and approached on the ground. Disaster studies are largely informed by Western ontologies and epistemologies that do not necessarily reflect local worldviews and ways of generating knowledge, which means that implications for policy and practice may be misleading.

Identifying these gaps opens up the possibility for reconsidering some of the fundamental assumptions about how research is conducted and ultimately how knowledge is generated and shared. Our call for a code of conduct is about ensuring that ethical concerns have the same primacy as our research questions. We look forward to continuing the conversation.

This post may be cited as:
Gaillard, JC. & Peek, L.  (21 March 2020) Towards a code of conduct for ethical post-disaster research. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/towards-a-code-of-conduct-for-ethical-post-disaster-research

Conversations with an HREC: A Researcher’s perspective

 

Dr Ann-Maree Vallence and Dr Hakuei Fujiyama
College of Science, Health, Engineering and Education, Murdoch University, Perth, Australia
http://profiles.murdoch.edu.au/myprofile/ann-maree-vallence/
http://profiles.murdoch.edu.au/myprofile/hakuei-fujiyama/

In our careers to date, we have had many formal conversations with members of HRECs across different institutions regarding human research ethics applications and amendments. We have also had many informal conversations with members of HRECs regarding standard operating procedures in the labs we have worked in. In this article, we share our experience engaging with our HREC in a different context, specifically, formal negotiations with our HREC following an adverse incident that occurred during our data collection for one of our projects.

To provide some context, our research often uses non-invasive brain stimulation techniques including transcranial magnetic stimulation (TMS). TMS has been commonly used in research since the mid-1980s, and is considered safe, non-invasive, and painless. TMS involves a brief, high-current electrical pulse delivered through a handheld coil placed over the scalp, which induces a magnetic field that passes through the scalp and skull with little attenuation. The magnetic field induces current flow in the underlying brain tissue, and if the stimulation is sufficiently intense, it will activate the underlying brain cells providing a measure of brain excitability [1, 2]. There are published international guidelines for the safe use of TMS [3, 4] that are used to design the experiments and screen for contraindications to TMS (for example it is routine to exclude any persons who have a history of epilepsy, metal implants in the skull, or cardiac pacemakers). Nonetheless, research using TMS involves a small but finite risk. Occasionally, research participants experience a mild and temporary headache, nausea, muscular problems, dizziness, or fainting during or after TMS.
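The routine exclusions described above amount to a simple decision rule: any reported contraindication rules a prospective participant out. The sketch below is a toy illustration only; the field names and function are our own invention, not part of any published screening instrument, and real screening uses the full questionnaire of reference [4].

```python
# Hypothetical illustration of pre-screening against the TMS exclusion
# criteria mentioned in the text: history of epilepsy, metal implants in
# the skull, or a cardiac pacemaker. Field names are illustrative.

TMS_CONTRAINDICATIONS = (
    "history_of_epilepsy",
    "metal_implant_in_skull",
    "cardiac_pacemaker",
)

def eligible_for_tms(responses: dict) -> bool:
    """Return True only if the participant explicitly reports none of the
    listed contraindications. A missing answer is treated as a
    contraindication, erring on the side of exclusion."""
    return all(responses.get(item) is False for item in TMS_CONTRAINDICATIONS)

screening = {
    "history_of_epilepsy": False,
    "metal_implant_in_skull": False,
    "cardiac_pacemaker": False,
}
print(eligible_for_tms(screening))  # True: no contraindications reported
```

The deliberately conservative handling of missing answers mirrors the precautionary stance the authors describe: a small but finite risk justifies excluding anyone who cannot be cleared.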

In a 12-month period in 2017, we experienced three adverse incidents: three participants in our research projects using TMS fainted#. As mentioned above, TMS studies involve a small but known risk of fainting, and there have been some reports of syncope in the literature [5-7]. It is proposed that anxiety and exposure to a novel stimulus are likely responsible for fainting in the context of TMS [3, 5-7]; however, it is not possible to determine whether fainting or syncope is a secondary effect of an emotional response or a direct effect of the TMS on the nervous system.

It was following the reporting of these adverse events that we found ourselves in formal conversations with our HREC as well as informal interactions with several members of the HREC. There were two key steps involved in these conversations worth outlining. First, we invited the members of the HREC to visit the lab and attend a lab meeting in which we were discussing the adverse events. This engagement with the members of the HREC in our lab environment was a mutually beneficial exercise: it helped researchers to fully understand the concerns of the HREC and helped the members of the HREC to better understand our research procedures and aims, and observe our commitment to minimizing the risks associated with our research.

Second, we scrutinised our standard operating procedures to determine what changes we could make to minimize the risk of another adverse event. As outlined above, fainting during a TMS experiment is highly likely to be related to a psycho-physical response, although we cannot rule out the possibility that it is due to a direct effect of TMS on the nervous system. Following the adverse incidents, we have made several changes to our procedures. First, and perhaps most importantly, we send our potential participants a short video so they can see a typical experiment before they enter the lab. Second, when participants come into the lab we ask them whether they have had any substantial change to their routine (for example, sleep pattern or medication), whether they feel stressed by factors independent of the research, and whether they have had food and water in the preceding few hours (we have snacks and water in the lab if participants haven’t eaten). Third, we made changes to our lab setting, such as moving to a modern, clinical testing room which was larger and brighter than the old testing room. Fourth, we take time to explain all of the equipment in the lab, not just the equipment being used in that particular experimental session.

Since the implementation of the changes to our standard operating procedures, we have not experienced an adverse event. The entire process of conversing both formally and informally with the HREC has led to improved written communication of our research to potential participants and the HREC in the form of new project applications. Additionally, the process led to the development of resources for members of the lab, such as evolving standard operating procedures and a formal (compulsory) lab induction, and resources for potential participants, such as the communication of study information via a combination of written, video, and photo formats. Importantly, the implementation of revised procedures not only improved the safety profile of our experiments but also put us in a better position to conduct high-quality research by enriching our resources for training lab members, our communications with participants, and our experience in engaging with HRECs. So, what did we learn from our conversations with an HREC? Conversing with the HREC in this way is beneficial, and it needn’t wait for an adverse event to occur!

# Regarding “a 12-month period in 2017”: note that these are the only fainting incidents we have experienced since we started our roles at Murdoch University in 2015.

References:

1.         Barker AT, Jalinous R and Freeston IL, Non-invasive magnetic stimulation of human motor cortex. Lancet, 1985. 1(8437): p. 1106-7.

2.         Hallett M, Transcranial magnetic stimulation: a primer. Neuron, 2007. 55(2): p. 187-99.

3.         Rossi S, Hallett M, Rossini PM and Pascual-Leone A, Safety, ethical considerations, and application guidelines for the use of transcranial magnetic stimulation in clinical practice and research. Clin Neurophysiol, 2009. 120(12): p. 2008-39.

4.         Rossi S, Hallett M, Rossini PM and Pascual-Leone A, Screening questionnaire before TMS: An update. Clinical Neurophysiology, 2011. 122(8): p. 1686-1686.

5.         Kirton A, Deveber G, Gunraj C and Chen R, Neurocardiogenic syncope complicating pediatric transcranial magnetic stimulation. Pediatr Neurol, 2008. 39(3): p. 196-7.

6.         Kesar TM, McDonald HS, Eicholtz SP and Borich MR, Case report of syncope during a single pulse transcranial magnetic stimulation experiment in a healthy adult participant. Brain stimulation, 2016. 9(3): p. 471.

7.         Gillick BT, Rich T, Chen M and Meekins GD, Case report of vasovagal syncope associated with single pulse transcranial magnetic stimulation in a healthy adult participant. BMC neurology, 2015. 15(1): p. 248.

Dr Yvonne Haigh
Chair, HREC, Murdoch University, Perth, Western Australia

In 2015, Murdoch University’s HREC received increasing numbers of applications covering innovative approaches to cognitive neuroscience, with a specific focus on transcranial magnetic stimulation (TMS). The topic area was very new, and the applications contained significant levels of technical neuroscience language. While the methods of data collection were relatively unfamiliar to the committee members, several members undertook some broad reading to establish greater familiarity and understanding. However, the applications referred to different forms of TMS, which added to the committee’s hesitation. To establish good rapport between the researchers and the committee, we invited the researchers to present on the topic. The aim of the presentation was to provide an overview of the variations of the technology, any side effects, international benchmarks and so forth. The committee was certainly reassured by the researchers’ level of experience and expertise. Moreover, it was also apparent that the researchers had a sound approach to safety and participants’ wellbeing.

However, over the ensuing years a range of adverse incidents occurred, involving dizzy spells and, in a few cases, fainting. The researchers informed the committee and put in place a range of measures. The committee was invited to the laboratory to observe and experience the methods. This was particularly helpful and reassuring for the members who attended and enabled a broader discussion with those committee members who could not attend the laboratory. The Manager, Research Ethics & Integrity was also invited to attend a laboratory team meeting where the incidents were discussed, safety procedures revised, and student researchers reminded of their roles and obligations. This meeting enabled a confident report back to the HREC which was aligned with the adverse incident reports and made the committee’s task of reviewing the incidents significantly clearer.

These conversations and visits resulted in updated procedures (including safety) from the research leaders. This has led to clearer exclusion criteria and additional questions incorporated into the consent process to ensure any known risks are minimised. While adverse incidents are difficult, the outcome in this instance has led to building increased trust between the committee and the research team and a proactive approach from both sides to ensure that new emerging issues are discussed and resolved.

One of the very clear outcomes of this process has been an increased level of quality in these ethics applications which take less committee time and effort to approve.  While the technology is always evolving, and research in the area is ‘cutting edge’, the possibility that this research may change the lives of participants in these projects is evident in the researchers’ applications. From the committee’s perspective, it has been the open and respectful communication between all parties that has generated both a solid working relationship and enabled high level ethical research. The HREC’s response to a more recent ethics application reviewed since the adverse incidents described begins with the words: “The committee were impressed by the quality of this application and the careful attention to detail. The committee thank the researchers for their ongoing efforts to incorporate suggestions and advice in the collaborative effort to attain ethically strong research and positive outcomes for the community”.

This post may be cited as:
Vallence, A. and Fujiyama, H. (4 February 2020) Conversations with an HREC: A Researcher’s perspective. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/conversations-with-an-hrec-a-researchers-perspective

Inclusion of Culturally and Linguistically Diverse populations in Clinical Trials

 

Nik Zeps
AHRECS Consultant

Clinical trials have enormous value to society, as they provide the most robust means of working out whether particular treatments intended to improve the health of our population actually work. Governments have a stated objective to increase participation in clinical trials based upon a series of assumptions that extend beyond their utility as a means to derive the highest level of reliable evidence about the efficacy and safety of interventions. One of these is that the people included derive a tangible benefit from doing so. While this may not be true in all cases (after all, up to 50% of participants may by definition receive an inferior treatment), there is the potential for individual benefit, and it is often stated that those involved in a trial receive a higher standard of care than those not included. Certainly, the additional testing and closer scrutiny of people on a trial may equate in some instances to better care, but this should not be seen as a major driver, as it could be argued that equitable care should be available as a universal right. A less discussed benefit is the connectedness and satisfaction that people may derive from making a tangible contribution to society through participation in clinical research. Furthermore, there may be indeterminate peer-group benefits even if an individual does not benefit.

In an Australian study, Smith et al. (1) found that CALD people whose preferred language was not English (PLNE) had the lowest participation rates in clinical trials. While CALD people whose preferred language was English (PLE) had greater levels of enrollment than the PLNE group, they were still underrepresented relative to their share of the population. This pattern has been described across the world and is identified as a pressing concern (2). Understanding why this is the case is important for a number of reasons. In culturally diverse countries like Australia, testing interventions on samples that exclude a significant proportion of the population could result in evidence that is not applicable to those people. This spans biological differences, which may be relevant to drug efficacy or toxicity, through to interventions such as screening that may fail to be useful in those populations. Where there is evidence that participation in a clinical trial may present specific advantages, there is also the issue of injustice through exclusion of a particular group or groups of persons. Certainly, from an implementation perspective, failing to include a diverse group of participants and to analyze for cultural and behavioral acceptability may mean that even an intervention with merit fails to be taken up.

The reasons for non-inclusion are likely more complex than language barriers alone, although clinical trial protocols that specifically exclude people without high levels of proficiency in English do not help. The language barrier itself should be soluble through greater resourcing of translation services, particularly in areas with a clear need. Certainly, multi-national trials already have PICFs (participant information and consent forms) in multiple languages, and these could be readily deployed through innovative technologies, including eConsent processes.[1] Funders of clinical trials could make such inclusivity a requirement and back it up through provision of specific funding in any grants they award. Legal means to enforce this, whilst possible, are unlikely to drive systemic change and could have the unintended consequence of making it harder to do any trials at all in an environment already subject to extreme financial pressures.

However, a major reason for low levels of participation in clinical trials may be attributed to equity of access to clinical services in the first place. It is hard to recruit people from the general population into clinical trials, but even harder if specific members of the population don’t come to the health service in the first place. There is relatively little research on this topic and it would seem logical to do this as a priority in parallel with examining why people fail to participate in clinical trials due to language barriers. Perhaps clinical trials are simply the canary alerting us to broader inequities that need greater research and investment. Research into solutions to these inequities is accordingly a priority and may solve clinical trial participation rates as a consequence.

References

  1. Smith A, Agar M, Delaney G, Descallar J, Dobell-Brown K, Grand M, et al. Lower trial participation by culturally and linguistically diverse (CALD) cancer patients is largely due to language barriers. Asia Pac J Clin Oncol. 2018;14(1):52-60.
  2. Clark LT, Watkins L, Pina IL, Elmer M, Akinboboye O, Gorham M, et al. Increasing Diversity in Clinical Trials: Overcoming Critical Barriers. Curr Probl Cardiol. 2019;44(5):148-72.

Nik Zeps participated in the CCV forum at the COSA ASM. A full report of the workshop and research by the CCV and McCabe Centre is forthcoming.

[1] https://ctiq.com.au/wp-content/uploads/eConsent-in-Clinical-Trials-compressed.pdf

This post may be cited as:

Zeps, N. (4 December 2019) Inclusion of Culturally and Linguistically Diverse populations in Clinical Trials. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/inclusion-of-culturally-and-linguistically-diverse-populations-in-clinical-trials

The research use of online data/web 2.0 comments

 

Does it require research ethics review and specified consent?

Dr Gary Allen
AHRECS Senior Consultant

The internet is a rich source of information for researchers. On the Web 2.0 we see extensive commentary on numerous life matters, which may be of interest to researchers in a wide range of (sub)disciplines. Research interest in these matters frequently prompts the following questions: Can I use that in my project? Hasn’t that already been published? Is research ethics review required? Is it necessary to obtain express consent for the research use?

It’s important to recognise that these questions aren’t posed in isolation. Cases like the OkCupid data scraping scandal, the Ashley Madison hack, Emotional Contagion, Cambridge Analytica and others provide a disturbing context. At a time when use of the internet and social media is startlingly high (Nielsen 2019; Australian Bureau of Statistics 2018; commentaries such as the WebAlive blog 2019), there is also significant distrust of the platforms people are using. Consequently, there are good reasons for researchers and research ethics reviewers to be cautious about the use of existing material for research, even if the terms and conditions of a site/platform specifically discuss research.

Like many ethics questions, there isn’t a single simple answer that is correct all the time. The use of some kinds of data for research may not meet the National Statement’s definition of human research. Use of other kinds of data may meet that definition but be exempt from review, and so not require explicit consent. Other uses of data that involve no more than low risk can be reviewed outside an HREC meeting, while the remainder must be considered at a full HREC meeting.

AHRECS proposes a three-part test that can be applied to individual projects to determine whether a proposed use of internet data is human research and needs ethics review; it also guides whether explicit, project-specific consent is required. If this test is formally adopted by an institution and its research ethics committees, it would provide a transparent, consistent, and predictable way to judge these matters.

You can find a Word copy of the questions, as well as PNG and PDF copies of the flow diagram, in our subscribers’ area.

For institutions
https://ahrecs.vip/flow…
$350/year

For individuals
https://www.patreon.com/posts/flow…
USD10/month

 

For any questions email enquiry@ahrecs.com

Part One of this test is whether the content of a site or platform is publicly available. One component of this test is whether the researcher will be using scraping, spoofing or hacking of the site/platform to obtain information.

Part Two of the test relates to whether individuals have consented, whether they will be reasonably identifiable from the data and its proposed research use, and whether there are risks to those individuals. A component of this test is exploring whether a waiver of the consent requirement can be justified (i.e., as provided for by paragraphs 2.3.9–12 of the National Statement) and is lawful under any privacy regulation that applies.

Part Three of the test relates to how the proposed project relates to the national human research ethics guidelines – the National Statement – and whether there are any matters that must be considered by a human research ethics committee.  For example, Section 3 of the National Statement (2007 updated 2018) discusses some methodological matters and Section 4 some potential participant issues that must be considered by an HREC.

Individually, any one of these parts could determine that review and consent are required; a project can be exempted from review only if it satisfies all three parts of the test.
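The all-three-parts logic described above can be sketched in a few lines. To be clear, this is our own illustrative rendering, not the official AHRECS question set (which is available in the subscribers’ area); the function and parameter names are assumptions made for the example.

```python
# Hypothetical sketch of the three-part exemption test described in the text.
# Each boolean summarises the outcome of one part of the test for a project.

def requires_review_and_consent(
    publicly_available: bool,         # Part One: content is publicly available
    no_scraping_or_hacking: bool,     # Part One: obtained without scraping,
                                      # spoofing or hacking the site/platform
    no_identifiable_individuals: bool,  # Part Two: no reasonably identifiable
                                        # individuals, and no risks to them
    no_national_statement_matters: bool,  # Part Three: no Section 3/4 matters
                                          # requiring HREC consideration
) -> bool:
    """A project is exempt only if it passes ALL parts of the test;
    failing any single part means ethics review (and possibly explicit
    consent) is required."""
    exempt = (
        publicly_available
        and no_scraping_or_hacking
        and no_identifiable_individuals
        and no_national_statement_matters
    )
    return not exempt

# Public, unscraped, de-identified data with no National Statement triggers
# is exempt, so no review is required:
print(requires_review_and_consent(True, True, True, True))   # False
# Scraped data fails Part One, so review is required:
print(requires_review_and_consent(True, False, True, True))  # True
```

The asymmetry is the point: exemption is a conjunction, while the requirement for review is a disjunction, which is why any single part of the test can pull a project back into review.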

Even if the tests indicate that review or consent is required, that doesn’t mean the research is ethically problematic; it simply means the project requires more careful consideration.

The implication of this is that not all research based upon online comments or social media posts can be exempted from review but, conversely, not all such work must be ethically reviewed.  The approach that should be taken depends upon project-specific design matters.  A strong and justifiable institutional process will have nuanced criteria on these matters.  Failing to establish transparent and predictable policies would be a serious lapse in an important area of research.

Booklet 37 of the Griffith University Research Ethics Manual now incorporates this three-part test.

In the subscribers’ area you will find a suggested question set for the three-part test, as well as a graphic overview of the work flow for the questions.

It is recommended institutions adopt their own version of the test, including policy positions with regard to the use of hacked or scraped data, or the research use of material in a manner at odds with a site/platform’s rules.

References

Australian agency to probe Facebook after shocking revelation – The New Daily. Accessed 16/11/19 from https://thenewdaily.com.au/news/world/2018/04/05/facebook-data-leak-australia/

Australian Bureau of Statistics (2018) 8153.0 – Internet Activity, Australia, June 2018. Retrieved from https://www.abs.gov.au/ausstats/abs@.nsf/mf/8153.0/ (accessed 27 September 2019)

Chambers, C. (2014 01 July) Facebook fiasco: was Cornell’s study of ‘emotional contagion’ an ethics breach? The Guardian. Accessed 16/11/19 from http://www.theguardian.com/science/head-quarters/2014/jul/01/facebook-cornell-study-emotional-contagion-ethics-breach

Griffith University (Updated 2019) Griffith University Research Ethics Manual (GUREM). Accessed 16/11/19 from https://www.griffith.edu.au/research/research-services/research-ethics-integrity/human/gurem

McCook, A. (2016 16 May) Publicly available data on thousands of OKCupid users pulled over copyright claim.  Retraction Watch. Accessed 16/11/19 from http://retractionwatch.com/2016/05/16/publicly-available-data-on-thousands-of-okcupid-users-pulled-over-copyright-claim/

Nielsen (2019, 26 July) TOTAL CONSUMER REPORT 2019: Navigating the trust economy in CPG. Retrieved from https://www.nielsen.com/us/en/insights/report/2019/total-consumer-report-2019/ (accessed 27 September 2019)

NHMRC (2007 updated 2018) National Statement on Ethical Conduct in Human Research. Accessed 17/11/19 from https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2007-updated-2018

Satran, J. (2015 02 September) Ashley Madison Hack Creates Ethical Conundrum For Researchers. Huffington Post. Accessed 16/11/19 from http://www.huffingtonpost.com.au/entry/ashley-madison-hack-creates-ethical-conundrum-for-researchers_55e4ac43e4b0b7a96339dfe9?section=australia&adsSiteOverride=au

WebAlive (2019 24 June) The State of Australia’s Ecommerce in 2019 Retrieved from https://www.webalive.com.au/ecommerce-statistics-australia/ (accessed 27 September 2019).

Recommendations for further reading

Editorial (2018 12 March) Cambridge Analytica controversy must spur researchers to update data ethics. Nature. Accessed 16/11/19 from https://www.nature.com/articles/d41586-018-03856-4?utm_source=briefing-dy&utm_medium=email&utm_campaign=briefing&utm_content=20180329

Neuroskeptic (2018 14 July) The Ethics of Research on Leaked Data: Ashley Madison. Discover. Accessed 16/11/19 from http://blogs.discovermagazine.com/neuroskeptic/2018/07/14/ethics-research-leaked-ashley-madison/#.Xc97NC1L0RU

Newman, L. (2017 3 July) WikiLeaks Just Dumped a Mega-Trove of CIA Hacking Secrets. Wired Magazine. Accessed 16/11/19 from https://www.wired.com/2017/03/wikileaks-cia-hacks-dump/

Weaver, M (2018 25 April) Cambridge University rejected Facebook study over ‘deceptive’ privacy standards. TheGuardian. Accessed 16/11/19 from https://www.theguardian.com/technology/2018/apr/24/cambridge-university-rejected-facebook-study-over-deceptive-privacy-standards

Woodfield, K (ed.) (2017) The Ethics of Online Research. Emerald Publishing. https://doi.org/10.1108/S2398-601820180000002004

Zhang, S. (2016 20 May) Scientists are just as confused about the ethics of big-data research as you. Wired Magazine. Accessed 16/11/19 from http://www.wired.com/2016/05/scientists-just-confused-ethics-big-data-research/

Competing interests

Gary is the principal author of the Griffith University Research Ethics Manual (GUREM) and receives a proportion of license sales.

This post may be cited as:
Allen, G. (23 November 2019) The research use of online data/web 2.0 comments. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/the-research-use-of-online-data-web-2-0-comments
