The Ethics and Politics of Qualitative Data Sharing

Mark Israel (AHRECS and Murdoch University) and Farida Fozdar (The University of Western Australia).

There is considerable momentum behind the argument that public data is a national asset and should be made more easily available for research purposes. In introducing the Data Sharing and Release Legislative Reforms Discussion Paper in September 2019, the Australian Commonwealth Minister for Government Services argued that proposed changes to data use in the public sector would mean that

Australia’s research sector will be able to use public data to improve the development of solutions to public problems and to test which programs are delivering as intended—and which ones are not.

Data reuse is seen as a cost-efficient use of public funds, reducing the burden on participants and communities. And the argument is not restricted to government: journals, universities and funding agencies are increasingly requiring social scientists to make their data available to other researchers, and even to the public, in the interests of scientific inquiry, accountability, innovation and progress. For example, Research Councils UK (RCUK) takes the benefits associated with data sharing for granted:

Publicly-funded research data are a public good, produced in the public interest; Publicly-funded research data should be openly available to the maximum extent possible.

In Australia, both the National Health and Medical Research Council (NHMRC) and the Australian Research Council (ARC) have adopted open access policies that apply to research funded by those councils. While the ARC policy only refers to research outputs and excludes research data and research data outputs, the NHMRC strongly encourages open access to research data.

And yet, several social researchers have argued that data sharing requirements, developed in the context of medical research using quantitative data, may be inappropriate for qualitative research. Their arguments rest on a mix of ethical, practical and legal grounds.

In an article entitled ‘Whose Data Are They Anyway?’, Parry and Mauthner (2004) recognised unique issues associated with archiving qualitative data. The main considerations are around confidentiality (is it possible to anonymise the data by changing details without losing validity?) and informed consent (can participants know and consent to all potential future uses of their data at a single point in time, and, alternatively, what extra burden do repeated requests for consent place on participants?).

There is also the more philosophical issue of the reconfiguration of the relationship between researchers and participants including moral responsibilities and commitments, potential violations of trust, and the risk of data misrepresentation. There are deeper epistemological issues, including the joint construction of qualitative data, and the reflexivity involved in preparing data for secondary analysis. As a result, Mauthner (2016) critiqued ‘regulation creep’ whereby regulators in the United Kingdom have made data sharing a moral responsibility associated with ethical research, when in fact it may be more ethical not to share data.

In addition, there is a growing movement to recognise the rights of some communities to control their own data. Based on the fundamental principle of self-determination, some Indigenous peoples have claimed sovereignty over their own data: ‘The concept of data sovereignty, … is linked with indigenous peoples’ right to maintain, control, protect and develop their cultural heritage, traditional knowledge and traditional cultural expressions, as well as their right to maintain, control, protect and develop their intellectual property over these.’ (Tauli-Corpuz, in Kukutai and Taylor, 2016:xxii). The goal is that the use of such data should enhance self-determination and development.

To be fair to both the Commonwealth Minister and the RCUK, each recognises that data sharing should only occur prudently and safely and acknowledges that the benefits of sharing need to be balanced against rights to privacy (the balance proposed in earlier Australian legislative proposals has already been subjected to academic critique). The challenge is to ensure that our understanding of how these competing claims should be assessed is informed by an understanding of the nature of qualitative as well as quantitative data, of how data might be co-constructed or owned, of the cultural sensitivity that might be required to interpret and present it, and of the damage that might be done as a result of misuse or misrepresentation.

Acknowledgements
This article draws on material drafted for Fozdar and Israel (under review).

References:

Fozdar, F. and Israel, M. (under review) Sociological ethics. In Mackay, D. and Iltis, A. (eds) The Oxford Handbook of Research Ethics. Oxford: Oxford University Press.

Kukutai, T. and Taylor, J. (Eds.) (2016) Indigenous data sovereignty: Toward an agenda (Vol. 38). Canberra: ANU Press.

Mauthner, N.S. (2016) Should data sharing be regulated? In van den Hoonaard, W. and Hamilton, A. (eds) The Ethics Rupture: Exploring alternatives to formal research-ethics review. Toronto: University of Toronto Press. pp.206-229.

Parry, O. and Mauthner, N.S. (2004) Whose data are they anyway? Practical, legal and ethical issues in archiving qualitative research data. Sociology, 38(1), 139-152.

This post may be cited as:
Israel, M. & Fozdar, F. (5 February 2020) The Ethics and Politics of Qualitative Data Sharing. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/the-ethics-and-politics-of-qualitative-data-sharing

Conversations with an HREC: A Researcher’s perspective

Dr Ann-Maree Vallence and Dr Hakuei Fujiyama
College of Science, Health, Engineering and Education, Murdoch University, Perth, Australia
http://profiles.murdoch.edu.au/myprofile/ann-maree-vallence/
http://profiles.murdoch.edu.au/myprofile/hakuei-fujiyama/

In our careers to date, we have had many formal conversations with members of HRECs across different institutions regarding human research ethics applications and amendments. We have also had many informal conversations with members of HRECs regarding standard operating procedures in the labs we have worked in. In this article, we share our experience of engaging with our HREC in a different context: formal negotiations following adverse incidents that occurred during data collection for one of our projects.

To provide some context, our research often uses non-invasive brain stimulation techniques including transcranial magnetic stimulation (TMS). TMS has been commonly used in research since the mid-1980s, and is considered safe, non-invasive, and painless. TMS involves a brief, high-current electrical pulse delivered through a handheld coil placed over the scalp, which induces a magnetic field that passes through the scalp and skull with little attenuation. The magnetic field induces current flow in the underlying brain tissue, and if the stimulation is sufficiently intense, it will activate the underlying brain cells providing a measure of brain excitability [1, 2]. There are published international guidelines for the safe use of TMS [3, 4] that are used to design the experiments and screen for contraindications to TMS (for example it is routine to exclude any persons who have a history of epilepsy, metal implants in the skull, or cardiac pacemakers). Nonetheless, research using TMS involves a small but finite risk. Occasionally, research participants experience a mild and temporary headache, nausea, muscular problems, dizziness, or fainting during or after TMS.

In a 12-month period in 2017, we experienced three adverse incidents: three participants in our research projects using TMS fainted#. As mentioned above, TMS studies involve a small but known risk of fainting, and there have been some reports of syncope in the literature [5-7]. It is proposed that anxiety and exposure to a novel stimulus are likely responsible for fainting in the context of TMS [3, 5-7]; however, it is not possible to determine whether fainting or syncope is a secondary effect of an emotional response or a direct effect of the TMS on the nervous system.

It was following the reporting of these adverse events that we found ourselves in formal conversations with our HREC, as well as informal interactions with several of its members. There were two key steps involved in these conversations worth outlining. First, we invited members of the HREC to visit the lab and attend a lab meeting at which we discussed the adverse events. This engagement with members of the HREC in our lab environment was a mutually beneficial exercise: it helped us to fully understand the concerns of the HREC, and helped the members of the HREC to better understand our research procedures and aims and to observe our commitment to minimising the risks associated with our research.

Second, we scrutinised our standard operating procedures to determine what changes we could make to minimise the risk of another adverse event. As outlined above, fainting during a TMS experiment is highly likely to be related to a psycho-physical response, although we cannot rule out the possibility that it is due to a direct effect of TMS on the nervous system. Following the adverse incidents, we have made several changes to our procedures. First, and perhaps most importantly, we send our potential participants a short video so they can see a typical experiment before they enter the lab. Second, when participants come into the lab we ask them whether they have had any substantial change to their routine (for example, sleep pattern or medication), whether they feel stressed by factors independent of the research, and whether they have had food and water in the preceding few hours (we have snacks and water in the lab if participants haven’t eaten). Third, we made changes to our lab setting, such as moving to a modern, clinical testing room that is larger and brighter than the old testing room. Fourth, we take time to explain all of the equipment in the lab, not just the equipment being used in that particular experimental session.

Since the implementation of the changes to our standard operating procedures, we have not experienced an adverse event. The entire process of conversing both formally and informally with the HREC has led to improved written communication of our research to potential participants and to the HREC in the form of new project applications. Additionally, the process led to the development of resources for members of the lab, such as evolving standard operating procedures and a formal (compulsory) lab induction, and resources for potential participants, such as the communication of study information via a combination of written, video and photo formats. Importantly, the implementation of revised procedures not only improved the safety profile of our experiments, but also put us in a better position to conduct high-quality research by enriching our resources for training lab members, our communication with participants, and our experience in engaging with HRECs. So, what did we learn from our conversation with an HREC? The process of conversing with the HREC is beneficial and needn’t wait for an adverse event to occur!

# “In a 12-month period in 2017”: note that these are the only fainting incidents we have experienced since we started our roles at Murdoch University in 2015.

References:

1. Barker AT, Jalinous R and Freeston IL, Non-invasive magnetic stimulation of human motor cortex. Lancet, 1985. 1(8437): p. 1106-7.

2. Hallett M, Transcranial magnetic stimulation: a primer. Neuron, 2007. 55(2): p. 187-99.

3. Rossi S, Hallett M, Rossini PM and Pascual-Leone A, Safety, ethical considerations, and application guidelines for the use of transcranial magnetic stimulation in clinical practice and research. Clin Neurophysiol, 2009. 120(12): p. 2008-39.

4. Rossi S, Hallett M, Rossini PM and Pascual-Leone A, Screening questionnaire before TMS: An update. Clinical Neurophysiology, 2011. 122(8): p. 1686.

5. Kirton A, Deveber G, Gunraj C and Chen R, Neurocardiogenic syncope complicating pediatric transcranial magnetic stimulation. Pediatr Neurol, 2008. 39(3): p. 196-7.

6. Kesar TM, McDonald HS, Eicholtz SP and Borich MR, Case report of syncope during a single pulse transcranial magnetic stimulation experiment in a healthy adult participant. Brain Stimulation, 2016. 9(3): p. 471.

7. Gillick BT, Rich T, Chen M and Meekins GD, Case report of vasovagal syncope associated with single pulse transcranial magnetic stimulation in a healthy adult participant. BMC Neurology, 2015. 15(1): p. 248.

Dr Yvonne Haigh
Chair, HREC, Murdoch University, Perth, Western Australia

In 2015, Murdoch University’s HREC received increasing numbers of applications covering innovative approaches to cognitive neuroscience, with a specific focus on transcranial magnetic stimulation (TMS). The topic area was very new to the committee and the applications contained significant levels of technical neuroscience language. While the methods of data collection were relatively unfamiliar to the committee members, several members did undertake some broad reading in order to establish greater familiarity and understanding. However, the applications referred to different forms of TMS, which further exacerbated the committee’s hesitation. In order to establish good rapport between the researchers and the committee, we invited the researchers to present on the topic of TMS. The aim of the presentation was to provide an overview of the variations of the technology, any side effects, international benchmarks and so forth. The committee was certainly reassured by the researchers’ level of experience and expertise. Moreover, it was also apparent that the researchers had a sound approach to safety and participants’ wellbeing.

However, over the ensuing years a range of adverse incidents occurred, involving dizzy spells and, in a few cases, fainting. The researchers informed the committee and put in place a range of responses. The committee was invited to the laboratory to observe and experience the methods. This was particularly helpful and reassuring for the members who attended, and it enabled a broader discussion with those committee members who could not attend the laboratory. The Manager, Research Ethics & Integrity was also invited to attend a laboratory team meeting where the incidents were discussed, safety procedures revised, and student researchers reminded of their roles and obligations. This meeting enabled a confident report back to the HREC, which was aligned with the adverse incident reports and made the committee’s task of reviewing the incidents significantly clearer.

These conversations and visits resulted in updated procedures (including safety procedures) from the research leaders. This has led to clearer exclusion criteria and additional questions incorporated into the consent process to ensure any known risks are minimised. While adverse incidents are difficult, the outcome in this instance has been increased trust between the committee and the research team and a proactive approach from both sides to ensure that newly emerging issues are discussed and resolved.

One of the very clear outcomes of this process has been an increased level of quality in these ethics applications, which take less committee time and effort to approve. While the technology is always evolving, and research in the area is ‘cutting edge’, the possibility that this research may change the lives of participants in these projects is evident in the researchers’ applications. From the committee’s perspective, it has been the open and respectful communication between all parties that has generated a solid working relationship and enabled high-level ethical research. The HREC’s response to a more recent ethics application, reviewed since the adverse incidents described above, begins with the words: “The committee were impressed by the quality of this application and the careful attention to detail. The committee thank the researchers for their ongoing efforts to incorporate suggestions and advice in the collaborative effort to attain ethically strong research and positive outcomes for the community”.

This post may be cited as:
Vallence, A. and Fujiyama, H. (4 February 2020) Conversations with an HREC: A Researcher’s perspective. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/conversations-with-an-hrec-a-researchers-perspective

The research use of online data/web 2.0 comments

Does it require research ethics review and specified consent?

Dr Gary Allen
AHRECS Senior Consultant

The internet is a rich source of information for researchers. On the Web 2.0 we see extensive commentary on numerous life matters, which may be of interest to researchers in a wide range of (sub)disciplines. Research interest in these matters frequently prompts the following questions: Can I use that in my project? Hasn’t that already been published? Is research ethics review required? Is it necessary to obtain express consent for the research use?

It’s important to recognise that these questions aren’t posed in isolation. Cases like the OkCupid data scraping scandal, the Ashley Madison hack, the Facebook emotional contagion experiment, Cambridge Analytica and others provide a disturbing context. At a time when use of the internet and social media is startlingly high (Nielsen 2019, Australian Bureau of Statistics 2018, and commentaries such as the WebAlive blog 2019), there is also significant distrust of the platforms people are using. Consequently, there are good reasons for researchers and research ethics reviewers to be cautious about the use of existing material for research, even if the terms and conditions of a site/platform specifically discuss research.

Like many ethics questions, there isn’t a single simple answer that is correct all the time. The use of some kinds of data for research may not meet the National Statement’s definition of human research. Use of other kinds of data may meet that definition but be exempt from review and so not require explicit consent. Other uses of data that involve no more than low risk can be reviewed outside an HREC meeting, and the remainder will have to be considered at an HREC meeting.

AHRECS proposes a three-part test that can be applied to individual projects to determine whether a proposed use of internet data is human research and needs ethics review; it will also guide whether explicit, project-specific consent is required. If this test were formally adopted by an institution and by its research ethics committees, it would provide a transparent, consistent and predictable way to judge these matters.

You can find a Word copy of the questions, as well as PNG and PDF copies of the flow diagram, in our subscribers’ area.
For institutions: https://ahrecs.vip/flow… ($350/year)

For individuals: https://www.patreon.com/posts/flow… (USD10/month)

For any questions, email enquiry@ahrecs.com

Part One of this test is whether the content of a site or platform is publicly available. One component of this test is whether the researcher will be using scraping, spoofing or hacking of the site/platform to obtain information.

Part Two of the test relates to whether individuals have consented, whether they will be reasonably identifiable from the data and its proposed research use, and whether there are risks to those individuals. A component of this part is exploring whether an exemption from the consent requirement can be justified (i.e. as provided for by paragraphs 2.3.9–2.3.12 of the National Statement) and is lawful under any privacy regulation that applies.

Part Three of the test relates to how the proposed project relates to the national human research ethics guidelines – the National Statement – and whether there are any matters that must be considered by a human research ethics committee.  For example, Section 3 of the National Statement (2007 updated 2018) discusses some methodological matters and Section 4 some potential participant issues that must be considered by an HREC.

Individually, any one of these parts could determine that review and consent are required, but all three parts of the test must be satisfied before a project can be exempted from review.
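
To make the combined logic concrete, here is a minimal illustrative sketch in Python of how the three parts might be wired into an institution’s self-assessment tool. The class, field and function names are our own assumptions for illustration only; they are not drawn from the National Statement or from any published AHRECS tool, and an institution adopting the test would need to tailor the questions to its own policy positions.

```python
from dataclasses import dataclass

@dataclass
class OnlineDataProject:
    publicly_available: bool               # Part One: is the content publicly available?
    uses_scraping_spoofing_or_hacking: bool  # Part One: obtained by scraping, spoofing or hacking?
    individuals_identifiable: bool         # Part Two: are individuals reasonably identifiable?
    consent_or_lawful_waiver: bool         # Part Two: consent given, or a justified and lawful waiver?
    risk_to_individuals: bool              # Part Two: does the proposed use pose risks to individuals?
    ns_hrec_triggers: bool                 # Part Three: National Statement s3/s4 matters requiring HREC review?

def requires_review_and_consent(p: OnlineDataProject) -> bool:
    """Return True if ethics review (and project-specific consent) is indicated.

    Failing any one part indicates review/consent; only a project that
    passes all three parts could be considered for exemption.
    """
    part_one = p.publicly_available and not p.uses_scraping_spoofing_or_hacking
    part_two = (not p.individuals_identifiable or p.consent_or_lawful_waiver) and not p.risk_to_individuals
    part_three = not p.ns_hrec_triggers
    return not (part_one and part_two and part_three)

# Example: public forum posts, no identifiable individuals, no s3/s4 triggers.
example = OnlineDataProject(True, False, False, False, False, False)
print(requires_review_and_consent(example))  # False -> could be considered for exemption
```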

Even if the tests indicate that review or consent is required, that doesn’t mean the research is ethically problematic, just that the project requires more considered attention.

The implication of this is that not all research based upon online comments or social media posts can be exempted from review but, conversely, not all such work must be ethically reviewed.  The approach that should be taken depends upon project-specific design matters.  A strong and justifiable institutional process will have nuanced criteria on these matters.  Failing to establish transparent and predictable policies would be a serious lapse in an important area of research.

Booklet 37 of the Griffith University Research Ethics Manual now incorporates this three-part test.

In the subscribers’ area you will find a suggested question set for the three-part test, as well as a graphic overview of the work flow for the questions.

It is recommended that institutions adopt their own version of the test, including policy positions with regard to the use of hacked or scraped data, or the research use of material in a manner at odds with a site/platform’s rules.

References

The New Daily (2018, 5 April) Australian agency to probe Facebook after shocking revelation. Accessed 16/11/19 from https://thenewdaily.com.au/news/world/2018/04/05/facebook-data-leak-australia/

Australian Bureau of Statistics (2018) 8153.0 – Internet Activity, Australia, June 2018. Retrieved from https://www.abs.gov.au/ausstats/abs@.nsf/mf/8153.0/ (accessed 27 September 2019)

Chambers, C. (2014 01 July) Facebook fiasco: was Cornell’s study of ‘emotional contagion’ an ethics breach? The Guardian. Accessed 16/11/19 from http://www.theguardian.com/science/head-quarters/2014/jul/01/facebook-cornell-study-emotional-contagion-ethics-breach

Griffith University (Updated 2019) Griffith University Research Ethics Manual (GUREM). Accessed 16/11/19 from https://www.griffith.edu.au/research/research-services/research-ethics-integrity/human/gurem

McCook, A. (2016 16 May) Publicly available data on thousands of OKCupid users pulled over copyright claim.  Retraction Watch. Accessed 16/11/19 from http://retractionwatch.com/2016/05/16/publicly-available-data-on-thousands-of-okcupid-users-pulled-over-copyright-claim/

Nielsen (2019, 26 July) TOTAL CONSUMER REPORT 2019: Navigating the trust economy in CPG. Retrieved from https://www.nielsen.com/us/en/insights/report/2019/total-consumer-report-2019/ (accessed 27 September 2019)

NHMRC (2007 updated 2018) National Statement on Ethical Conduct in Human Research. Accessed 17/11/19 from https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2007-updated-2018

Satran, J. (2015 02 September) Ashley Madison Hack Creates Ethical Conundrum For Researchers. Huffington Post. Accessed 16/11/19 from http://www.huffingtonpost.com.au/entry/ashley-madison-hack-creates-ethical-conundrum-for-researchers_55e4ac43e4b0b7a96339dfe9?section=australia&adsSiteOverride=au

WebAlive (2019 24 June) The State of Australia’s Ecommerce in 2019. Retrieved from https://www.webalive.com.au/ecommerce-statistics-australia/ (accessed 27 September 2019).

Recommendations for further reading

Editorial (2018 12 March) Cambridge Analytica controversy must spur researchers to update data ethics. Nature. Accessed 16/11/19 from https://www.nature.com/articles/d41586-018-03856-4?utm_source=briefing-dy&utm_medium=email&utm_campaign=briefing&utm_content=20180329

Neuroskeptic (2018 14 July) The Ethics of Research on Leaked Data: Ashley Madison. Discover. Accessed 16/11/19 from http://blogs.discovermagazine.com/neuroskeptic/2018/07/14/ethics-research-leaked-ashley-madison/#.Xc97NC1L0RU

Newman, L. (2017 3 July) WikiLeaks Just Dumped a Mega-Trove of CIA Hacking Secrets. Wired Magazine. Accessed 16/11/19 from https://www.wired.com/2017/03/wikileaks-cia-hacks-dump/

Weaver, M (2018 25 April) Cambridge University rejected Facebook study over ‘deceptive’ privacy standards. The Guardian. Accessed 16/11/19 from https://www.theguardian.com/technology/2018/apr/24/cambridge-university-rejected-facebook-study-over-deceptive-privacy-standards

Woodfield, K (ed.) (2017) The Ethics of Online Research. Emerald Publishing. https://doi.org/10.1108/S2398-601820180000002004

Zhang, S. (2016 20 May) Scientists are just as confused about the ethics of big-data research as you. Wired Magazine. Accessed 16/11/19 from http://www.wired.com/2016/05/scientists-just-confused-ethics-big-data-research/

Competing interests

Gary is the principal author of the Griffith University Research Ethics Manual (GUREM) and receives a proportion of license sales.

This post may be cited as:
Allen, G. (23 November 2019) The research use of online data/web 2.0 comments. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/the-research-use-of-online-data-web-2-0-comments

Smarter proportional research ethics review

Rushing toward a faster review decision should not mean relaxing standards or playing chicken with stricter central control

Gary Allen, Mark Israel and Colin Thomson

Too often, there is a danger that ‘expedited ethical review’ (a term not used in the National Statement since 1999) might equate to an approach that abridges the review process to the point where it’s little more than a friendly exchange between peers or a nod to seniority. We won’t call out the well-reported cases where it is hard to fathom how the projects were granted ethics approval. Such cases should make us uncomfortable, because they are invitations to replace institutional self-regulation with something harsher and unsympathetic.

Don’t get us wrong, we’ve spoken often and enthusiastically about the value of well-designed proportional review arrangements. We have assisted many clients, large and small, to design and implement such arrangements and believe that they form part of a well-conceived review system.

A proportional review arrangement can deliver a review outcome much faster than consideration by a human research ethics committee, but instead of a ‘Claytons’ or mock-review, it should have the following features:

  1. While there can, and should, be a mechanism for an automated quick self-assessment of whether a proposed project qualifies for ethics review other than by a research ethics committee, the process should:
    1. not rely on questions along the lines of “Is this a low risk research project?”;
    2. draw on, reference and link to guidance material;
    3. when using trigger questions, ensure they are nuanced, with probing sub-questions;
    4. include confirmation of a quick assessment by an experienced ethics officer or chairperson; and
    5. retain an applicant’s responses, both as a record of what they said about the project and for future evaluation of whether the arrangement is correctly assessing new projects and guiding applications along the correct review pathway.
  2. The process should preferably be online, easily (re)configurable and easily auditable, with information entered by applicants and ‘triaged’ by an ethics officer.
  3. A quality online system will populate committee papers and reports, issue reminders and pre-populate forms with known information.
  4. While many projects may be reviewed outside the human research ethics committee, the reviews should be conducted by experienced persons who participate in annual professional development and who can draw upon internal and external policy and resource material.

In Australia, an institution’s proportional review arrangements might include the following pathways (a simple sketch of this triage logic follows the list):

  1. Prior review – Research that has already been reviewed by another HREC, an appropriately delegated review body, or an international body equivalent to an Australian research ethics review body.
  2. Scope checker – A test to confirm whether a proposed project is in fact human research.
  3. Exemption test – A test to determine whether the proposed research is of a type an institution could exempt from ethics review as per the National Statement.
  4. HREC review required test – A test to confirm whether the research project is of a type the National Statement specifies must be reviewed by an HREC.
  5. Institutional exemption test – Many institutions exempt some categories of human research from research ethics review (e.g. universities often exempt course evaluations and practical activities for a teaching-learning purpose).
  6. Negligible risk research – Subject to qualifying criteria, an institution might establish a negligible risk review pathway in which applications are considered administratively.
  7. Low-risk, minimal ethical issue research – Subject to qualifying criteria, proposed projects that are low risk and have minimal ethical sensitivity could be reviewed by the chair of the research ethics committee.
  8. Low-risk, some ethical issue research – Again subject to qualifying criteria, proposed projects that are low risk but have some ethical sensitivity could be reviewed by a small panel of the research ethics committee (including an external member of the committee).
  9. HREC review – Only human research (see 2) that has not previously been reviewed (see 1), is not exempt (see 3 and 4) and has not been classified as negligible risk (see 6) or low risk (see 7 and 8) needs to be reviewed by an HREC.
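
As a rough illustration of how such a triage might be automated, the following Python sketch routes an application through the nine pathways in order. The dictionary keys and pathway labels are illustrative assumptions only; they are not drawn from the National Statement or from any particular institution’s system, and a real arrangement would still need the confirmation, auditing and record-keeping steps described above.

```python
def review_pathway(app: dict) -> str:
    """Route an application to the least burdensome defensible review pathway."""
    if app.get("prior_review"):                  # 1. already reviewed elsewhere
        return "prior review accepted"
    if not app.get("is_human_research"):         # 2. scope checker
        return "out of scope - no ethics review"
    if app.get("ns_exempt"):                     # 3. National Statement exemption
        return "exempt from review"
    if app.get("ns_requires_hrec"):              # 4. National Statement requires HREC review
        return "full HREC review"
    if app.get("institutional_exemption"):       # 5. e.g. course evaluations
        return "institutionally exempt"
    if app.get("risk") == "negligible":          # 6. administrative review
        return "negligible risk - administrative review"
    if app.get("risk") == "low" and not app.get("ethical_sensitivity"):
        return "low risk - chair review"         # 7. chair review
    if app.get("risk") == "low":
        return "low risk - panel review"         # 8. small panel review
    return "full HREC review"                    # 9. everything else

# Example: a low-risk project with some ethical sensitivity goes to a small panel.
print(review_pathway({"is_human_research": True, "risk": "low", "ethical_sensitivity": True}))
```

The ordering is the design choice that matters here: cheaper checks (prior review, scope, exemption) come first, and the full committee is reserved for projects that no earlier pathway can defensibly handle.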

An arrangement with the features listed above would allow for review that is proportional, timely, efficient and justifiable. Reviews that are merely expedited or fast place us all at risk. The increasing examples of “how could that have been approved?” make it feel as though some institutions are gambling that their desire to meet researchers’ calls for quick, if superficial, review won’t be exposed by unethical practice. Perhaps they are correct, but every new reported review misstep makes us more nervous. Realistically, establishing a nationally administered, reliable, robust and agile proportional review process would require a substantial investment of time and other resources, so it is unlikely to happen. But what poor review processes could do is invite far more detailed direction on how institutions can design, conduct and monitor processes outside of an HREC. In our experience, there are greater and longer-lasting benefits that can accrue from an institution having a high-quality approach to proportional review.

The above is a summary of the discussion we typically include in blueprint documents about establishing a robust proportional review arrangement. We have included some further notes on this topic on our https://www.ahrecs.vip and Patreon pages.

Please contact us at proportional@ahrecs.com if you would like to discuss how we might assist your institution.

This post may be cited as:
Allen, G., Israel, M. & Thomson, C. (26 August 2019) Smarter proportional research ethics review.  Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/smarter-proportional-research-ethics-review
