ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


The research use of online data/web 2.0 comments

 

Does it require research ethics review and specified consent?

Dr Gary Allen
AHRECS Senior Consultant

The internet is a rich source of information for researchers. On the Web 2.0 we see extensive commentary on numerous life matters, which may be of interest to researchers in a wide range of (sub)disciplines. Research interest in these matters frequently prompts the following questions: Can I use that in my project? Hasn’t that already been published? Is research ethics review required? Is it necessary to obtain express consent for the research use?

It’s important to recognise that these questions aren’t posed in isolation. Cases like the OkCupid data scraping scandal, the Ashley Madison hack, Facebook’s emotional contagion experiment, Cambridge Analytica and others provide a disturbing context. At a time when use of the internet and social media is startlingly high (Nielsen 2019, Australian Bureau of Statistics 2018, commentaries such as the WebAlive blog 2019), there is also significant distrust of the platforms people are using. Consequently, there are good reasons for researchers and research ethics reviewers to be cautious about the use of existing material for research, even if the terms and conditions of a site/platform specifically discuss research.

Like many ethics questions, there isn’t a single simple answer that is correct all the time. The use of some kinds of data for research may not meet the National Statement’s definition of human research. Use of other kinds of data may meet that definition but be exempt from review, and so not require explicit consent. Uses of data that involve no more than low risk can be reviewed outside an HREC meeting, while others must actually be considered at an HREC meeting.

AHRECS proposes a three-part test that can be applied to individual projects to determine whether a proposed use of internet data is human research, whether it needs ethics review, and whether explicit, project-specific consent is required. If this test is formally adopted by an institution and by its research ethics committees, it provides a transparent, consistent and predictable way to judge these matters.

You can find a Word copy of the questions, as well as PNG and PDF copies of the flow diagram, in our subscribers’ area.

For institutions
https://ahrecs.vip/flow…
$350/year

For individuals
https://www.patreon.com/posts/flow…
USD10/month

 

For any questions email enquiry@ahrecs.com

Part One of the test is whether the content of a site or platform is publicly available. A component of this part is whether the researcher will use scraping, spoofing or hacking of the site/platform to obtain information.

Part Two of the test relates to whether individuals have consented, whether they will be reasonably identifiable from the data and its proposed research use, and whether there are risks to those individuals. A component of this part is exploring whether an exemption from the consent requirement is necessary (i.e. as provided for by paragraphs 2.3.9–2.3.12 of the National Statement) and lawful under any privacy regulation that applies.

Part Three of the test concerns how the proposed project relates to the national human research ethics guidelines – the National Statement – and whether there are any matters that must be considered by a human research ethics committee. For example, Section 3 of the National Statement (2007, updated 2018) discusses some methodological matters, and Section 4 some potential participant issues, that must be considered by an HREC.

Individually, any one of these parts could determine that review and consent are required; a project must satisfy all three parts before it can be exempted from review.
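The logic of the three parts can be sketched as a simple decision function. This is an illustrative sketch only: the function and parameter names below are hypothetical, not part of the AHRECS question set or the National Statement, and a real assessment would involve the nuanced trigger questions discussed above rather than four booleans.

```python
def requires_ethics_review(publicly_available: bool,
                           obtained_by_scraping_or_hacking: bool,
                           identifiable_with_risk: bool,
                           hrec_matters_triggered: bool) -> bool:
    """Return True if a proposed use of internet data needs ethics review.

    Part One: the content must be publicly available and not obtained
              by scraping, spoofing or hacking the site/platform.
    Part Two: individuals must not be reasonably identifiable from the
              data and its proposed use in a way that exposes them to risk.
    Part Three: no National Statement matters (e.g. Sections 3 and 4)
              may require consideration by an HREC.
    A project is exempt only if all three parts are satisfied; failing
    any one part means review (and possibly consent) is required.
    """
    part_one = publicly_available and not obtained_by_scraping_or_hacking
    part_two = not identifiable_with_risk
    part_three = not hrec_matters_triggered
    exempt = part_one and part_two and part_three
    return not exempt
```

Note how the conjunction captures the point made above: any single failing part is sufficient to require review, while exemption demands all three.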

Even if the tests indicate review/consent is required, that doesn’t mean the research is ethically problematic, just that the project requires more considered attention.

The implication of this is that not all research based upon online comments or social media posts can be exempted from review but, conversely, not all such work must be ethically reviewed.  The approach that should be taken depends upon project-specific design matters.  A strong and justifiable institutional process will have nuanced criteria on these matters.  Failing to establish transparent and predictable policies would be a serious lapse in an important area of research.

Booklet 37 of the Griffith University Research Ethics Manual now incorporates this three-part test.

In the subscribers’ area you will find a suggested question set for the three-part test, as well as a graphic overview of the work flow for the questions.

It is recommended that institutions adopt their own version of the test, including policy positions with regard to the use of hacked or scraped data, or the research use of material in a manner at odds with a site/platform’s rules.

References

Australian agency to probe Facebook after shocking revelation – The New Daily. Accessed 16/11/19 from https://thenewdaily.com.au/news/world/2018/04/05/facebook-data-leak-australia/

Australian Bureau of Statistics (2018) 8153.0 – Internet Activity, Australia, June 2018. Retrieved from https://www.abs.gov.au/ausstats/abs@.nsf/mf/8153.0/ (accessed 27 September 2019)

Chambers, C. (2014, 1 July) Facebook fiasco: was Cornell’s study of ‘emotional contagion’ an ethics breach? The Guardian. Accessed 16/11/19 from http://www.theguardian.com/science/head-quarters/2014/jul/01/facebook-cornell-study-emotional-contagion-ethics-breach

Griffith University (Updated 2019) Griffith University Research Ethics Manual (GUREM). Accessed 16/11/19 from https://www.griffith.edu.au/research/research-services/research-ethics-integrity/human/gurem

McCook, A. (2016 16 May) Publicly available data on thousands of OKCupid users pulled over copyright claim.  Retraction Watch. Accessed 16/11/19 from http://retractionwatch.com/2016/05/16/publicly-available-data-on-thousands-of-okcupid-users-pulled-over-copyright-claim/

Nielsen (2019, 26 July) TOTAL CONSUMER REPORT 2019: Navigating the trust economy in CPG. Retrieved from https://www.nielsen.com/us/en/insights/report/2019/total-consumer-report-2019/ (accessed 27 September 2019)

NHMRC (2007 updated 2018) National Statement on Ethical Conduct in Human Research. Accessed 17/11/19 from https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2007-updated-2018

Satran, J. (2015 02 September) Ashley Madison Hack Creates Ethical Conundrum For Researchers. Huffington Post. Accessed 16/11/19 from http://www.huffingtonpost.com.au/entry/ashley-madison-hack-creates-ethical-conundrum-for-researchers_55e4ac43e4b0b7a96339dfe9?section=australia&adsSiteOverride=au

WebAlive (2019 24 June) The State of Australia’s Ecommerce in 2019 Retrieved from https://www.webalive.com.au/ecommerce-statistics-australia/ (accessed 27 September 2019).

Recommendations for further reading

Editorial (2018 12 March) Cambridge Analytica controversy must spur researchers to update data ethics. Nature. Accessed 16/11/19 from https://www.nature.com/articles/d41586-018-03856-4?utm_source=briefing-dy&utm_medium=email&utm_campaign=briefing&utm_content=20180329

Neuroskeptic (2018 14 July) The Ethics of Research on Leaked Data: Ashley Madison. Discover. Accessed 16/11/19 from http://blogs.discovermagazine.com/neuroskeptic/2018/07/14/ethics-research-leaked-ashley-madison/#.Xc97NC1L0RU

Newman, L. (2017 3 July) WikiLeaks Just Dumped a Mega-Trove of CIA Hacking Secrets. Wired Magazine. Accessed 16/11/19 from https://www.wired.com/2017/03/wikileaks-cia-hacks-dump/

Weaver, M. (2018, 25 April) Cambridge University rejected Facebook study over ‘deceptive’ privacy standards. The Guardian. Accessed 16/11/19 from https://www.theguardian.com/technology/2018/apr/24/cambridge-university-rejected-facebook-study-over-deceptive-privacy-standards

Woodfield, K (ed.) (2017) The Ethics of Online Research. Emerald Publishing. https://doi.org/10.1108/S2398-601820180000002004

Zhang, S. (2016, 20 May) Scientists are just as confused about the ethics of big-data research as you. Wired Magazine. Accessed 16/11/19 from http://www.wired.com/2016/05/scientists-just-confused-ethics-big-data-research/

Competing interests

Gary is the principal author of the Griffith University Research Ethics Manual (GUREM) and receives a proportion of license sales.

This post may be cited as:
Allen, G. (23 November 2019) The research use of online data/web 2.0 comments. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/the-research-use-of-online-data-web-2-0-comments

Clergy service to HRECs: the useful paradox within secular governance of research involving human participants

 

Aviva Kipen, Union for Progressive Judaism and Progressive Judaism Victoria.

In 2015, I earned a Doctor of Ministry Studies degree from the University of Divinity in Melbourne. The thesis, investigating how 13 Christian and Jewish clergy experienced HREC service in their pastoral care roles, arose from my own human research ethics committee and Victorian Biotechnologies Ethics Advisory Committee service and extensive interfaith work. I had been mentored into my service to the Monash University HREC by the Rev’d Dr Judy Redman, the then Victorian Uniting Church Outreach Ministries Coordinator. I found myself in the company of Anglican clergy and had succeeded Catholics – nuns and priests – Buddhist monks and also male rabbis who had served before me. Joining Judy, the serving female minister, made the gender issue less remarkable than it might otherwise have been, even in the late 1990s. The faith interchanges on succession raised my immediate curiosity that would later lead to the research question and the project on which this piece draws.

The then National Guidelines were clear: we clergy appointees were not there to push our own denominational barrows. Still, I became curious about what was really going on in the minds of others who served HRECs interchangeably from a range of faiths and traditions regardless of often-irreconcilable theologies in the ‘pastoral chairs’. My interfaith work meant I was confident that, in the event of content matters being beyond my own repertoire, I would have an extensive network from which to seek expert guidance if asked to do so. But HREC appointment provides an opportunity to serve far beyond the specifics of faith content occasionally referenced in research applications.

I became aware that the recruitment of ‘the pastor’ in other committees was not always simple. I had been spotted at a meeting about chaplaincy in women’s prisons! How had others been identified and invited to join committees? What constituted their self-understanding of the ministry service being gifted to the committees they served? Would my interviews disclose any kind of ‘evangelism by stealth’? Did faiths or denominations target access to committees assessing large amounts of politically, theologically or ethically sensitive research?

I discovered no documents showing the means by which the Catholic Church became an early adopter of the opportunity to be represented, but clearly there were Catholic clergy leading the discussion in the early years. My research showed great diversity within the voices of the Christian ministers. Even within denominations, including between currently serving Catholics, there was diversity of expression on ground-breaking issues. It became clear that the one participant who asserted his role as being to represent the Catholic position was the exceptional Catholic voice. Other Catholics applied the provisions of the current National Statement informed by their own faith understanding, but with broad appreciation for other communities’ concerns.

Many clergy enjoyed the intellectual effort of meeting preparation and assessing applications, perhaps indicating a somewhat obsessive character trait. The rigor of disciplined meetings, the collegiality with co-assessors and committee colleagues was experienced by many as a valued counterweight to congregational demands. When appointed, some experienced a bit of resistance and some took a gentle ribbing. But as they became known and trusted on their merits and performance, tenures were frequently extended. There was some inference that if individuals had theologies unable to embrace the content or methodologies required in assessing projects, it would be unlikely that they would find their way onto committees. A few references to short tenures alluded to non-renewal of clergy who were not a good fit.

The diversity of appointments reflects the neighbourhoods/communities served by HRECs and is appropriately representative of our national diversity. One participant was from a highly conservative evangelical denomination. The interview triggered deeply thoughtful reaction about personal identity relative to the HREC work. I would later find out that the reflection resulted in some major theological grappling as a consequence of the conversation. Regardless of denomination, interviewees found themselves intrigued by the attention my investigation was bringing to HREC clergy/pastoral work, which had almost invariably been out of the faiths’ hierarchical spotlights. Most remained entirely grateful for the freedom to do the HREC work without such attention.

One pastor described choosing not to participate in a committee discussion because he was aware his personal knowledge was not sufficient. It was a frank admission. The example begs the question of how applications need to enable comprehension and how lay and other non-disciplinary experts are enabled in their roles. Others found solutions to specific matters of dogma by offering wordings that would provide enough cues to the faith’s adherents to ensure they were going to be able to make informed choices without imperilling projects. What emerged was that clergy were clear about their denominational obligations and the tension between them and the needs of others in the general community.

Given that the task of assessing applications and contributing to meetings is identical for all HREC members, how do clergy understand themselves alongside their colleagues (who may be harbouring strong religious views but are not required to disclose them and which need not be presumed) as contributors to the wellbeing of the research landscape? Several clergy described pastoral care for committee colleagues and secretariat staff, by virtue of regular contact with them. This was implicit and automatic pastoral work. Care for researchers and participants whom the HREC members will never meet, is also natural pastoral work and a clear driver for clergy in their appointments.

Serving HRECs also provides clergy with a window to unfolding knowledge, a forward-looking perspective, regular use of critical faculties not always appreciated in congregational work, intelligent company, confidential settings in which they can be full participants without any oversight from their hierarchies resulting in contributions that don’t need to follow predictable, dogmatic lines, and a chance to serve beyond the faith or denomination. Australia has encoded high standards for itself in the research domain. Participants in my research were clear that high ethical research standards fit congruently into their understanding of their ministry work and several specialise in HREC work as their ministry interest. Many of these have high-level academic qualifications and years of expertise, which are offered repeatedly to the Australian community through HREC service.

Rabbi Dr Aviva Kipen has held Monash University HREC appointments and served on the Victorian Bio-Ethics Advisory Committee. She returned to serve a second term on the Australian Health Ethics Committee of NHMRC in 2019 and has begun the current triennium for the Victorian DHHS HREC. All comments reflect material in the thesis Kipen, A. (2015) Serving God and The Commonwealth of Australia: The Ministry Experiences of Clergy in Victorian Human Research Ethics Committees. Melbourne: University of Divinity.

This post may be cited as:
Kipen, A. (3 November 2019) Clergy service to HRECs: the useful paradox within secular governance of research involving human participants. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/clergy-service-to-hrecs-the-useful-paradox-within-secular-governance-of-research-involving-human-participants

Keywords
Clergy, religion, denomination, ministry, faith

Smarter proportional research ethics review

 

Rushing toward a faster review decision should not mean relaxing standards or playing chicken with stricter central control

Gary Allen, Mark Israel and Colin Thomson

Too often, there is a danger that ‘expedited ethical review’ (a term not used in the National Statement since 1999) might equate to an approach that abridges the review process to the point where it’s little more than a friendly exchange between peers or a nod to seniority. We won’t call out the well-reported cases where it is hard to fathom how they were granted ethics approval. Such cases should make us uncomfortable, because they are invitations to replace institutional self-regulation with something harsher and less sympathetic.

Don’t get us wrong, we’ve spoken often and enthusiastically about the value of well-designed proportional review arrangements. We have assisted many clients, large and small, to design and implement such arrangements and believe that they form part of a well-conceived review system.

A proportional review arrangement can deliver a review outcome much faster than consideration by a human research ethics committee, but instead of a ‘Claytons’ or mock-review, it should have the following features:

  1. While there can, and should, be a mechanism to do an automated quick self-assessment of whether a proposed project qualifies for ethics review other than by a research ethics committee, the process should:
    1. not rely on questions along the lines of “Is this a low risk research project?”
    2. draw on, reference and link to guidance material.
    3. when using trigger questions, ensure they are nuanced, with probing sub-questions.
    4. include confirmation of a quick assessment by an experienced ethics officer or chairperson.
    5. retain an applicant’s responses, both as a record of what they said about the project, and for future evaluation of whether the arrangement is correctly assessing new projects and guiding applications along the correct review pathway.
  2. The process should preferably be online, easily (re)configurable, easily auditable, with information entered by applicants and ‘triaged’ by an ethics officer.
  3. A quality online system will populate committee papers and reports, issue reminders, and pre-fill applications with known information.
  4. While many projects may be reviewed outside of the human research ethics committee, the reviews should be conducted by experienced persons, who participate in annual professional development and who can draw upon internal and external policy and resource material.

In Australia, an institution’s proportional review arrangements might include the following pathways:

  1. Prior review – Research that has already been reviewed by another HREC, an appropriately delegated review body, or an international body equivalent to an Australian research ethics review body.
  2. Scope checker – A test to confirm whether a proposed project is in fact human research.
  3. Exemption test – A test to determine whether the proposed research is of a type an institution could exempt from ethics review as per the National Statement.
  4. HREC review required test – A test to confirm whether the research project is of a type the National Statement specifies must be reviewed by an HREC.
  5. Institutional exemption test – Many institutions exempt some categories of human research from research ethics review (e.g. universities often exempt course evaluations and practical activities for a teaching-learning purpose).
  6. Negligible risk research – Subject to qualifying criteria, an institution might establish a negligible risk review pathway in which applications are considered administratively.
  7. Low-risk, minimal ethical issue research – Subject to qualifying criteria, proposed projects that are low risk and have minimal ethical sensitivity could be reviewed by the chair of the research ethics committee.
  8. Low-risk, some ethical issue research – Again subject to qualifying criteria, proposed projects that are low risk but have some ethical sensitivity could be reviewed by a small panel of the research ethics committee (including an external member of the committee).
  9. HREC review – Only human research (see 2) that has not previously been reviewed (see 1), is not exempt (see 3 and 5), and has not been classified as negligible risk (see 6) or low risk (see 7 and 8) needs to be reviewed by an HREC, along with any research the National Statement specifies must be (see 4).
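The triage order implied by the nine pathways above can be sketched as a routing function. This is a hypothetical illustration only: the parameter names, pathway labels and ordering are my own shorthand for the list above, not an AHRECS-endorsed design, and a real system would use the nuanced trigger questions and human confirmation discussed earlier.

```python
def review_pathway(prior_hrec_review: bool,
                   is_human_research: bool,
                   hrec_review_mandated: bool,
                   exempt_under_ns: bool,
                   institutionally_exempt: bool,
                   risk: str,             # "negligible", "low" or "more"
                   ethical_issues: str    # "minimal" or "some"
                   ) -> str:
    """Route a proposed project to a review pathway (pathways 1-9 above)."""
    if prior_hrec_review:
        return "accept prior review"        # pathway 1
    if not is_human_research:
        return "out of scope"               # pathway 2
    if hrec_review_mandated:
        return "full HREC review"           # pathway 4: NS mandates HREC
    if exempt_under_ns or institutionally_exempt:
        return "exempt from review"         # pathways 3 and 5
    if risk == "negligible":
        return "administrative review"      # pathway 6
    if risk == "low":
        return ("chair review" if ethical_issues == "minimal"
                else "panel review")        # pathways 7 and 8
    return "full HREC review"               # pathway 9: everything else
```

One design choice worth noting: the mandated-HREC check is placed before the exemption checks, so that research the National Statement requires an HREC to consider can never be short-circuited by an institutional exemption.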

An arrangement with the features listed above would allow for review that is proportional, timely, efficient and justifiable. Reviews that are merely expedited or fast place us all at risk. The increasing number of “how could that have been approved?” examples makes it feel as though some institutions are gambling that a desire to meet researchers’ calls for quick, if superficial, review won’t be exposed by unethical practice. Perhaps they are correct, but every new reported review misstep makes us more nervous. Realistically, establishing a nationally administered, reliable, robust and agile proportional review process requires substantial investment of time and other resources, so is unlikely to happen. What poor review processes could do, however, is invite far more detailed direction on how institutions can design, conduct and monitor processes outside of an HREC. In our experience, greater and longer-lasting benefits accrue from an institution having a high-quality approach to proportional review.

The above is a summary of the discussion we typically include in blueprint documents about establishing a robust proportional review arrangement. We have included some further notes on this topic on our https://www.ahrecs.vip and Patreon pages.

Please contact us at proportional@ahrecs.com if you would like to discuss how we might assist your institution.

This post may be cited as:
Allen, G., Israel, M. & Thomson, C. (26 August 2019) Smarter proportional research ethics review.  Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/smarter-proportional-research-ethics-review

Research Ethics Review as a Box-Ticking Exercise

 

Associate Professor Angela Romano | Faculty Research Ethics Adviser, Creative Industries Faculty, Queensland University of Technology

 

My role as a university Research Ethics Advisor involves an interesting range of activities, although sadly there is less actual advising than I would like. As Faculty Research Ethics Advisor (FREA) for the Queensland University of Technology’s Creative Industries Faculty, I review ethics applications for a wide variety of projects, ranging from negligible risk to high risk; manage a team of six Research Ethics Advisors, who review applications for projects with negligible to low risk; conduct training workshops and drop-in sessions for researchers seeking advice on research ethics; and answer queries about multitudinous ethics-related issues.

In practice, however, most of my work relates to checking ethics applications that are submitted in order to ensure that they are ready for review, then overseeing the review process and completing the associated paperwork. Since I commenced the FREA’s role almost a year ago, I have tried to increase the number and depth of conversations with colleagues and research students about broader issues of ethics, rather than simply how to complete an application. I see the culture changing, but most discussion continues to be initiated by an onus to complete ethics applications and focuses on application requirements.

A long-held critique voiced by Western scholars about the review of human research ethics is that the process is excessively focussed on box ticking and bureaucratic compliance rather than meaningful deliberation about ethical issues (Floyd & Arthur, 2012; Johnsson et al., 2014; Schrag, 2011). Sociology and law professor Gresham Sykes forecast this problem more than 50 years ago when he noted: ‘There is the danger that an institutional review committee might become a mere rubber stamp, giving the appearance of a solution, rather than the substance, for a serious problem of growing complexity which requires continuing discussion’ (Sykes, 1967, p. 11).

Many contemporary research articles about human research ethics boards and review processes decry this so-called box ticking or rubber stamp mentality, but usually these articles discuss review boards or processes without considering the mindset of researchers themselves. As a FREA at a major Australian university, I see substantive numbers of researchers who would actually welcome a more rudimentary ‘tick and flick’ process, with short, simple forms that would promptly grant them a rubber stamp of institutional endorsement.

I have witnessed this attitude in many research teams in which research assistants, project managers or research students are given primary or sole responsibility for research ethics and the writing of ethics applications, with little to no input or oversight from team supervisors or leaders. Such conduct would not be tolerated in any other area of research activity. Those same research team leaders would never request their research assistant to write an application for a major research grant, ask their project manager to draft an article for a respected journal, or instruct one of their master’s or doctoral students to submit a report for Confirmation of Candidature or other major study milestone without a senior team member providing major input and checking the text prior to submission. Ethics applications are not directly attached to any KPIs, so these researchers simply do not see the writing of an ethics application as warranting the same level of attention.

At an institutional level, there is substantial variation among research leaders and administrators in their grasp of the principles of research ethics and their fondness for a box ticking approach. In my discussions with staff from different universities, I have heard numerous research leaders argue that research ethics advisors and reviewers should ‘stick to ethics and stop providing feedback about methods’. The leader of one research centre told me in all seriousness that ethics committees should not request amendments to an ethics application if their review processes reveal that ‘the project sucks’, as long as there were no ‘ethical problems’ such as risk of harm to participants.

One academic who held one of the most senior research leadership positions in his university was surprised when I explained to him that researchers could not simply state what methodology they were using, such as focus groups, then be given a checklist of the ethical risks that applied to that particular methodology. He told me that he had not realised that ethics committees needed to know details about the exact methods being used, nor had he previously considered that the risks relating to each element of the project might change according to numerous contextual factors, such as the topic being studied, the location of research, the nature of recruitment, and the age, education levels, employment and cultures of participants.

Such comments indicate a perspective about research ethics that is fundamentally at odds with the approach that is outlined in the National Statement on Ethical Conduct in Human Research (2018), which sets standards for human research in Australia. The National Statement is based on the premise that research ethics and methods are inextricably linked. It defines ‘merit and integrity’ as essential components of ethical research (Section 1). For a research project to have merit and integrity, it must be designed ‘using methods appropriate for achieving the aims of the proposal’; be conducted by researchers with ‘experience, qualifications and competence that are appropriate for the research’; and be supported by ‘facilities and resources appropriate for the research’ (Section 1.1). Section 3.1 outlines ethical issues in seven overlapping phases that occur in most human research, these being ‘Research Scope, Aims, Themes, Questions and Methods’, ‘Recruitment’, ‘Consent’, ‘Collection, Use and Management of Data and Information’, ‘Communication of Research Findings or Results to Participants’, ‘Dissemination of Research Outputs and Outcomes’ and ‘After the Project’.

It is hard to see how any research leader who is familiar with the National Statement could define human research that ‘sucks’ or has manifest methodological problems as ‘ethical’, yet I have encountered this mindset surprisingly often. From my observation, scholars who believe that there is only a limited connection between research methods and ethics will also often express simplified notions about ethics assurance and demonstrate a fondness for ticking boxes and using cut-and-paste responses.

A number of scholars have argued that rather than rely on box ticking and a culture of enforcement through form filling, research institutions should build reflective practice about research integrity by developing resources and supporting professional development (Allen & Israel, 2018; Israel & Drenth, 2016). I agree with that perspective, but believe those researchers who favour a box ticking approach will have no impetus to change until their employers and funding institutions demonstrate that they value and reward a reflective approach to ethics in the same way that they show they value and reward successful grant applications, research publications or research student completions.

REFERENCES

Allen, G., & Israel, M. (2018). Moving Beyond Regulatory Compliance: Building Institutional Support for Ethical Reflection in Research. In R. Iphofen & M. Tolich (eds). The SAGE Handbook of Qualitative Research Ethics (pp. 276-289). London: Sage.

Floyd, A., & Arthur, L. (2012). Researching from within: External and internal ethical engagement. International Journal of Research & Method in Education, 35(2), 171-180. doi: 10.1080/1743727X.2012.670481

Israel, M., & Drenth, P. (2016). Research Integrity: Perspectives from Australia and the Netherlands. Handbook of Academic Integrity, 789-808.

Johnsson, L., Eriksson, S., Helgesson, G., & Hansson, M. G. (2014). Making researchers moral: Why trustworthiness requires more than ethics guidelines and review. Research Ethics, 10(1), 29-46. doi: 10.1177/1747016113504778

National Statement on Ethical Conduct in Human Research 2007 (Updated 2018). The National Health and Medical Research Council, the Australian Research Council and Universities Australia. Commonwealth of Australia, Canberra.

Schrag, Z. (2011). The case against ethics review in the social sciences. Research Ethics, 7, 120-131. doi: 10.1177/174701611100700402

Sykes, G.M. (1967). Feeling our way: A report on a conference on ethical issues in the social sciences. American Behavioral Scientist, 10(10), 8-11.

This post may be cited as:
Romano, A. (22 June 2019) Research Ethics Review as a Box-Ticking Exercise. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/research-ethics-review-as-a-box-ticking-exercise
