Disaster Research and its Ethical Review

 

Disaster research ethics is a growing area of interest within the research ethics field. Given the lack of a universal definition of disasters, it should not be a surprise that disaster research ethics is defined in various ways. Early approaches focused on ethical issues in conducting research in the acute phase of disasters (O’Mathúna 2010). Given the similarities of some of the ethical issues, the field came to include humanitarian crises and emergencies. A recent review combined mental health research in natural disasters, armed conflicts and the associated refugee and internally displaced persons (IDP) settings (Chiumento et al. 2017). Each of these settings raises distinct ethical issues, as well as practical challenges for those ethically reviewing disaster research. The 2016 revision of the Council for International Organizations of Medical Sciences (CIOMS) research ethics guidelines included a section on disaster research (https://cioms.ch/wp-content/uploads/2017/01/WEB-CIOMS-EthicalGuidelines.pdf). This blog will highlight a few of the practical challenges and note some efforts to respond to them.

One issue is how some disasters happen suddenly, while research ethics review takes time. The 2016 CIOMS guidelines call for innovative approaches to research ethics review, including ways to pre-assess protocols so that they can be reviewed rapidly once a relevant disaster occurs. As committees develop ways to adapt to disaster research, other review practices can be examined to identify innovative approaches to the challenges.

A key ethical issue to address with disaster research is whether a particular project should be conducted at this time with these particular participants. In the most immediate phase of an acute disaster, resources and energy should be focused on search and rescue. Researchers could hinder this, or divert scarce resources. At the same time, data should be collected as soon as possible to contribute to the evidence base for first responders. Ethics review committees should ensure justifications are provided for why a project needs to be done during the acute phase. Questions also need to be asked about whether disaster survivors have more important needs than to participate in research. For example, some have questioned whether children who survive war should be asked to participate in research when there are few resources available to help them with the mental health challenges of surviving war (Euwema et al. 2008).

With the move towards a more evidence-based approach to humanitarian work, international and non-governmental organisations (NGOs) are increasingly engaging in research and other evaluation programmes. Some of these organisations may have little experience with research or research ethics, and hence need additional support in developing and conducting projects. Much debate has occurred over what ‘counts’ as research and is therefore required to undergo formal research ethics approval. Rather than asking if a project is research or not, it is more important to identify the ethical issues in the project and ensure they are being addressed as carefully and thoroughly as possible (Chiumento et al. 2017). Needs assessments, projects that monitor or evaluate programmes, public health surveillance, and many other activities raise ethical issues whether or not they are formal academic research studies. At the same time, not every project needs to submit the same sort of detailed research ethics application as a randomised controlled trial of an experimental drug. Some sort of ethical evaluation should be conducted, and here again there is an opportunity to be innovative. Different formal and informal review mechanisms could be developed to support groups conducting different types of projects. The key concern should be that the ethical issues are being examined and addressed.

Also key here is that people in the communities from which participants will be sought are involved from the design stage of the project (O’Mathúna 2018). Too many ‘parachute projects’ have been conducted (some with ethical approval) whereby the project is designed completely by outsiders. Once everything has been decided, the team approaches the community only to discover a lack of interest in participating or that certain ethical challenges have been overlooked. Research in other cultures, particularly in the midst of armed conflicts, is especially prone to such challenges. Review committees may need to encourage exploratory discussions between researchers and participant communities, or seek evidence of how such discussions have gone.

Unexpected ethical issues often arise in disaster research given the instability and complexity of its settings (O’Mathúna & Siriwardhana 2017). An approach where ethics review bodies give approval to projects and then have little or no engagement other than an annual report is especially inadequate in disasters. Researchers may be forced to make changes in fluid settings, or may encounter unexpected issues. Submitting amendments may not be practical or fast enough, when what is needed is advice and direction from those with research ethics expertise. Thus, initiatives are being developed to provide “on call” ethics advice.

This points to how disaster research often requires more support and protection for researchers than other types of research. Researchers may enter danger zones (natural or violent) and may see or learn of horrors and atrocities. Researchers can be subjected to physical dangers or traumatised psychologically. In addition to the normal stresses of conducting research, these additional factors can lead to mistakes and even ethical corner-cutting. Therefore, review committees need to carefully investigate how the physical and mental well-being of researchers will be protected and supported.

These are some examples of how research ethics needs to go beyond approval processes to mechanisms that promote ethical decision-making and personal integrity during research. One such project in which I am involved is seeking insight from humanitarian researchers into the ethical issues experienced in the field (http://PREAportal.org). We are also conducting a systematic review of such issues and collecting case studies from researchers. The goal is to produce a practical tool to facilitate learning lessons from disaster researchers and promote ethical decision-making within teams.

The world is increasingly experiencing disasters and conflicts, and huge amounts of resources are put into responses. Some of these resources are put towards evaluating disaster responses and developing evidence to support disaster responders. We can expect disaster research to increase and to come before research ethics committees more often. It is therefore important that ethics committees prepare themselves to respond to the ethical challenges that disaster research raises.

References

Chiumento, A., Rahman, A., Frith, L., Snider, L., & Tol, W. A. (2017). Ethical standards for mental health and psychosocial support research in emergencies: Review of literature and current debates. Globalization and Health 13(8). doi 10.1186/s12992-017-0231-y

Euwema, M., de Graaff, D., de Jager, A., & Kalksma-Van Lith, B. (2008). Research with children in war-affected areas. In: Research with Children, Perspectives and Practices, 2nd edition. Eds. Christensen, P. & James, A. Abingdon, UK: Routledge; 189-204.

O’Mathúna, D. (2010). Conducting research in the aftermath of disasters: Ethical considerations. Journal of Evidence-Based Medicine 3(2):65-75.

O’Mathúna, D. (2018). The dual imperative in disaster research ethics. In: SAGE Handbook of Qualitative Research Ethics. Eds. Iphofen, R. & Tolich M. London: SAGE; 441-454.

O’Mathúna, D., & Siriwardhana, C. (2017). Research ethics and evidence for humanitarian health. Lancet 390(10109):2228-9.

Declaration of interests

Dónal O’Mathúna has been involved in research ethics for over twenty years. He was chair of the Research Ethics Committee at Dublin City University (DCU) for six years. In addition to his joint position at DCU and The Ohio State University, he is Visiting Professor of Ethics in the European Master in Disaster Medicine, Università del Piemonte Orientale, Italy. His research interests focus on ethical issues in disasters, in particular disaster research ethics. He was Chair of the EU-funded COST Action (2012-2016) on Disaster Bioethics (http://DisasterBioethics.eu) and is the Principal Investigator on the R2HC-funded research project, Post-Research Ethics Analysis (http://PREAportal.org).

Contributor
Dónal O’Mathúna, PhD
Associate Professor, School of Nursing & Human Sciences, Dublin City University, Ireland
Associate Professor, College of Nursing, The Ohio State University, Columbus, Ohio, USA
Dónal’s DCU profile | donal.omathuna@dcu.ie
Twitter: @domathuna
http://BioethicsIreland.ie

This post may be cited as:
O’Mathúna D. (2018, 26 February 2018) ‘Disaster Research and its Ethical Review’. Research Ethics Monthly. Retrieved from https://ahrecs.com/human-research-ethics/disaster-research-ethical-review

‘Don’t mention the c word: Covert research and the stifling ethics regime in the social sciences’

 

Covert research is associated with deliberate deception in social research and equated with harm and risk to the researcher, the researched, the institution and the field. It is a controversial and emotive tradition that runs counter to and violates the received orthodoxy and professional mantra of informed consent enshrined in various ethical committees, institutional review boards and professional codes of practice. It is a methodological pariah and last-resort position that is frowned upon, submerged, marginalized, stigmatized and effectively demonized (Calvey, 2017) in the social sciences. Indeed, to some in that community, to even contemplate a covert move is a belligerent step too far, one that displays a cavalier attitude and a lack of ethics. This view of deliberate misrepresentation (Erikson, 1967) accurately represents the received tone of much of the debate around covert research for a lengthy period of time. For many, despite the growing critical literature on informed consent as ideologically idealistic and disconnected from field realities, this derogatory and simplistic characterization of covert research has not altered.

I call for a fairer reading of the covert tradition and, hopefully in turn, a greater appreciation and recognition of the disruptive and invigorating role that covert research has brought to the social sciences. By using covert research, one enters into an ethical labyrinth and moral minefield, saturated in ethical dilemmas and puzzles, but it does not automatically follow that covert researchers have no ethical conscience. Often what are displayed are complex ethical self-regulations and guilt syndromes. Ethics then becomes a situated matter of application as well as a textbook understanding. What is partly called for is a broader and more nuanced way of understanding research ethics in practice.

From my own covert ethnography of bouncers in the night-time economy of Manchester, I experienced a series of ethical moments, around witnessing violence and gaining deviant knowledge, which I managed in the field. Part of my sustained passing in the setting was accepting and not altering their moral code and sensibility about events, even though I might have a different personal interpretation. After my lived experience of six months as a covert nomadic bouncer doing different doors in the city, I felt that I had a richer appreciation of their subcultural values and cultural realities. Part of my investigation was debunking the moral panics and stigma around bouncing by being one of them, from the inside.

The classic covert exemplars of Cressey’s study of sex work, Festinger et al.’s study of religious cults, Goffman’s study of asylums, Milgram’s torture and pain experiments, Humphreys’ study of public sexual deviance and Rosenhan’s pseudo-patient study of psychiatric diagnoses are found in most ethics textbooks. They are clearly seminal and instructive works, with significant ongoing scholarship about them, and they tend to conventionally frame the field of covert research. However, these classics, or what I call the usual suspects, can also limit and narrow our understanding of the covert diaspora, with many other covert gems staying submerged. Also, some might erroneously draw the conclusion that covert research is an older tradition that is no longer conducted. Indeed, the contemporary covert diaspora, on further investigation, is very diverse in the social sciences and spans several topics and fields including, but not limited to, crime, education, health, leisure, politics, religion and work.

On a more granular examination, these covert studies are rarely purist and employ more mixed strategies involving gate-keeping and key informants. Some studies, moreover, involve more unwitting types of concealment, rather than being designed deceptively. The diaspora is thus more akin to a continuum than a fixed state of deception. Because the field of covert research is not incremental, integrated, or cross-fertilized, some of the studies have a stand-alone status in their respective fields. This is also compounded by the dearth of dedicated literature on covert research.

There has been a revival of sorts in covert research, although it is ultimately still likely to remain a relatively niche position. This revival comes, in part, from the significant rise in popularity of autoethnography and cyber ethnography, particularly forms of online lurking. A significant proportion of such studies have covert dimensions, both witting and unwitting. A diverse range of sensitive and controversial topics has been explored by both methods.

The classic ethical question of whether the ends justify the means often trades on an ideal-type view of informed consent and an inflated and exaggerated view of the potential harm, risk and danger of covert research.

The hyper-alarmist response to covert research is partly based on a caricatured picture of covert research as heroic. Related to this, the image of the covert researcher is also tied up with versions of undercover research from popular culture in the sense of filmic and television sources, which can give an overly romanticized and glamorized view of the field. Covert research has also been a long accepted and normalized investigatory strategy for a range of practitioners and professionals, particularly in the police, the military and investigative journalism. Some of these covert investigations have had significant impact and influenced reform and change.

Covert research thus becomes a convenient scapegoat for those ethicists who quickly and strictly oppose it in any format, even if it could be used in a complementary way as part of a mixed or multiple methods approach. Covert work can be justified by providing a different type of insider insight, particularly in secretive settings and with illicit topics.

That is not to say that covert research should be zealously embraced as a panacea. Nor is it the case that we no longer need robust ethical review processes, or that ethics boards and committees are redundant. Such processes and organizations are useful and necessary, but they need to refine, connect and adapt their policy sensibilities and mentalities to the messy nature of fieldwork realities.

In the current increasingly corporate climate of research, there has been what Hammersley (2010) cogently describes as creeping ethical regulation and the strangling of research, with covert research being particularly stifled. Miller (1995) described covert participant observation as the least used method and called for its reconsideration. Roulet et al. (2017), in their more recent reconsideration of the value of covert research, argue that it has had a profound role in shaping the social sciences. Covert research can be a creative way, and certainly not the only way, to positively disrupt how we think about applied ethics. It offers an alternative way of doing situated ethics rather than being utterly devoid of ethics. Covert research is not to everyone’s taste, and will probably continue to offend some, but it should, nevertheless, be considered. Covert research will no doubt remain an object of both fear and fascination.

References

Calvey, D. (2017) Covert Research: The Art, Ethics and Politics of Undercover Fieldwork, London: Sage.

Erikson, K. T. (1967) ‘A comment on disguised observation in sociology’, Social Problems, 14 (4): 366–373.

Hammersley, M. (2010) ‘Creeping Ethical Regulation and the Strangling of Research’, Sociological Research Online, 15 (4): 16.

Miller, M. (1995) ‘Covert Participant Observation: Reconsidering the least used method’, Journal of Contemporary Criminal Justice, 11 (2): 97-105.

Roulet, T. J., Gill, M. J., Stenger, S. and Gill, D. J. (2017) ‘Reconsidering the Value of Covert Research: The Role of Ambiguous Consent in Participant Observation’, Organizational Research Methods, 20 (3): 487-517.

Contributor
Dr David Calvey
Senior Lecturer | Manchester Metropolitan University | Staff profile | d.calvey@mmu.ac.uk

This post may be cited as:
Calvey D. (2018, 6 February 2018) ‘Don’t mention the c word: Covert research and the stifling ethics regime in the social sciences’. Research Ethics Monthly. Retrieved from https://ahrecs.com/human-research-ethics/dont-mention-c-word-covert-research-stifling-ethics-regime-social-sciences

What’s at risk? Who’s responsible? Moving beyond the physical, the immediate, the proximate, and the individual

 

Building the Conversation

This month’s addition to the Building the Conversation series reflects upon how we approach risks beyond those that are physical, harms to people other than a project’s participants, and harms that are not immediate.

To some extent, when researchers reflect upon those harms associated with a project, they may well limit their assessment of risk to the here and now and to identifiable individuals. In addition, for projects in the medical sciences, those risks were long understood as predominantly physical in the form of injury, infection or disability and related to direct participants (e.g. persons who received an experimental pharmacological agent). This limited vision is not particularly surprising. One of the perverse consequences of requiring researchers to reflect on whether the potential benefits of research justify risk to participants is that some researchers are dissuaded from looking too carefully for risks and therefore avoid developing strategies for minimising these risks and mitigating possible harms. Even more perversely, this reluctance can trigger in human research ethics committees an unrealistic level of risk aversion.

It is vital that we remember that it is primarily the responsibility of researchers to identify, gauge and weigh risks. Research ethics review bodies have the role of providing feedback to researchers to facilitate projects, not to catch out researchers and chastise them for neglecting a risk. This is especially true if we do not have resource material to assist researchers with regard to this wider focus.

We need to improve our understanding of the complexity of risks, extending our vision to look beyond the physical, the immediate, the proximate, and the individual risks. At the same time, we need to review our understanding of on whom the responsibility for the identification, mitigation and management of all of these risks should fall.

In recent decades, national human research ethics frameworks, such as the Australian National Statement on Ethical Conduct in Human Research (National Statement) (NHMRC 2007a) have augmented their original interest in physical harm with a much broader set of psychological, legal, economic and social harms. Documents such as the Australian Code for the Responsible Conduct of Research (NHMRC 2007b) cast this net wider still to include societal and environmental risks. However, the likelihood of incidence, the significance of the harm and the timing of such harms can be harder to predict, quantify and mitigate.

We are fuelling the potential for an adversarial climate (Israel et al., 2016) if we fail to provide researchers and our research ethics reviewers with guidance on how to approach such matters.

Human research ethics committees, guided by the frameworks in which they function, focus on immediate risks directly to the participants in a project. For example, the National Statement requires committees to be satisfied that “the likely benefit of the research must justify any risks or discomfort to participants” (NHMRC, 2007, p. 10). Committees can feel less equipped to tackle risks that can affect participants after the active phase of a project, such as harms to the reputation and standing of a group that can come from the research output that is distributed long after data collection and perhaps years after the research ethics review.

Harm can also impact upon populations and social/professional/community groups much wider than the actual participants. For example, research into the academic performance of children from schools in a low socio-economic area, if reported insensitively by researchers or, indeed, by the media, can further stigmatise the children and harm the reputation of the schools and teachers. Similarly, work on the informal income of members of marginalised communities might be used subsequently by government to target tax avoidance by the already vulnerable. Lastly, research on the attitudes of residents in coastal communities to climate change and rising sea levels can detrimentally affect the value of surrounding land. Indeed, some review processes require researchers to consider the possibility of adverse findings (both medical and non-medical in nature). Although the National Statement (NHMRC, 2007, p. 13) recognises risks of this kind, it leaves unclear whose responsibility they are.

Focussing on the rights of individuals from a Western liberal democratic perspective is unlikely to be helpful in other contexts, such as an Aboriginal and Torres Strait Islander community, in a cultural context where a Confucian approach would be more appropriate (Katyal, 2011), or even in some organisational settings where accountability is partly achieved through openness to external scrutiny in the form of research and evaluation. As a result, there have also been prompts to consider risks to identifiable third parties, groups, institutions and communities (Weijer et al., 1999). Values and Ethics and the Guidelines for Ethical Research in Australian Indigenous Studies (GERAIS) recognise that such matters might be considered by some potential participant pools on a collective basis, perhaps informed by knowledge of a history of research abuse and exploitation of their communities. This attention to collective interests is echoed in other work on research ethics and Indigenous peoples around the world (Israel, 2015).

This is perhaps one of the reasons why some minorities have produced their own research ethics guidance documents (for examples, see Hudson et al. (2010), Nordling (2017) and Islamic Council of Victoria (2017)). The value of this kind of guidance is, in part, that it clarifies that the important responsibility to foresee, mitigate and manage these risks lies with researchers.

Another example of deleterious impacts from research that might not be immediately obvious to researchers, research ethics reviewers or research office staff arises in the category of ‘dual use’ research (Miller and Selgelid, 2007). This is where a technique, technology or an apparently non-military discovery can be used for military or terrorist purposes, sometimes with devastating effect. Initially the concern of biomedical scientists, the issue has since also troubled anthropologists, geographers, sociologists, political scientists and international relations experts in the face of overt or covert funding by military or intelligence agencies (Israel, 2015). One of the growing challenges is that a significant proportion of such work (e.g. quantum computing, computer security/intrusion/hacking, smart materials, computer vision and energy storage) will not typically require research ethics or any other form of independent review. The existing model of human research ethics review is initially attractive as a response, but some reflection will quickly show that ethics committees are unlikely to possess the expertise or information to identify the dual use, and the work may be occurring in disciplines that have not built their capacity to think through the ethics of working with human participants.

Australia has strengthened its export control framework with regard to security classification, Defence Department permits/approvals and other requirements (e.g. data security). Many Australian universities have established dedicated teams and processes for this particular area, although it remains an area of community concern (see Hamilton and Joske, 2017). Such controls involve balancing academic freedom, a commitment to open science and the value of scientific discovery against (inter)national security, trade and diplomatic interests. Such a balancing exercise is plainly beyond the capacity required for human research ethics review, so the responsibility needs to rest with another mechanism.

The implications of all of this are not trivial. This all requires a change in thinking for researchers, institutions, funding bodies, learned academies and regulators. Our attention to the potential harms from a project needs to encompass research outputs, impacts upon communities, and persons who were not direct participants in the project, as well as national interests. At the same time, the consideration of a project vis-à-vis the ethical principle of research merit needs to include broader societal benefits and contributions to knowledge that might also involve a much wider group and a longer timeframe than the ones to which we are accustomed. However, in order to reach a more sophisticated analysis of the balance between potential harms and benefits, we need to more clearly allocate responsibility for such risks and devise mechanisms that reassure the community that these responsibilities have been fulfilled.

In our view, merely widening the scope of the responsibilities of human research ethics committees to address all these risks could not only exacerbate the propensity for risk aversion, but could also distort their important focus on the welfare of research participants. The current review system needs to find ways of working constructively with other processes that build the capacity of researchers and their institutions to work with these broader risks and benefits.

Institutions must provide resource materials for researchers and research ethics reviewers with the primary objective of supporting reflective practice and building expertise in risk assessment and mitigation. Researchers must recognise these matters as their primary responsibility, and research ethics reviewers must focus upon facilitation rather than enforcing compliance. We have written about how institutions can implement such an approach (Allen and Israel, in press).

In short, we cannot afford to ignore these challenges. Instead, we should take innovation seriously and seek constructive solutions.

References

Allen, G. and Israel, M. (in press, 2018) Moving beyond Regulatory Compliance: Building Institutional Support for Ethical Reflection in Research. In Iphofen, R. and Tolich, M. (eds) The SAGE Handbook of Qualitative Research Ethics. London: Sage.

Hamilton, C. and Joske, A. (2017) Australian taxes may help finance Chinese military capability. The Australian. http://www.theaustralian.com.au/news/inquirer/australian-taxes-may-help-finance-chinese-military-capability/news-story/6aa9780c6a907b24993d006ef25f9654 (accessed 31 December 2017).

Hudson, M., Milne, M., Reynolds, P., Russell, K. and Smith B. (2010) Te Ara Tika. Guidelines for Māori Research Ethics: A Framework for Researchers and Ethics Committee Members. http://www.hrc.govt.nz/sites/default/files/Te%20Ara%20Tika%20Guidelines%20for%20Maori%20Research%20Ethics.pdf (accessed 29 December 2017).

Islamic Council of Victoria (2017) ICV Guidelines for Muslim Community-University Research Partnerships. http://www.icv.org.au/new/wp-content/uploads/2017/09/ICV-Community-University-Partnership-Guidelines-Sept-2017.pdf (accessed 29 December 2017).

Israel, M. (2015) Research Ethics and Integrity for Social Scientists: Beyond Regulatory Compliance. London: Sage.

Israel, M., Allen, G. and Thomson, C. (2016) Australian Research Ethics Governance: Plotting the Demise of the Adversarial Culture. In van den Hoonaard, W. and Hamilton, A. (eds) The Ethics Rupture: Exploring Alternatives to Formal Research-Ethics Review. Toronto: University of Toronto Press. pp 285-316. http://www.utppublishing.com/The-Ethics-Rupture-Exploring-Alternatives-to-Formal-Research-Ethics-Review.html

Katyal, K.R. (2011) Gate-keeping and the ambiguities in the nature of ‘informed consent’ in Confucian societies. International Journal of Research & Method in Education 34(2): 147-159.

Miller, S. and Selgelid, M. (2007) Ethical and philosophical consideration of the dual use dilemma in the biological sciences. Science and Engineering Ethics 13: 523-580.

NHMRC (2007a) National Statement on Ethical Conduct in Human Research. http://www.nhmrc.gov.au/guidelines-publications/e72 (accessed 29 December 2017).

NHMRC (2007b) Australian Code for the Responsible Conduct of Research. http://www.nhmrc.gov.au/guidelines-publications/r39 (accessed 29 December 2017).

Nordling, L. (2017) San people of Africa draft code of ethics for researchers. Science, March 17. http://www.sciencemag.org/news/2017/03/san-people-africa-draft-code-ethics-researchers (accessed 29 December 2017).

Weijer, C., Goldsand, G. and Emanuel, E.J. (1999) Protecting communities in research: Current guidelines and limits of extrapolation. Nature Genetics 23: 275-280.

Contributors
Dr Gary Allen
Senior consultant | AHRECS | Gary’s AHRECS bio | gary.allen@ahrecs.com

Prof. Mark Israel
Senior consultant | AHRECS | Mark’s AHRECS bio | mark.israel@ahrecs.com

This post may be cited as:
Allen G. and Israel M. (2018, 1 February 2018) What’s at risk? Who’s responsible? Moving beyond the physical, the immediate, the proximate, and the individual. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/whats-risk-whos-responsible-moving-beyond-physical-immediate-proximate-individual

Ethical research with young children: Whose research, whose agenda?

 

The last decade has seen increased global focus on research with young children within and across a range of disciplines (Farrell, 2016). The period from birth to age eight years, known colloquially as the ‘early years’ or ‘early childhood’, has been conceptualized as pivotal to young children’s current wellbeing and future life chances and has, in turn, become an increasing focus of research within the disciplines of education, health, human services, developmental science, law, economics and neuroscience. New theoretical perspectives, expanded methodological approaches and fresh lines of inquiry are being brought to bear on the ethical design, conduct and dissemination of early childhood research (Kagan, Tisdall & Farrell, 2016). The global focus on ethical research with young children has been prefaced, to some extent, by global recognition of the rights of children to participation and protection in everyday activities (Tisdall, 2012).

Despite the focus on children and their rights, child research is largely an adult enterprise serving adult-driven agendas, albeit driven by genuine adult concern for children’s rights to participation and protection. On the one hand, it is driven by the imperative to protect children, quite rightly, from risk of harm, often drawing upon normative views of child development and young children’s pre-competence or developmental incapacity to consent to, participate in or withdraw from research. On the other hand, there is a growing quest to listen to and consult with children as competent and active research participants, while still enacting protective ethical obligations towards them (Alderson & Morrow, 2011). While much child research claims to be with children rather than on, for or about children, the enterprise is typically driven by the agendas of research productivity, performativity and the empirical leverage of research within policy and provision for young children, by and for adults.

The upshot is that some children, families and communities increasingly experience the over-burden of research, their demographic characteristics making them prime sites for research and their participation essential for attaining research targets and outputs. The enterprise of ethical research with children calls for ethical consideration of the adult performance-driven agendas that drive much child research. It calls for consideration of the agency and active participation of children, families and communities in ways that respect their decision to engage in the research, and for greater affordances of co-constructed research for children and adults than is currently the case.

References

Alderson, P., & Morrow, V. (2011). The ethics of research with children and young people: A practical handbook (2nd ed.). London: Sage.

Farrell, A. (2016). Ethics in early childhood research. In A. Farrell, S.L. Kagan & E.K.M. Tisdall (Eds.), Sage handbook of early childhood research (pp. 163-184). London: Sage.

Kagan, S.L., Tisdall, E.K.M., & Farrell, A. (2016). Future directions in early childhood research: Addressing next-step imperatives, In A. Farrell, S.L. Kagan & E.K.M. Tisdall (Eds.), Sage handbook of early childhood research (pp. 517-534). London: Sage.

Tisdall, E.K.M. (2012). Taking forward child and young people’s participation. In M. Hill, G. Head, A. Lockyer, B. Reid & R. Taylor (Eds.), Children’s services: Working together (pp. 151-162). Harlow: Pearson.

Contributor
Professor Ann Farrell
Head, School of Early Childhood and Inclusive Education
Faculty of Education Queensland University of Technology
QUT staff page | a.farrell@qut.edu.au

 

This post may be cited as:
Farrell A. (2017, 23 October 2017) Ethical research with young children: Whose research, whose agenda? Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/ethical-research-young-children-whose-research-whose-agenda
