ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

The ethical petri-dish: recommendations for the design of university science curricula


Dr Jo-Anne Kelder, Senior Lecturer, Curriculum Innovation and Development, University of Tasmania
Professor Sue Jones, Honorary Researcher, School of Natural Sciences, University of Tasmania
Professor Liz Johnson, DVC Education, Deakin University
Associate Professor Tina Acuna, ADL&T, College of Sciences and Engineering, University of Tasmania

Ethics (thinking and practice) is intrinsic to the nature of science. Ethical practices within science-related professions are mandated by policies, frameworks, standards and cultural norms. A scientist should also consider the broader implications for society when applying scientific knowledge.

Does our laboratory start working to develop a vaccine for Covid-19 or continue working on that potential cure for childhood leukemia? What will happen to the endangered Giant Freshwater Lobster if we remodel the hydrology of that major river so farmers in North-West Tasmania can grow more potatoes? Should we approve the use of GM technology to develop Vitamin A-rich rice?

Science graduates must be equipped to contribute to such complex debates, and empowered to make scientific decisions within a sound ethical framework (Johnson, 2010).

The Science Standards Statement (Jones, Yates and Kelder, 2011), the national benchmark for bachelor-level science degrees in Australia, specifies that graduates will demonstrate a coherent understanding of science, and be able to explain the role and relevance of science in society (TLO 1: Jones et al., 2011: p.12). Furthermore, they will be equipped to understand and work within ethical frameworks, and “have some understanding of their social and cultural responsibilities as they investigate the natural world.” (TLO 5.3: Jones et al., 2011: p.15).

The argument that there is ‘no space’ for ethics in the science curriculum is no longer valid (Booth and Garrett, 2004; McGowan 2013). However, there remain significant barriers to the teaching and assessment of ethical knowledge, skills and capabilities in undergraduate science curricula. We summarise these as: debate and dissent around what should be taught, who should teach ethical thinking, and how it should be taught and assessed.

It’s not just about plagiarism

Ethics in science falls into two broad categories:

  1. Ethics in the practice of science
  2. Ethics in the application of science.

Ethics in the practice of science relates to integrity in research management (including data collection, analysis and presentation), plagiarism and authorship. Ethics curricula must ensure students’ familiarity with relevant legislative frameworks such as the National Statement on Ethical Conduct in Human Research. In professionally oriented/applied disciplines such as Agriculture and Environmental Science, students must also be prepared to work ethically in a business environment and to understand their ethical and legal obligations as workplace leaders (Botwright-Acuna and Able, 2016).

Ethics in the application of science requires a broader and deeper perspective: appreciating and accepting responsibility for the impacts of scientific work upon society (Evers, 2001; Schultz, 2014). Graduates need to be aware that the ethical frameworks within which science is practised are not static, but adapt as social norms change. They must understand how their personal ethical perspectives interact with and may clash with, formal mandated frameworks, and be prepared to engage in debate around the ethical implications of applying discovery science in the real world. They must be prepared to defend ethical decisions and to appreciate that others may hold conflicting views. As Evers puts it: “the study of ethics should therefore be an integral part of the education and training of all scientists with the purpose of increasing future scientists’ ethical competence” (2001: p. 97).

Recommendation – that students are encouraged to debate and discuss ethical questions, and to appreciate that people will hold different points of view on them.

Teachers may need some training

Practising scientists who themselves operate within relevant ethical frameworks are best placed to guide students about ethics in the practice of science (Kabasenche, 2014). However, while some scientists have taken up the teaching challenge of including ethics explicitly in their curriculum, this is not yet mainstream (Booth and Garrett, 2004). Most science academics are not themselves formally trained in ethical thinking (Johansen and Harris, 2000) and may express legitimate concern that they are not best placed to design and teach curricula on ethics (van Leeuwen, Lamberts, Newitt and Errington, 2012).

Recommendation – that science faculties provide professional development and community of practice opportunities to teaching staff to ensure that they have the confidence, skills and knowledge to teach ethical practice within a science curriculum.

There is a strong argument for a collaborative, interdisciplinary approach, with both science academics and philosophically trained ethicists involved in teaching ‘science ethics’ (Kabasenche, 2014). The scientist contributes expertise in the relevant science and their understanding of the ethical practice of science, while the philosopher brings critical thinking skills and decision-making tools that support ethical understandings and analysis of relative consequences. For example, in The Responsible Scientist, Forge (2008) argues that responsibility in scientific work has implications beyond intended outcomes, and includes taking into account foreseen and foreseeable outcomes.

Recommendation – that science faculties pursue opportunities for collaborative, interdisciplinary design and delivery of ‘science ethics’ across the undergraduate science curriculum.

It’s not just for first-year students

Teaching ethics to science students must do more than ensure that first-year students are familiar with university policies on plagiarism and academic integrity (Botwright-Acuna et al., 2016). Ethics must be an explicitly assessed component of the curriculum at each level of study, and overtly aligned to the core science curriculum. Assessment tasks must distinguish between students’ knowledge of relevant ethical frameworks, and their ability to apply those frameworks in practice.

For example, an assessment task for third-level Zoology students models an Animal Ethics application: students construct a scientific research question within an ethical framework, and justify that research in language accessible to lay people (Jones and Edwards, 2013). In the undergraduate course ‘Communities of Practice in Biochemistry and Molecular Biology’, students develop research skills alongside their capacity for ethical analysis of the impacts of science on society (Keiler et al., 2017), while in a subject on ‘Energy and Sustainability’, students develop a national energy plan that addresses equity issues as well as technical and political feasibility (McGowan, 2013). Schultz (2014) suggests several strategies for assessing Chemistry students’ knowledge of ethical thinking, such as writing a Code of Conduct for practising chemists.

Recommendation – that ethics is a compulsory and explicitly assessed component of a bachelor-level science curriculum, and that students are exposed to ethical thinking in the context of science from their first year onwards.

It’s everybody’s business

Good practice is a teaching team approach to curriculum design, delivery and scholarly evaluation (Kelder et al., 2017; TEQSA, 2018). A whole-of-curriculum approach will involve team members meeting regularly to discuss and coordinate the ethical implications of the scientific knowledge and practice being taught; to ensure that ethical thinking is embedded at each curriculum level; and to scaffold and develop learning from introductory to assured level. At the broader level, the science curriculum must provide a framework within which students are supported to develop personal and professional responsibility for their learning and later professional life (Loughlin, 2013).

Recommendation – that the degree curriculum is discussed and agreed upon by the whole teaching team prior to curriculum design (and on an ongoing basis, as it matures) to ensure that students’ learning is built upon and assessed coherently and developmentally.

Recommendation – that scholarship promoting and recommending content and delivery methods, and, especially, effective assessment strategies for the teaching of ethics to science undergraduates, is encouraged and rewarded.


Booth, J. M. and Garrett, J. M. (2004). Instructors’ practices in and attitudes toward teaching ethics in the genetics classroom. Genetics, 168(3), 1111-1117.

Botwright Acuña, T.L. and Able, A.J. (Eds.). (2016). Good Practice Guide: Threshold Learning Outcomes for Agriculture. Sydney, Australia: Office for Learning and Teaching.

Evers, K. (2001). Standards for ethics and responsibility in science: An analysis and evaluation of their content, background and function. International Council for Science, Paris.

Forge, J. (2008). The Responsible Scientist: A Philosophical Inquiry. University of Pittsburgh Press.

Johnson, J. (2010). Teaching Ethics to Science Students: Challenges and a Strategy. In: Education and Ethics in the Life Sciences, Rappert, B. (ed.) ANU E Press, 197–213.

Jones, S. M. and A. Edwards (2013). Placing ethics within the formal science curriculum: a case study. In: Frielick, S. et al. (Eds.) Research and Development in Higher Education: the place of learning and teaching, 36 (pp 243-252). Auckland, New Zealand, 1-4 July 2013.

Jones, S. M., Yates, B. F. and Kelder, J.-A. (2011). Learning and Teaching Academic Standards Project: Science Learning and Teaching Academic Standards Statement. Sydney: Australian Learning and Teaching Council.

Kabasenche W. P. (2014). The Ethics of Teaching Science and Ethics: A Collaborative Proposal. Journal of Microbiology & Biology Education, 15(2), 135–138.

Kelder, J.-A., Carr, A. R. and Walls, J. (2017). Evidence-based Transformation of Curriculum: a Research and Evaluation Framework. Paper presented at the 40th Annual Conference of the Higher Education Research and Development Society of Australasia (HERDSA), Sydney.

Keiler, K. C., Jackson, K. L., Jaworski, L., Lopatto, D. and Ades, S. E. (2017). Teaching broader impacts of science with undergraduate research. PLoS biology, 15(3), e2001318.

Loughlin, W. (2013). Good Practice Guide (Science) Threshold Learning Outcome 5: Personal and professional responsibility.

McGowan, A. H. (2013). Teaching Science and Ethics to Undergraduates: A Multidisciplinary Approach. Science and Engineering Ethics, 19, 535–543.

National Statement on Ethical Conduct in Human Research (2007, updated 2018). Canberra: National Health and Medical Research Council.

TEQSA (12 December 2018). “Guidance Note – Scholarship” Version 2.5.

van Leeuwen, B., Lamberts, R., Newitt, P. and Errington, S. (2012, October). Ethics, issues and consequences: conceptual challenges in science education. In Proceedings of The Australian Conference on Science and Mathematics Education.

This post may be cited as:

Kelder, J., Jones, S., Johnson, E. & Botwright-Acuna, T. (18 June 2020) The ethical petri-dish: recommendations for the design of university science curricula. Research Ethics Monthly. Retrieved from:



To date, we are delighted to report the extended team is virus-free. Our best wishes go out to any member of the Human Research Ethics/Research Integrity community who is currently battling the awful pandemic. To the first responders, clinical staff on the frontlines and researchers working on a vaccine, thank you for your service.

Like the majority of small businesses in Australasia, AHRECS has taken a bit of a hit financially. Please consider becoming a subscriber, whether institutional ($350/yr) or individual (from USD1/month). Your support during this difficult time would be hugely appreciated.

Send any enquiries to

Lost time may never be found again but is it time to talk about the duration of ethics approvals?


“To everything there is a season, and a time to every purpose”: a time to report on ethical conduct, a time to renew an approval, or a time to face misconduct proceedings.

Dr Gary Allen

What is the length of ethics approvals that your HREC grants?  In this article, I will discuss this question and some of the reasons for choosing approval periods.

A related question is, under what circumstances should an ethics approval be withdrawn?  Can/should research ethics review bodies withdraw approval because of extended/repeated failure by a researcher to provide an ethical conduct report?

Australia is unlike the US, where the conventional interpretation of the Common Rule is that ethics approvals last one year. Accordingly, US researchers must provide annual ethical conduct reports to maintain ethics approval and avoid needing to make a fresh application.

In Australia, the duration of approval is not specified by the National Statement. The only clear Australia-wide external reporting requirement is paragraph 5.5.5 of the National Statement, which provides that researchers should report to ethics review bodies at least annually. As a result, approval duration is likely to be dictated by institutional policy, and some institutions have adopted maximum approval periods. A short approval period (e.g. 12 months) with a renewal requirement is one lever committees can use to compel researchers to provide evidence that the conditions of approval are being met.

There are, I suggest, four needs that are served by the choice of duration of an ethics approval, namely:

  1. Compelling a report from a researcher;
  2. Allowing a review body to confirm that a project is being conducted as per the approval;
  3. Allowing a review body to confirm that the welfare and interests of participants are still being adequately provided for; and
  4. Providing an opportunity to reflect on any changes to national standards, institutional policies or pertinent cases that warrant a rethink of approvals.

Researchers typically seek a long-duration ethics approval because:

  • The design calls for repeated data collection across an extended period, such as a longitudinal ethnographic study;
  • The work is a component of a program of work focussed on a cure for a chronic condition; or
  • The work intends to compile an archive of biospecimens, data, document samples, audio-visual material or other items of historical/cultural significance.

The maximum duration of a research ethics approval would also appear to be connected to how long an HREC has operated and the amount of work the committee is undertaking. In Australia, institutional decisions on the matter can also be associated with changes in national ethics review requirements that occurred in 1999, 2007 and 2018 (and beyond).

Like other aspects of human research ethics practice in Australia, the approach to duration has reflected practice in the United States. While Australia does not have the same kind of regulatory framework as the US, where failing to maintain ethics approval can have consequences for institutions, single-year approvals are probably used as a way to promote adherence to the institution’s ethical conduct reporting requirement.

While understandable, such short-term approvals can punish conscientious researchers because of an institutional response to recalcitrant researchers.

However, early in a research ethics committee’s operation, it is not uncommon for it to grant approvals with durations of between one and three years.

This can reflect the committee’s confidence:

  1. in its role and decisions;
  2. in its trust that researchers understand their responsibilities and will work within the scope of the ethics approval; and
  3. that projects will progress as per applications, or that researchers will contact the institution’s research office if the unexpected occurs.

A low committee workload can serve as an incentive for short-duration approvals, with longer-duration/longitudinal work chunked out into two or more applications. Increasing the number of approvals in this way may not allow the committee and research team to develop expertise before the committee commits to an extended period of research. Alternatively, a committee might be tempted to artificially inflate its number of approvals to attract resources or credibility.

Conversely, when a research ethics committee is very busy, there may be more incentive to grant longer approvals to minimise the number of times the committee needs to review renewals of long duration projects.

Given the National Statement (2007, updated 2018) is currently silent on the duration of ethics approvals, it might appear that the Australian national framework should not impact on approvals. However, there is both a predictable impact and a real reason to rethink our current approach to the duration of approvals.

At this stage, an update to Section 4 of the National Statement might be released in the next six months, and an update to Section 5 will move out of the planning stage shortly.

Some institutions and committees tie the duration of ethics approvals, and forced renewal, to the timeframe during which the national arrangements (and perhaps, inter alia, institutional policy) might change. In Australia this might equate to around a five-year approval duration.

The changed approach to updates to the National Statement means that such a cycle might not be especially helpful.

Suggested change to duration and monitoring procedures

We recommend institutions and HRECs do the following:

  1. Adopt a policy setting that the conduct of human research without prior ethics approval may be considered a breach of the Australian Code for the Responsible Conduct of Research (2018) and of the institution’s research integrity arrangements. This would be consistent with the Investigation good practice guide.
  2. Adopt a policy setting that any proposed change to a project must be submitted for prior review, otherwise the conduct of a project in a manner not in adherence to its ethics approval may be considered in breach of the Australian Code for the Responsible Conduct of Research (2018) and the institution’s research integrity arrangements.
  3. Adopt the practice of reminding researchers of their responsibility –
    a. to consider and safeguard the welfare of research participants;
    b. to remain reflective on whether the risks of a project are justified by its benefits;
    c. to remain reflective on the degree to which the project addresses the core ethical principles of the National Statement;
    d. to notify the HREC of any changes with regard to (a)–(c); and
    e. to notify the HREC if any participant raises a concern about the ethical design or conduct of a project, including if any participants withdraw consent because of a concern about ethical matters.
  4. Adopt a policy that a researcher who fails to meet the responsibilities described at 3 may be considered in breach of the Australian Code for the Responsible Conduct of Research (2018) and the institution’s research integrity arrangements.
  5. Adopt a policy that an ethics approval can be granted for the planned duration of a project.
  6. Adopt a policy that researchers must submit an ethical conduct report every 12 months during the currency of an ethics approval. Extended/repeated failure to do so may be considered a breach of the Australian Code for the Responsible Conduct of Research (2018) and the institution’s research integrity arrangements.
  7. Adopt a practice of timed reminders to researchers to provide overdue ethical conduct reports, culminating in breach proceedings[1].
  8. Adopt a policy and practice that, for every five years an approval remains active, the research office/HREC assesses whether there are circumstances that require a new ethics review of the project.

In most cases, a new review should not be required, but a standardised, clear checklist should be used to determine whether one is. Subscribers will find a suggested checklist for conducting such a check.

On this basis, I suggest research ethics review bodies/research offices can and should withdraw approval because of extended/repeated failure by a researcher to provide an ethical conduct report. This, however, must be based upon documented policy and procedure. It must also be foreshadowed in ethics approval notifications, report reminders and resource material.

[1] The approach here might be constrained by the research management system the institution is using, including whether it can usefully track correspondence between the researchers and the research office.

This post may be cited as:
Allen, G. (3 March 2020) Lost time may never be found again but is it time to talk about the duration of ethics approvals? Research Ethics Monthly. Retrieved from:

The F-word, or how to fight fires in the research literature


Professor Jennifer Byrne | University of Sydney Medical School and Children’s Hospital at Westmead


At home, I am constantly fighting the F-word. Channelling my mother, I find myself saying things like ‘don’t use that word’, ‘not here’, ‘not in this house’. As you can probably gather, it’s a losing battle.

Research has its own F-words – ‘falsification’, ‘fabrication’, different colours of the overarching F-word, ‘fraud’. Unlike the regular F-word, most researchers assume that there’s not much need to use the research versions. Research fraud is considered comfortably rare, the actions of a few outliers. This is the ‘bad apple’ view of research fraud – that fraudsters are different, and born, not made. These rare individuals produce papers that eventually act as spot fires, damaging their fields, or even burning them to the ground. However, as most researchers are not affected, the research enterprise tends to just shrug its collective shoulders, and carry on.

But, of course, there’s a second explanation for research fraud – the so-called ‘bad barrel’ hypothesis – that research fraud can be provoked by poorly regulated, extreme pressure environments. This is a less comfortable idea, because this implies that regular people might be tempted to cheat if subjected to the right (or wrong) conditions. Such environments could result in more affected papers, about more topics, published in more journals. This would give rise to more fires within the literature, and more scientific casualties. But again, these types of environments are not considered to be common, or widespread.

But what if the pressure to publish becomes more widely and acutely applied? The use of publication quotas has been described in different settings as being associated with an uptick in numbers of questionable publications (Hvistendahl 2013; Djuric 2015; Tian et al. 2016). When publication expectations harden into quotas, more researchers may feel forced to choose between their principles and their (next) positions.

This issue has been recently discussed in the context of China (Hvistendahl 2013; Tian et al. 2016), a population juggernaut with scientific ambitions to match. China’s research output has risen dramatically over recent years, and at the same time, reports of research integrity problems have also filtered into the literature. In biomedicine, these issues again have been linked with publication quotas in both academia and clinical medicine (Tian et al. 2016). A form of contract cheating has been alleged to exist in the form of paper mills, or for-profit organisations that provide research content for publications (Hvistendahl 2013; Liu and Chen 2018). Paper mill services allegedly extend to providing completed manuscripts to which authors or teams can add their names (Hvistendahl 2013; Liu and Chen 2018).

I fell into thinking about paper mills by accident, as a result of comparing five very similar papers that were found to contain serious errors, questioning whether some of the reported experiments could have been performed (Byrne and Labbé 2017). With my colleague Dr Cyril Labbé, we are now knee deep in analysing papers with similar errors (Byrne and Labbé 2017; Labbé et al. 2019), suggesting that a worrying number of papers may have been produced with some kind of undeclared help.

It is said that to catch a thief, you need to learn to think like one. So if I were running a paper mill, and wanted to hide many questionable papers in the biomedical literature, what would I do? The answer would be to publish papers on many low-profile topics, using many authors, across many low-impact journals, over many years.

In terms of available topics, we believe that the paper mills may have struck gold by mining the contents of the human genome (Byrne et al. 2019). Humans carry 40,000 different genes of two main types, the so-called coding and non-coding genes. Most human genes have not been studied in any detail, so they provide many publication opportunities in fields where there are few experts to pay attention.

Human genes can also be linked to cancer, allowing individual genes to be examined in different cancer types, multiplying the number of papers that can be produced for each gene (Byrne and Labbé 2017). Non-coding genes are known to regulate coding genes, so non-coding and coding genes can also be combined, again in different cancer types.

The resulting repetitive manuscripts can be distributed between many research groups, and then diluted across the many journals that publish papers examining gene function in cancer (Byrne et al. 2019). The lack of content experts for these genes, or poor reviewing standards, may help these manuscripts to pass into the literature (Byrne et al. 2019). And as long as these papers are not detected, and demand continues, such manuscripts can be produced over many years. So rather than having a few isolated fires, we could be witnessing a situation where many parts of the biomedical literature are silently, solidly burning.

When dealing with fires, I have learned a few things from years of mandatory fire training. In the event of a laboratory fire, we are taught to ‘remove’, ‘alert’, ‘contain’, and ‘extinguish’. I believe that these approaches are also needed to fight fires in the research literature.

We can start by ‘alerting’ the research and publishing communities to manuscript and publication features of concern. If manuscripts are produced to a pattern, they should show similarities in terms of formatting, experimental techniques, language and/or figure appearance (Byrne and Labbé 2017). Furthermore, if manuscripts are produced in large numbers, they could appear simplistic, with thin justifications for studying individual genes, and almost non-existent links between genes and diseases (Byrne et al. 2019). But most importantly, manuscripts produced en masse will likely contain mistakes, and these may constitute an Achilles heel enabling their detection (Labbé et al. 2019).

Acting on reports of unusual shared features and errors will help to ‘contain’ the numbers and influence of these publications. Detailed, effective screening by publishers and journals may detect more problematic manuscripts before they are published. Dedicated funding would encourage active surveillance of the literature by researchers, leading to more reports of publications of concern. Where these concerns are upheld, individual publications can be contained through published expressions of concern, and/or ‘extinguished’ through retraction.

At the same time, we must identify and ‘remove’ the fuels that drive systematic research fraud. Institutions should remove both unrealistic publication requirements, and monetary incentives to publish. Similarly, research communities and funding bodies need to ask whether neglected fields are being targeted for low value, questionable research. Supporting functional studies of under-studied genes could help to remove this particular type of fuel (Byrne et al. 2019).

And while removing, alerting, containing and extinguishing, we should not shy away from thinking about and using any necessary F-words. Thinking that research fraud shouldn’t be discussed will only help this to continue (Byrne 2019).

The alternative could be using the other F-word in ways that I don’t want to think about.


Byrne JA (2019). We need to talk about systematic fraud. Nature. 566: 9.

Byrne JA, Grima N, Capes-Davis A, Labbé C (2019). The possibility of systematic research fraud targeting under-studied human genes: causes, consequences and potential solutions. Biomarker Insights. 14: 1-12.

Byrne JA, Labbé C (2017). Striking similarities between publications from China describing single gene knockdown experiments in human cancer cell lines. Scientometrics. 110: 1471-93.

Djuric D (2015). Penetrating the omerta of predatory publishing: The Romanian connection. Sci Eng Ethics. 21: 183–202.

Hvistendahl M (2013). China’s publication bazaar. Science. 342: 1035–1039.

Labbé C, Grima N, Gautier T, Favier B, Byrne JA (2019). Semi-automated fact-checking of nucleotide sequence reagents in biomedical research publications: the Seek & Blastn tool. PLOS ONE. 14: e0213266.

Liu X, Chen X (2018). Journal retractions: some unique features of research misconduct in China. J Scholar Pub. 49: 305–319.

Tian M, Su Y, Ru X (2016). Perish or publish in China: Pressures on young Chinese scholars to publish in internationally indexed journals. Publications. 4: 9.

This post may be cited as:
Byrne, J. (18 July 2019) The F-word, or how to fight fires in the research literature. Research Ethics Monthly. Retrieved from: