Professor Jennifer Byrne | University of Sydney Medical School and Children’s Hospital at Westmead
At home, I am constantly fighting the F-word. Channelling my mother, I find myself saying things like ‘don’t use that word’, ‘not here’, ‘not in this house’. As you can probably gather, it’s a losing battle.
Research has its own F-words – ‘falsification’, ‘fabrication’, different colours of the overarching F-word, ‘fraud’. Unlike the regular F-word, most researchers assume that there’s not much need to use the research versions. Research fraud is considered comfortably rare, the actions of a few outliers. This is the ‘bad apple’ view of research fraud – that fraudsters are different, and born, not made. These rare individuals produce papers that eventually act as spot fires, damaging their fields, or even burning them to the ground. However, as most researchers are not affected, the research enterprise tends to just shrug its collective shoulders, and carry on.
But, of course, there’s a second explanation for research fraud – the so-called ‘bad barrel’ hypothesis – that research fraud can be provoked by poorly regulated, extreme pressure environments. This is a less comfortable idea, because it implies that regular people might be tempted to cheat if subjected to the right (or wrong) conditions. Such environments could produce more affected papers, on more topics, published in more journals. This would give rise to more fires within the literature, and more scientific casualties. But again, these types of environments are not considered to be common, or widespread.
But what if the pressure to publish becomes more widely and acutely applied? The use of publication quotas has been described in different settings as being associated with an uptick in numbers of questionable publications (Hvistendahl 2013; Djuric 2015; Tian et al. 2016). When publication expectations harden into quotas, more researchers may feel forced to choose between their principles and their (next) positions.
This issue has been recently discussed in the context of China (Hvistendahl 2013; Tian et al. 2016), a population juggernaut with scientific ambitions to match. China’s research output has risen dramatically over recent years, and at the same time, reports of research integrity problems have also filtered into the literature. In biomedicine, these issues again have been linked with publication quotas in both academia and clinical medicine (Tian et al. 2016). A form of contract cheating has been alleged to exist in the form of paper mills, or for-profit organisations that provide research content for publications (Hvistendahl 2013; Liu and Chen 2018). Paper mill services allegedly extend to providing completed manuscripts to which authors or teams can add their names (Hvistendahl 2013; Liu and Chen 2018).
I fell into thinking about paper mills by accident, after comparing five very similar papers that were found to contain serious errors, errors that called into question whether some of the reported experiments could have been performed (Byrne and Labbé 2017). My colleague Dr Cyril Labbé and I are now knee-deep in analysing papers with similar errors (Byrne and Labbé 2017; Labbé et al. 2019), which suggest that a worrying number of papers may have been produced with some kind of undeclared help.
It is said that to catch a thief, you need to learn to think like one. So if I were running a paper mill, and wanted to hide many questionable papers in the biomedical literature, what would I do? The answer would be to publish papers on many low-profile topics, using many authors, across many low-impact journals, over many years.
In terms of available topics, we believe that the paper mills may have struck gold by mining the contents of the human genome (Byrne et al. 2019). Humans carry 40,000 different genes of two main types, the so-called coding and non-coding genes. Most human genes have not been studied in any detail, so they provide many publication opportunities in fields where there are few experts to pay attention.
Human genes can also be linked to cancer, allowing individual genes to be examined in different cancer types, multiplying the number of papers that can be produced for each gene (Byrne and Labbé 2017). Non-coding genes are known to regulate coding genes, so non-coding and coding genes can also be combined, again in different cancer types.
The resulting repetitive manuscripts can be distributed between many research groups, and then diluted across the many journals that publish papers examining gene function in cancer (Byrne et al. 2019). The lack of content experts for these genes, or poor reviewing standards, may help these manuscripts to pass into the literature (Byrne et al. 2019). And as long as these papers are not detected, and demand continues, such manuscripts can be produced over many years. So rather than having a few isolated fires, we could be witnessing a situation where many parts of the biomedical literature are silently, solidly burning.
When dealing with fires, I have learned a few things from years of mandatory fire training. In the event of a laboratory fire, we are taught to ‘remove’, ‘alert’, ‘contain’, and ‘extinguish’. I believe that these approaches are also needed to fight fires in the research literature.
We can start by ‘alerting’ the research and publishing communities to manuscript and publication features of concern. If manuscripts are produced to a pattern, they should show similarities in formatting, experimental techniques, language and/or figure appearance (Byrne and Labbé 2017). Furthermore, if manuscripts are produced in large numbers, they could appear simplistic, with thin justifications for studying individual genes, and almost non-existent links between genes and diseases (Byrne et al. 2019). But most importantly, manuscripts produced en masse will likely contain mistakes, and these may constitute an Achilles heel that enables their detection (Labbé et al. 2019).
Acting on reports of unusual shared features and errors will help to ‘contain’ the numbers and influence of these publications. Detailed, effective screening by publishers and journals may detect more problematic manuscripts before they are published. Dedicated funding would encourage active surveillance of the literature by researchers, leading to more reports of publications of concern. Where these concerns are upheld, individual publications can be contained through published expressions of concern, and/or ‘extinguished’ through retraction.
At the same time, we must identify and ‘remove’ the fuels that drive systematic research fraud. Institutions should remove both unrealistic publication requirements, and monetary incentives to publish. Similarly, research communities and funding bodies need to ask whether neglected fields are being targeted for low value, questionable research. Supporting functional studies of under-studied genes could help to remove this particular type of fuel (Byrne et al. 2019).
And while removing, alerting, containing and extinguishing, we should not shy away from thinking about and using any necessary F-words. Believing that research fraud shouldn’t be discussed will only help it to continue (Byrne 2019).
The alternative could be using the other F-word in ways that I don’t want to think about.
Byrne JA (2019). We need to talk about systematic fraud. Nature. 566: 9.
Byrne JA, Grima N, Capes-Davis A, Labbé C (2019). The possibility of systematic research fraud targeting under-studied human genes: causes, consequences and potential solutions. Biomarker Insights. 14: 1-12.
Byrne JA, Labbé C (2017). Striking similarities between publications from China describing single gene knockdown experiments in human cancer cell lines. Scientometrics. 110: 1471-93.
Djuric D (2015). Penetrating the omerta of predatory publishing: The Romanian connection. Sci Eng Ethics. 21: 183–202.
Hvistendahl M (2013). China’s publication bazaar. Science. 342: 1035–1039.
Labbé C, Grima N, Gautier T, Favier B, Byrne JA (2019). Semi-automated fact-checking of nucleotide sequence reagents in biomedical research publications: the Seek & Blastn tool. PLOS ONE. 14: e0213266.
Liu X, Chen X (2018). Journal retractions: some unique features of research misconduct in China. J Scholar Pub. 49: 305–319.
Tian M, Su Y, Ru X (2016). Perish or publish in China: Pressures on young Chinese scholars to publish in internationally indexed journals. Publications. 4: 9.
This post may be cited as:
Byrne, J. (18 July 2019) The F-word, or how to fight fires in the research literature. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/the-f-word-or-how-to-fight-fires-in-the-research-literature