For anyone who has been paying even the slightest attention to scholarly publishing over the past few years, it has been impossible to ignore what seems to be a growing number of astonishing advances published in prestigious journals and presented at press conferences by proud scientists, followed by questioning of those findings first on Twitter, then on blogs, then in newspapers, until finally the very same scientists face the same media again, this time to report that their findings were not correct, perhaps even fabricated. Corrections follow, sometimes quickly, sometimes slowly, of all or part of the published research. Those outside academia wonder what is going on.
Away from the headlines, the issue may actually be worse. For every dramatic case that makes the news, there are many more in which researchers make their findings only partially available or, when asked, cannot find or share the data that underlie their findings – not because of fraud or fabrication but because of sloppiness, poor training, or simply a lack of proper structures around the research.
What’s going on? Underlying it all is the often poorly appreciated fact that academic knowledge (especially in science) rarely, if ever, advances in clear quantum leaps. More often, research findings are messy and incremental. Yet despite this, current ways of measuring academics and academic institutions incentivise – even require – academics to compete for publication in highly selective journals and punish those who don’t, thus rewarding behaviour that fits this system. This issue was acknowledged explicitly by the UK Nuffield Council on Bioethics in its report, The Culture of Scientific Research in the UK, which noted that the “‘pressure to publish’ can encourage the fabrication of data, altering, omitting or manipulating data, or ‘cherry picking’ results to report.”
However, the good news is that reform is in the air in how science is assessed and viewed. This reform derives partly from external pressure resulting from the high-profile cases but, more constructively and probably more sustainably, from the many conversations circulating over the past several years among academics and more enlightened publishers, policymakers and funders.
Such initiatives have started from a growing understanding that measuring worth and awarding tenure primarily on the basis of a single, commercial measure of journals’ (and, by implication, scientists’) worth – the Thomson Reuters journal impact factor – is now outdated (if it was ever valid). An important element of the change is the technical development of practical alternatives, such as new article-level and alternative metrics, which aim to measure many different forms of impact (e.g. those from PLOS, Impact Story and Altmetric). Crucially, these technical developments are now increasingly backed by international agreement that change is needed, as highlighted by DORA and the UK’s HEFCE.
Other initiatives, such as governments’ (including the Australian Government’s) interest in wider societal impact and especially business competitiveness – none of which seem to be well predicted by current journal-level metrics – could, and probably should, also lead to an unpicking of the dominance of older metrics. Equally important, however, is the culture of openness now increasingly permeating academia, which includes open access to research but, more crucially in this context, openness about the research process itself, including its methods and underlying data. All of this feeds into another increasingly important concept, that of transparency in reporting and reproducibility, which can counteract waste in research, and into the changes needed to achieve that.
So we are at a time of great change, when the technology that supports open availability of data and publications, new methods of research and academic assessment, and a prioritising of reproducibility are all moving us towards a research system with the potential to better support society’s needs. How quickly these opportunities are taken up remains to be seen – and that points to the harder challenge: changing the mindset of individuals and institutions.
Dr Virginia Barbour, COPE Chair
Brisbane, Australia
email: cope_chair@publicationethics.org
web: http://publicationethics.org/
These comments reflect my personal opinions and not necessarily those of COPE or my employers
This blog may be cited as:
Barbour, V (2015, 26 July) Is the sky falling? Trust in academic research in 2015. AHRECS Blog. Retrieved from https://ahrecs.com/research-integrity/is-the-sky-falling-trust-in-academic-research-in-2015
Comments on “Is the sky falling? Trust in academic research in 2015”
Research malpractice is an important issue – funding mis-spent, wrong directions taken on the basis of dodgy data. Vigilance and appropriate guidelines and education are therefore important. However, while acknowledging that there is a problem of research malpractice, arguably driven by perverse incentives, it also needs to be recognized that each year millions of articles are published around the world. (A paper by Mark Ware, http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf, gives the figure of 1.8–1.9 million articles in the Science, Technology, Medical area alone.) Most of these will not be earth-shattering, but the overwhelming majority are united in being ethically unexceptionable. So, is the sky falling? If the sky were a ceiling, you might need to call the plasterer, but not the structural engineer.
The increasing pressure on our research staff to simultaneously perform cutting-edge research, publish well and demonstrate societal impact will challenge institutions to adopt a balanced suite of metrics and incentives to encourage appropriate behaviour. My perception is that the issues described in this article around corrections, retractions and data quality might arise from the growing trend to share the ‘impact story’ before complete, or even partial, impact has been realised. Research staff are encouraged to engage more with the media (traditional and social), laying a seductive trap to talk up results as the next breakthrough when the vast majority represent incremental advances in knowledge. Institutions recognise the potential to improve communication of their impact and societal value but need to guide staff carefully in order to maintain the academic integrity of their research. The irony is that there might be increasing pressure over time to publish less in order to maintain public confidence in the rigour of our research systems.