Is the sky falling? Trust in academic research in 2015

For anyone who has been paying even the slightest attention to scholarly publishing over the past few years, it has been impossible to ignore a seemingly growing pattern: astonishing advances are published in prestigious journals and presented at press conferences by proud scientists; the findings are then questioned, first on Twitter, then on blogs, then in newspapers; and finally the very same scientists face the same media again, but this time to report that their findings were not correct, perhaps even fabricated. Corrections follow, sometimes quickly, sometimes slowly, of whole or part of the published research. Those outside academia wonder what is going on.

Behind the headlines, the issue may actually be worse. For every dramatic case that makes the news, there are many more in which researchers make their findings only partially available or, when asked, cannot find or share the data that underlie their findings – not because of fraud or fabrication but because of sloppiness, poor training, or simply a lack of proper structures around the research.

What’s going on? Underlying it all is the often poorly appreciated fact that academic knowledge (especially in science) rarely, if ever, advances in clear quantum leaps. More often, research findings are messy and incremental. Yet despite this, current ways of measuring academics and academic institutions incentivise – even require – academics to compete for publication in highly selective journals, punish those who don’t, and thus reward behaviour that fits this system. This issue was acknowledged explicitly by the UK Nuffield Council on Bioethics in its report, The Culture of Scientific Research in the UK, which noted that the “‘pressure to publish’ can encourage the fabrication of data, altering, omitting or manipulating data, or ‘cherry picking’ results to report.”

However, the good news is that reform is in the air in how science is assessed and viewed. This reform derives partly from external pressure resulting from the high-profile cases but, more constructively and probably more sustainably, arises from the many conversations circulating over the past several years among academics and more enlightened publishers, policy makers and funders.

Such initiatives have started with a growing understanding that measuring worth and awarding tenure primarily on the basis of a single, commercial measure of journals’ (and by implication scientists’) worth – the Thomson Reuters journal impact factor – is now outdated (if it was ever valid). An important element of the change is the technical development of practical alternatives, such as new article-level and alternative metrics, which aim to measure impact in multiple different ways (e.g. those from PLOS, Impactstory and Altmetric). Crucially, these technical developments are now increasingly backed by international agreement that change is needed, highlighted by DORA and the UK’s HEFCE.

Other initiatives, such as governments’ (including the Australian Government’s) interest in wider societal impact and especially business competitiveness – none of which seem to be well predicted by current journal-level metrics – could, and probably should, also lead to an unpicking of the dominance of older metrics. Equally important, however, is the culture of openness now increasingly permeating academia, which includes open access to research publications but, more crucially in this context, openness of the research process itself, including its methods and underlying data. All of this feeds into another increasingly important concept, that of transparency in reporting and reproducibility, which can counteract waste in research and underpin the changes needed to address it.

So we are at a time of great change, when the technology that supports open availability of data and publications, new methods of research and academic assessment, and a prioritising of reproducibility are all converging on a research system that has the potential to better support society’s needs. How quickly these opportunities are taken up remains to be seen – and that points to the harder challenge: changing the mindset of individuals and institutions.

Dr Virginia Barbour, COPE Chair
Brisbane, Australia
email: co********@***************cs.org
web: http://publicationethics.org/

These comments reflect my personal opinions and not necessarily those of COPE or my employers.

This blog may be cited as:
Barbour, V (2015, 26 July) Is the sky falling? Trust in academic research in 2015. AHRECS Blog. Retrieved from https://ahrecs.com/research-integrity/is-the-sky-falling-trust-in-academic-research-in-2015
