It’s been a hell of a year for science scandals. In July, Stanford University president Marc Tessier-Lavigne, a prominent neuroscientist, announced he would step down after an investigation, prompted by reporting by the Stanford Daily, found that members of his lab had manipulated data or engaged in “deficient scientific practices” in five academic papers on which he’d been the principal author. A month beforehand, internet sleuths publicly accused Harvard professor Francesca Gino—a behavioral scientist studying, among other things, dishonesty—of fraudulently altering data in several papers. (Gino has denied allegations of misconduct.) And the month before, Nobel Prize–winner Gregg Semenza, a professor at Johns Hopkins School of Medicine, had his seventh paper retracted for “multiple image irregularities.”
In this piece, published by Mother Jones in November 2023, the magazine examines why so many research papers are being retracted. As the article observes, despite a string of startling international cases, retractions still account for well under 1 percent of published work. Even the number of papers that Retraction Watch co-founder Ivan Oransky believes ought to be retracted would represent a small fraction of the total volume of published academic research. The piece also discusses rewarding and funding the sleuths who detect and call out dodgy papers.
Retractions, which can happen for a variety of reasons, including falsification of data, plagiarism, bad methodology, or other errors, aren’t necessarily a modern phenomenon: As Oransky wrote for Nature last year, the oldest retraction in their database is from 1756, a critique of Benjamin Franklin’s research on electricity. But in the digital age, whistleblowers have better technology to investigate and expose misconduct. “We have better tools and greater awareness,” says Daniel Kulp, chair of the UK-based Committee on Publication Ethics. “There are in some sense more people looking with that critical mindset.” (It’s a bit like how in the United States, the rise of cancer diagnoses in the last two decades may in part be attributable to better, earlier cancer screenings.)
In fact, experts say there should probably be more retractions: A 2009 meta-analysis of 18 surveys of scientists, for instance, found that about 2 percent of respondents admitted to having “fabricated, falsified, or modified data or results at least once,” the authors write, with slightly more than 33 percent admitting to “other questionable research practices.” Surveys like these have led the Retraction Watch team to estimate that 1 out of 50 papers ought to be retracted on ethical grounds or for error. Currently, fewer than 1 out of 1,000 are. (And if it seems like behavioral research and neuroscience are particularly retraction-prone fields, that’s likely because journalists tend to focus on those cases, Oransky says; “Every field has problematic research,” he adds.)