Fakery spans “beautified” data, photoshopped images, and “paper mills.” Experts and institutions are employing tools to spot deceptive research and mitigate its reach.
LIKE MUCH OF the internet, PubPeer is the sort of place where you might want to be anonymous. There, under randomly assigned taxonomic names like Actinopolyspora biskrensis (a bacterium) and Hoya camphorifolia (a flowering plant), “sleuths” meticulously document mistakes in the scientific literature. Though they write about all sorts of errors, from bungled statistics to nonsensical methodology, their collective expertise is in manipulated images: clouds of protein that show suspiciously crisp edges, or identical arrangements of cells in two supposedly distinct experiments. Sometimes, these irregularities mean nothing more than that a researcher tried to beautify a figure before submitting it to a journal. But they nevertheless raise red flags.
Fraud in research is becoming harder to spot, and artificial intelligence, generated images, and paper mills are making it harder still. Human reviewers working alone are struggling to keep pace, but humans and AI tools working together show promise. Fortunately, a well-resourced, reflective research culture can reduce both the temptation to commit fraud and the space in which it can occur.
And even when fraud is never discovered, good scientific practices can effectively reduce its impact on science. Fraud “cannot be excluded from science, just like we cannot exclude murder in our society,” says Marcel van Assen, a principal investigator in the Meta-Research Center at the Tilburg School of Social and Behavioral Sciences. But as researchers and advocates continue to push science to be more open and impartial, he says, fraud “will be less prevalent in the future.”
