Meta-analyses can only produce reliable results if the underlying studies are good.
While science as a whole has produced remarkably reliable answers to a lot of questions, it does so despite the fact that any individual study may not be reliable. Issues like small errors on the part of researchers, unidentified problems with materials or equipment, or the tendency to publish positive answers can alter the results of a single paper. But collectively, through multiple studies, science as a whole inches towards an understanding of the underlying reality.
Similar findings have been reported before, but it’s important to rearticulate the value of negative results to science and practice. This speaks to a poor research culture and training. University education, and even secondary and primary schooling, do not acknowledge that failure is part of discovery. The rewards for ‘success’ are high, and the temptation this creates for students can lead to research misconduct.
But a meta-analysis only works its magic if the underlying data is solid. And a new study that looks at multiple meta-analyses (a meta-meta-analysis?) suggests that one of those factors—our tendency to publish results that support hypotheses—is making the underlying data less solid than we’d like.
It’s possible for publication bias to be a form of research misconduct. If a researcher is convinced of their hypothesis, they might actively avoid publishing any results that would undercut their own ideas. But there are plenty of other ways for publication bias to set in. Researchers who find a weak effect might hold off on publishing in the hope that further research will be more convincing. Journals also have a tendency to favor positive results—those where a hypothesis is confirmed—and to avoid publishing studies that see no effect at all. Researchers, aware of this, might adjust the publications they submit accordingly.
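To see why this matters for a meta-analysis, consider a minimal simulation (an illustration of the general mechanism, not drawn from the study discussed here; the threshold and sample sizes are arbitrary assumptions). Many small studies measure the same weak true effect; if only the "convincing" positive results reach publication, pooling the published studies overestimates the effect:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.1    # the small real effect every study is measuring
N_STUDIES = 1000     # number of independent studies
SAMPLE_SIZE = 30     # participants per study (noisy, as in small studies)

def run_study():
    """Simulate one study: the mean of noisy measurements of the effect."""
    samples = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(SAMPLE_SIZE)]
    return statistics.mean(samples)

effects = [run_study() for _ in range(N_STUDIES)]

# Unbiased pooling: average every study, whether or not it "worked".
all_pooled = statistics.mean(effects)

# Publication bias: only studies with a clearly positive estimate appear
# in the literature (0.2 is a hypothetical "convincing result" cutoff).
published = [e for e in effects if e > 0.2]
biased_pooled = statistics.mean(published)

print(f"true effect:            {TRUE_EFFECT}")
print(f"pooled, all studies:    {all_pooled:.3f}")
print(f"pooled, published only: {biased_pooled:.3f}")
```

Pooling all studies recovers an estimate close to the true effect, while pooling only the published subset lands well above it: the meta-analysis faithfully summarizes a literature that is itself skewed.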