Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Guest Post: Interesting Versus True? Measuring Transparency and Reproducibility of Biomedical Articles – Scholarly Kitchen (Anita Bandrowski and Martijn Roelandse | December 2019)

Published/Released on December 18, 2019 | Posted by Admin on January 8, 2020
 




Much time has been spent thinking about how to hone the results published in scientific papers toward the interesting. Studies with short titles attract more newspaper interest; studies about coffee or wine are the superstars of Twitter. But in reality, most science is not so flashy. Studies frequently take years to complete and represent careful work by scientists, work that, when well considered, gives us very important insights about the world we live in, as well as solutions to global problems from climate change to disease.

There are multiple ways to measure how much attention a study is receiving. For example, the number of times a study is cited, and by extension the average citation rate of a journal, is a common metric. Various alternative measures of “popularity” (altmetrics), such as the number of times a study is tweeted, have also been devised. Until now, however, there has been no easy way to measure any aspect of the quality of a scientific study.
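
As a rough sketch (not part of the article; the figures and field names below are made up for illustration), the attention measures described above reduce to simple arithmetic over per-article counts, for example a journal's average citation rate and a tweet tally:

    # Hypothetical per-article counts (illustrative only).
    articles = [
        {"title": "Short, punchy title", "citations": 42, "tweets": 310},
        {"title": "A long, careful, descriptive title", "citations": 17, "tweets": 12},
        {"title": "Coffee, wine, and health", "citations": 8, "tweets": 950},
    ]

    # Average citation rate of the "journal": total citations / number of articles.
    average_citation_rate = sum(a["citations"] for a in articles) / len(articles)

    # A simple altmetric-style tally: how often the articles were tweeted.
    total_tweets = sum(a["tweets"] for a in articles)

    print(f"Average citations per article: {average_citation_rate:.1f}")  # 22.3
    print(f"Total tweets across articles: {total_tweets}")                # 1272

Neither number says anything about whether the underlying work is sound, which is the gap the discussion piece goes on to address.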

Looking broadly across the literature in various meta-analyses, scientists have determined that some methods do affect study quality. For example, MacLeod and colleagues have spent several decades studying which factors are associated with overinflation of results. The short version of their findings is that factors that reduce investigator bias, such as blinding experimenters and properly randomizing subjects, are associated with roughly a 50% change in effect size.

Read the rest of this discussion piece


