ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Research intelligence: how to sniff out errors and fraud – Times Higher Education (Jack Grove | January 2020)

Posted by Admin on January 27, 2020

A growing number of data detectives are on the hunt for sloppy science and dodgy statistics. Jack Grove examines the methods they use

These days it is not just co-authors or peer reviewers who are checking journal papers for errors: a growing number of self-appointed fraud busters are scanning scientific literature for flaws.

This unpaid and mostly anonymous endeavour has led to the retractions of hundreds of papers and even disciplinary action where wrongdoing is exposed.

So how can scholars catch errors when reviewing others’ papers, or when double-checking their own work or that of collaborators?

Read the rest of this discussion piece

Evaluating ethics oversight during assessment of research integrity (Papers: Andrew Grey, et al | November 2019)

Posted by Admin on January 24, 2020

We provide additional information relevant to our previous publication on the quality of reports of investigations of research integrity by academic institutions. Despite concerns being raised about ethical oversight of research published by a group of researchers, each of the four institutional investigations failed to determine and/or report whether ethics committee approval was obtained for the majority of publications assessed.

Grey, A., Bolland, M. & Avenell, A. (2019) Evaluating ethics oversight during assessment of research integrity. Research Integrity and Peer Review, 4, 22.
Publisher (Open Access):

(US) New eLife editor Michael Eisen wants to shake up scientific publishing – Berkeley News (Robert Sanders | April 2019)

Posted by Admin on January 24, 2020

The University of California system’s recent decision to walk away from negotiations with scholarly journal publishing giant Elsevier highlights once again the many problems within the scientific publishing business, a $10 billion-per-year worldwide enterprise that is the bedrock of modern science. Publishers like Elsevier, Springer — which publishes the high-impact journal Nature — and dozens of other for-profit companies and nonprofit scientific societies are an essential part of the give-and-take of science, offering a place to publish and share new results. But they also charge scientists and the public to read those results, much of which the public originally funded through federal agencies such as the National Institutes of Health (NIH) and the National Science Foundation (NSF). The UC system most recently paid Elsevier $11 million for a year’s worth of access to its journals, which include the well-known medical journal The Lancet and more than 2,500 lesser-known titles, from Poetics to Fungal Biology.

Michael Eisen, a professor of molecular and cell biology and a Howard Hughes Medical Institute investigator, has done his part to disrupt the stodgy business, which he thinks not only takes advantage of authors and universities, but distorts the process of science. As a founder 19 years ago of the first open access journal, PLOS (Public Library of Science), he sought to establish a new business model where scientists pay to publish, while anyone can view the results for free. Other journals slowly moved in that direction, but even today, only about 20 percent of all published research is open access, and almost none of the papers appearing in high-profile publications like Nature, Science and PNAS (Proceedings of the National Academy of Sciences) can be read by the public without charge.

Appointed last month as editor-in-chief of the open access journal eLife — Berkeley Nobel laureate Randy Schekman is stepping down as founding editor — Eisen has a new platform to shake up the field of science publishing and help make it serve scientists and the public.

Read the rest of this discussion piece

Meta-analysis study indicates we publish more positive results – ARS Technica (John Timmer | December 2019)

Posted by Admin on January 13, 2020

Meta-analyses will only produce more reliable results if the studies are good.

While science as a whole has produced remarkably reliable answers to a lot of questions, it does so despite the fact that any individual study may not be reliable. Issues like small errors on the part of researchers, unidentified problems with materials or equipment, or the tendency to publish positive answers can alter the results of a single paper. But collectively, through multiple studies, science as a whole inches towards an understanding of the underlying reality.

Similar findings have been reported before, but it is important to rearticulate the value of negative results to science and practice. This speaks to poor research culture and training. University education, and even secondary and primary schooling, rarely acknowledge that failure is part of discovery. The rewards for ‘success’ are high, and the temptation this creates for students can lead to research misconduct.

A meta-analysis is a way to formalize that process. It takes the results of multiple studies and combines them, increasing the statistical power of the analysis. This may cause exciting results seen in a few small studies to vanish into statistical noise, or it can tease out a weak effect that’s completely lost in more limited studies.
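The pooling step described above can be sketched with a small inverse-variance (fixed-effect) meta-analysis. This is a generic illustration of the technique, not code from the article; the study effect sizes and standard errors below are made-up numbers.

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# The study data are hypothetical, chosen only to illustrate pooling.
import math

# (effect estimate, standard error) from several small hypothetical studies
studies = [(0.30, 0.20), (0.10, 0.15), (0.25, 0.25), (0.05, 0.10)]

# Each study is weighted by its precision (1 / variance)
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f}, pooled SE = {pooled_se:.3f}")
```

Note that the pooled standard error is smaller than any single study's standard error: that is the gain in statistical power the text refers to, and it is why a weak effect invisible in individual studies can emerge from the combined analysis.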

But a meta-analysis only works its magic if the underlying data is solid. And a new study that looks at multiple meta-analyses (a meta-meta-analysis?) suggests that one of those factors—our tendency to publish results that support hypotheses—is making the underlying data less solid than we like.

Publication bias

It’s possible for publication bias to be a form of research misconduct. If a researcher is convinced of their hypothesis, they might actively avoid publishing any results that would undercut their own ideas. But there are plenty of other ways for publication bias to set in. Researchers who find a weak effect might hold off on publishing in the hope that further research would be more convincing. Journals also have a tendency to favor publishing positive results—ones where a hypothesis is confirmed—and avoid publishing studies that don’t see any effect at all. Researchers, being aware of this, might adjust the publications they submit accordingly.
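The distorting mechanism described above can be shown with a toy simulation: many small studies of a genuinely weak effect, where only "significant" positive results reach publication. The true effect, standard error, and significance rule below are illustrative assumptions, not values from the study being discussed.

```python
# Toy simulation of publication bias (illustrative numbers only).
# Many noisy studies of a weak true effect; only studies whose estimate
# clears a z > 1.96 significance threshold get "published".
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.1   # the real (weak) effect
SE = 0.3            # standard error of each small study
N_STUDIES = 2000

estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]
published = [e for e in estimates if e / SE > 1.96]  # significant positives only

print(f"mean of all studies:      {statistics.mean(estimates):.2f}")
print(f"mean of published subset: {statistics.mean(published):.2f}")
```

Averaging all the studies recovers roughly the true effect, while the published subset alone overstates it several-fold; a meta-analysis fed only the published literature inherits that inflation, which is exactly the problem the meta-meta-analysis probes.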


Read the rest of this discussion piece