ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

A toast to the error detectors – Nature (Simine Vazire | December 2019)

Posted by Admin on March 24, 2020

Let 2020 be the year in which we value those who ensure that science is self-correcting.

Last month, I got a private Twitter message from a postdoc bruised by the clash between science as it is and how it should be. He had published a commentary in which he pointed out errors in a famous researcher’s paper. The critique was accurate, important and measured — a service to his field. But it caused him problems: his adviser told him that publishing such criticism had crossed a line, and he should never do it again.

Scientists are very quick to say that science is self-correcting, but those who do the work behind this correction often get accused of damaging their field, or worse. My impression is that many error detectors are early-career researchers who stumble on mistakes made by eminent scientists, and naively think that they are helping by pointing out those problems — but, after doing so, are treated badly by the community.

Stories of scientists showing unwarranted hostility to error detectors are all too common. Yes, criticism, like science, should be done carefully, with due diligence and a sharp awareness of personal fallibility. Error detectors need to keep conversations focused on concrete facts, and should be open to benign explanations for apparent problems.

Read the rest of this discussion piece

Scientists reveal what they learnt from their biggest mistakes – Nature Index (Gemma Conroy | March 2020)

Posted by Admin on March 23, 2020

How retractions can be a way forward.

Be it a botched experiment or a coding error, mistakes are easily made but harder to handle, particularly if they find their way into a published paper.

Although retracting a paper due to an error may not seem a desirable career milestone, it is seen as important for building trust within the research community and for upholding scientific rigor.

A 2017 study found that authors who retract their papers due to a mistake earn praise from peer reviewers and other researchers for their honesty.

Below are four lessons from researchers who have retracted flawed papers.

Read the rest of this discussion piece

Defining predatory journals and responding to the threat they pose: a modified Delphi consensus process (Papers: Samantha Cukier, et al | February 2020)

Posted by Admin on February 18, 2020

Post Beall’s List (and, truth be told, while the list was live), an agreed definition of predatory publishers (questionable publishers) is essential, as is a good sense of their impact. This recent open access paper is a great step in the right direction.

Abstract
Objective
To conduct a Delphi survey informing a consensus definition of predatory journals and publishers.

Design
This is a modified three-round Delphi survey delivered online for the first two rounds and in-person for the third round. Questions encompassed three themes: (1) predatory journal definition; (2) educational outreach and policy initiatives on predatory publishing; and (3) developing technological solutions to stop submissions to predatory journals and other low-quality journals.

Participants
Through snowball and purposive sampling of targeted experts, we identified 45 noted experts in predatory journals and journalology. The international group included funders, academics and representatives of academic institutions, librarians and information scientists, policy makers, journal editors, publishers, researchers involved in studying predatory journals and legitimate journals, and patient partners. In addition, 198 authors of articles discussing predatory journals were invited to participate in round 1.

Results
A total of 115 individuals (107 in round 1 and 45 in rounds 2 and 3) completed the survey on predatory journals and publishers. We reached consensus on 18 items out of a total of 33 to be included in a consensus definition of predatory journals and publishers. We came to consensus on educational outreach and policy initiatives on which to focus, including the development of a single checklist to detect predatory journals and publishers, and public funding to support research in this general area. We identified technological solutions to address the problem: a ‘one-stop-shop’ website to consolidate information on the topic and a ‘predatory journal research observatory’ to identify ongoing research and analysis about predatory journals/publishers.

Conclusions
In bringing together an international group of diverse stakeholders, we were able to use a modified Delphi process to inform the development of a definition of predatory journals and publishers. This definition will help institutions, funders and other stakeholders generate practical guidance on avoiding predatory journals and publishers.

Cukier, S., Lalu, M., Bryson, G. L., Cobey, K. D., Grudniewicz, A. & Moher, D. (2020) Defining predatory journals and responding to the threat they pose: a modified Delphi consensus process. BMJ Open 10:e035561. doi: 10.1136/bmjopen-2019-035561
Publisher (Open Access): https://bmjopen.bmj.com/content/10/2/e035561.full

Tell it like it is – Nature Human Behaviour (Editorial | January 2020)

Posted by Admin on February 2, 2020

Every research paper tells a story, but the pressure to provide ‘clean’ narratives is harmful for the scientific endeavour.

Research manuscripts provide an account of how their authors addressed a research question or questions, the means they used to do so, what they found and how the work (dis)confirms existing hypotheses or generates new ones. The current research culture is characterized by significant pressure to present research projects as conclusive narratives that leave no room for ambiguity or for conflicting or inconclusive results.

We have seen this in grant applications where, in several instances, the applicants all but deliberately ignored the work of others that contradicted their hypotheses or findings rather than placing their own work in context.

The pressure to produce such clean narratives, however, represents a significant threat to validity and runs counter to the reality of what science looks like.

Prioritizing conclusive over transparent research narratives incentivizes a host of questionable research practices: hypothesizing after the results are known, selectively reporting only those outcomes that confirm the original predictions, or excluding from the research report studies that provide contradictory or messy results. Each of these practices damages credibility and presents a distorted picture of the research that prevents cumulative knowledge-building.

Read the rest of this discussion piece
