Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS) | ACN 101 321 555

Resource Library


(Queensland, Australia) Ex-judge to investigate controversial marine research – Times Higher Education (John Ross | January 2020)

Posted by Admin on January 11, 2020

An Australian university has launched an investigation into the research record of a discredited scientist it educated, as findings by academics who supervised her doctoral training are challenged.

James Cook University said it has appointed an external panel to look for evidence of misconduct in the research conducted by marine biologist Oona Lönnstedt between 2010 and 2014, when she was undertaking PhD studies at the Queensland institution.

The university said the panel’s as yet unidentified members include “eminent academics with expertise in field work, marine science and ethics” and a former federal court judge.

Read the rest of this news story

(Queensland, Australia) Analysis challenges slew of studies claiming ocean acidification alters fish behavior – Science

Posted by Admin on January 11, 2020

Over the past decade, marine scientists published a series of studies warning that humanity’s burgeoning carbon dioxide (CO2) emissions could cause yet another devastating problem. They reported that seawater acidified by rising CO2—already known to threaten organisms with carbonate shells and skeletons, such as corals—could also cause profound, alarming changes in the behavior of fish on tropical reefs. The studies, some of which made headlines, found that acidification can disorient fish, make them hyperactive or bolder, alter their vision, and lead them to become attracted to, rather than repelled by, the smell of predators. Such changes, researchers noted, could cause populations to plummet.

But in a Nature paper published today, researchers from Australia, Canada, Norway, and Sweden challenge a number of those findings. In a major, 3-year effort that studied six fish species, they could not replicate three widely reported behavioral effects of ocean acidification. The replication team notes that many of the original studies came from the same relatively small group of researchers and involved small sample sizes. That and other “methodological or analytical weaknesses” may have led the original studies astray, they argue.

“It’s an exceptionally thorough replication effort,” says Tim Parker, a biologist and an advocate for replication studies at Whitman College in Walla Walla, Washington. Marine scientist Andrew Esbaugh of the University of Texas, Austin, agrees that it’s “excellent, excellent work.”

Read the rest of this discussion piece

Gazing into the Abyss of P-Hacking: HARKing vs. Optional Stopping – R-Bloggers (Angelika Stefan | November 2019)

Posted by Admin on December 26, 2019

Almost all researchers have experienced the tingling feeling of suspense that arises right before they take a look at long-awaited data: Will the data support their favored hypothesis? Will they yield interesting or even groundbreaking results? In a perfect world (especially one without publication bias), the cause of this suspense would be nothing but scientific curiosity. However, the world, and specifically the incentive system in science, is not perfect. A lot of pressure rests on researchers to produce statistically significant results. For many researchers, statistical significance is the cornerstone of their academic career, so non-significant results in an important study can not only call their scientific convictions into question but also dash their hopes of professional promotion (although, fortunately, things are changing for the better).

Now, what does a researcher do when confronted with messy, non-significant results? According to several much-cited studies (for example, John et al., 2012; Simmons et al., 2011), a common reaction is to start sampling again (and again, and again, …) in the hope that a somewhat larger sample size can boost significance. Another reaction is to conduct one hypothesis test after another on the existing sample until at least one of them becomes significant (see, for example, Simmons et al., 2011; Kerr, 1998). These practices, along with some others, are commonly known as p-hacking, because they are designed to drag the famous p-value right below the .05 mark that usually indicates statistical significance. Undisputedly, p-hacking works (for a demonstration, try out the p-hacker app). The two questions we want to answer in this blog post are: How does it work, and why is that bad for science?
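The first of these practices, optional stopping, is easy to demonstrate by simulation. The sketch below is ours, not from the original post (the function name and parameters are illustrative): it draws two groups from the same distribution, so the null hypothesis is true by construction, then keeps adding observations and re-testing until significance appears or a sample cap is reached.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def optional_stopping_trial(start_n=20, step=10, max_n=200, alpha=0.05):
    """One simulated 'study' under a true null: keep adding observations
    and re-testing until p < alpha or the sample cap is reached."""
    a = list(rng.normal(0.0, 1.0, start_n))
    b = list(rng.normal(0.0, 1.0, start_n))
    while len(a) <= max_n:
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            return True   # "significant" despite no real effect
        a.extend(rng.normal(0.0, 1.0, step))
        b.extend(rng.normal(0.0, 1.0, step))
    return False

trials = 2000
false_positives = sum(optional_stopping_trial() for _ in range(trials))
print(f"False positive rate with optional stopping: {false_positives / trials:.3f}")
# Well above the nominal alpha of 0.05.
```

With these illustrative settings the simulated researcher gets up to 19 chances at significance per study, so the false positive rate climbs well past 5% even though no real effect exists.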

As many people may have heard, p-hacking works because it exploits a process called alpha error accumulation, which is covered in most introductory statistics classes (but also easily forgotten again). Basically, alpha error accumulation means that as one conducts more and more hypothesis tests, the probability of making a wrong test decision at least once increases. Specifically, this wrong test decision is a false positive decision, or alpha error, which means proclaiming the existence of an effect although, in fact, there is none. In statistical terms, an alpha error occurs when a test yields a significant result although the null hypothesis (“there is no effect”) is true in the population. This means that p-hacking leads to the publication of an increased rate of false positive results: studies that claim to have found an effect when their result is just due to the randomness of the data. Such studies will never replicate.
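For k independent tests conducted at level alpha, the accumulation has a simple closed form: the probability of at least one false positive is 1 - (1 - alpha)^k. A few lines (our illustration, not from the post) make the growth concrete:

```python
# Alpha error accumulation across k independent tests at alpha = .05:
# P(at least one false positive) = 1 - (1 - alpha)^k
alpha = 0.05
for k in (1, 5, 10, 20, 60):
    print(f"{k:3d} tests -> P(>=1 false positive) = {1 - (1 - alpha) ** k:.3f}")
# Rises from 0.050 for a single test to about 0.401 for 10 tests
# and about 0.954 for 60 tests.
```

In practice the tests a p-hacker runs are usually correlated, so the exact numbers differ, but the direction of the inflation is the same.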

Read the rest of this discussion piece

Friday afternoon’s funny – Disaster recovery plan

Posted by Admin on December 20, 2019

Cartoon by Don Mayne (www.researchcartoons.com)

Like most of Don’s work, this cartoon poses important questions for researchers: Do you have a disaster recovery plan? Is the plan stored away from your office and data? If not, you’re tempting fate.
