ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Resource Library


‘Fraud and Misconduct in Research’ – Inside Higher Ed (Nick Roll | December 2017)

Posted by Admin on December 6, 2017

Using a database of 750 cases of research fraud from around the world, professors examine fraud as a phenomenon, tracing its history and trajectory and looking at what can be done about it.

When a researcher is busted for fraud, the exposure often trickles out from source to source. Whether it’s exposed by an institution, professional association, journal or the media, word gets out.

Depending on how big a deal a case is, it might make international headlines. Other times, the fraud is dealt with quietly. But why does it occur, and why does it keep occurring? From an environmental and organizational level, what can be done to combat research fraud? Is there something to be learned by examining fraud at a level beyond just the case-by-case stories, sometimes packaged in shock journalism with explosive headlines?

Those are the types of questions that caught the attention of Nachman Ben-Yehuda and Amalya Oliver-Lumerman, professors in the Department of Sociology and Anthropology at the Hebrew University in Jerusalem — inspiring them to write their own catalog of the history and ramifications of research fraud in Fraud and Misconduct in Research: Detection, Investigation and Organizational Response (University of Michigan Press).

Read the rest of this discussion piece

AI Research is in Desperate Need of an Ethical Watchdog – Wired (Sophia Chen | September 2017)

Posted by Admin on December 5, 2017

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They’d made a machine learning algorithm that essentially works as gaydar. After training it with tens of thousands of photographs from dating sites, the algorithm could perform better than a human judge in specific instances. For example, when given photographs of a gay white man and a straight white man taken from dating sites, the algorithm could guess which one was gay more accurately than actual people participating in the study.* The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

The increasing number of projects such as this highlights the degree to which AI research can be an unanticipated source of harm. Currently such work is unlikely to be submitted for research ethics review, and the researchers will probably be unfamiliar with ethical considerations. Existing research ethics committees and institutional arrangements will be ill-equipped to help, but some kind of framework is required to give researchers feedback on the ethics of their work.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Read the rest of this discussion piece

US court issues injunction against OMICS to stop “deceptive practices” – Retraction Watch (Andrew P. Han | November 2017)

Posted by Admin on November 27, 2017

A US government agency has won an initial court ruling against OMICS, which the government says will help stop the academic publisher’s deceptive business practices.

Today, the Federal Trade Commission (FTC) announced that it won a preliminary injunction in September in its lawsuit against Srinubabu Gedela, CEO of OMICS Group and other companies.

The lawsuit, filed in August 2016, accused the defendants — which include Gedela and OMICS Group, iMedPub, and Conference Series — of deceptive business practices related to journal publishing and scientific conferences. The FTC alleged the defendants used the names of prominent researchers to draw conference attendees, even though the researchers had not agreed to participate; misled readers about whether articles had been peer reviewed; failed to provide authors with transparent information about publishing fees prior to submission; and presented misleading “impact factors” for journals.

Read the rest of this news story
Read the short Inside Higher Ed Quick Take 
FTC Charges Academic Journal Publisher OMICS Group Deceived Researchers

Communicating risk in human terms – The Ethics Blog (Pär Segerdahl | October 2017)

Posted by Admin on November 26, 2017

The concept of risk used in genetics is a technical term. For the specialist, risk is the probability of an undesired event, for example, that an individual develops some form of cancer. Risk is usually stated as a percentage.

This discussion piece raises an important issue for describing risk to participants across all (sub)disciplines: talking in percentages is unlikely to be meaningful to most potential participants.

It is well known that patients have difficulty grasping the probabilistic notion of risk. What do their difficulties mean?

Technical notions, which experts use in their specialist fields, usually have high status. The attitude is: this is what risk really is. From such an attitude, people’s difficulties mean: they have difficulty understanding risk. Therefore, we have to help them understand, by using educational tools that explain what we mean (we who know what risk is).

Read the rest of this discussion piece
