ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

AI Research is in Desperate Need of an Ethical Watchdog – Wired (Sophia Chen | September 2017)

Posted by Admin on December 5, 2017

ABOUT A WEEK ago, Stanford University researchers posted online a study on the latest dystopian AI: They’d made a machine learning algorithm that essentially works as gaydar. After training it with tens of thousands of photographs from dating sites, the algorithm could perform better than a human judge in specific instances. For example, when given photographs of a gay white man and a straight white man taken from dating sites, the algorithm could guess which one was gay more accurately than actual people participating in the study.* The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

The increasing number of projects such as this highlights the degree to which AI research can be an unanticipated source of harm. Currently, such work is unlikely to be submitted for research ethics review, and the researchers will probably be unfamiliar with ethical considerations. Existing research ethics committees and institutional arrangements are ill-equipped to be helpful, but some kind of framework is required to give researchers feedback on the ethics of their work.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Read the rest of this discussion piece

US court issues injunction against OMICS to stop “deceptive practices” – Retraction Watch (Andrew P. Han | November 2017)

Posted by Admin on November 27, 2017

A US government agency has won an initial court ruling against OMICS, which the government says will help stop the academic publisher’s deceptive business practices.

Today, the Federal Trade Commission (FTC) announced that it won a preliminary injunction in September in its lawsuit against Srinubabu Gedela, CEO of OMICS Group and other companies.

The lawsuit, filed in August 2016, accused the defendants — which include Gedela and OMICS Group, iMedPub, and Conference Series — of deceptive business practices related to journal publishing and scientific conferences. The FTC alleged the defendants used the names of prominent researchers to draw conference attendees, even though the researchers had not agreed to participate; misled readers about whether articles had been peer reviewed; failed to provide authors with transparent information about publishing fees prior to submission; and presented misleading “impact factors” for journals.

Read the rest of this news story
Read the short Inside Higher Ed Quick Take 
FTC Charges Academic Journal Publisher OMICS Group Deceived Researchers

Communicating risk in human terms – The Ethics Blog (Pär Segerdahl | October 2017)

Posted by Admin on November 26, 2017

The concept of risk used in genetics is a technical term. For the specialist, risk is the probability of an undesired event, for example, that an individual develops some form of cancer. Risk is usually stated as a percentage.

This discussion piece raises an issue important to describing risk to participants across all (sub)disciplines: talking in percentages is unlikely to be meaningful to most potential participants.

It is well known that patients have difficulty grasping the probabilistic notion of risk. What do their difficulties mean?

Technical notions, which experts use in their specialist fields, usually have high status. The attitude is: this is what risk really is. On that attitude, people’s difficulties mean: they have difficulty understanding risk. Therefore, we have to help them understand, by using educational tools that explain to them what we mean (we who know what risk is).

Read the rest of this discussion piece

Towards a more transparent and collaborative review process – Crosstalk (Milka Kostic | September 2017)

Posted by Admin on November 25, 2017

Transparency in peer review is the theme of Peer Review Week 2017, which starts today. Cell Press is taking part in this week’s activities by highlighting some of the things we’ve been doing to increase the transparency in peer review for our authors, reviewers, and readers.

Peer review is collaboration. Although the traditional peer review process may seem rigid and linear—with authors submitting a paper, editors inviting reviewers, reviewers submitting the comments, and editors making a decision—in practice, Cell Press editors often engage in extensive discussions with reviewers after we’ve received all the comments. This helps us understand better where reviewers are coming from, and formulate the most appropriate course of revisions for the authors. Reviewer cross-consultation has been an informal feature of our approach to peer review for some time now.

Several years ago, we decided to start experimenting with making collaborative peer review more structured and systematic. The first round of innovation in this area took place in 2014, and we’ve continued to build on those early results, which indicated that making the peer review process more collaborative has benefits and value for everyone involved and, ultimately, for the published science.

Read the rest of this discussion piece