Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Categories: Human Research Ethics, Research Integrity

The battle for ethical AI at the world’s biggest machine-learning conference – Nature (Elizabeth Gibney | January 2020)

Posted by saviorteam in Human Research Ethics on February 9, 2020
Keywords: Artificial Intelligence, Beneficence, Ethical review, Good practice, Institutional responsibilities, Justice, Merit and integrity, Principles, Research ethics committees, Researcher responsibilities, Respect for persons

Bias and the prospect of societal harm increasingly plague artificial-intelligence research — but it’s not clear who should be on the lookout for these problems.

Diversity and inclusion took centre stage at one of the world’s major artificial-intelligence (AI) conferences in 2018. But at last month’s Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada — a meeting with a once-controversial reputation — attention shifted to another big issue in the field: ethics.

If your institution is involved in AI, algorithm or big data research, who advises on its ethical dimensions? Given the potential for societal harm, perhaps it is time to give serious consideration to research ethics review for such work.

The focus comes as AI research increasingly deals with ethical controversies surrounding the application of its technologies — such as in predictive policing or facial recognition. Issues include tackling biases in algorithms that reflect existing patterns of discrimination in data, and avoiding harm to already vulnerable populations. “There is no such thing as a neutral tech platform,” warned Celeste Kidd, a developmental psychologist at the University of California, Berkeley, during her NeurIPS keynote talk about how algorithms can influence human beliefs. At the meeting, which hosted a record 13,000 attendees, researchers grappled with how to meaningfully address the ethical and societal implications of their work.

Ethics gap
Ethicists have long debated the impacts of AI and sought ways to use the technology for good, such as in health care. But researchers are now realizing that they need to embed ethics into the formulation of their research and understand the potential harms of algorithmic injustice, says Meredith Whittaker, an AI researcher at New York University and founder of the AI Now Institute, which seeks to understand the social implications of the technology. At the latest NeurIPS, researchers couldn’t “write, talk or think” about these systems without considering possible social harms, she says. “The question is, will the change in the conversation result in the structural change we need to actually ensure these systems don’t cause harm?”

Read the rest of this discussion piece

