Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

What Chatbot Bloopers Reveal About the Future of AI – WIRED (Will Knight | February 2023)

Posted by Connar Allen in Research Integrity on March 20, 2023
Keywords: Authorship, Publication ethics, Researcher responsibilities

The linked original item was posted on February 16, 2023

A graphic representation of an android's head.

Microsoft’s new chatbot for Bing has displayed some strange behavior, proving that AI is more fallible than tech companies let on.

What a difference seven days makes in the world of generative AI.

Despite the current enthusiasm about the uncanny capabilities of ChatGPT and other large language models, and setting aside the hyperbole coming from tech CEOs, we need to pause and take a hard look at what these bots are getting wrong. They hallucinate incorrect answers and insist that they are fact. They grab the worst utterances of trolls and bigots and amplify them. They harvest existing text from the internet without attributing it to its original sources. Researchers should be very wary of using the technology without careful review, editing and rewriting. Institutions should be providing guidance on these matters. AHRECS has produced a foundation document for your institution’s guidance material. It can be accessed from https://www.ahrecs.vip as part of a $350 annual subscription.

Last week Satya Nadella, Microsoft’s CEO, was gleefully telling the world that the new AI-infused Bing search engine would “make Google dance” by challenging its long-standing dominance in web search.

The new Bing uses a little thing called ChatGPT—you may have heard of it—which represents a significant leap in computers’ ability to handle language. Thanks to advances in machine learning, it essentially figured out for itself how to answer all kinds of questions by gobbling up trillions of lines of text, much of it scraped from the web.

Google did, in fact, dance to Satya’s tune by announcing Bard, its answer to ChatGPT, and promising to use the technology in its own search results. Baidu, China’s biggest search engine, said it was working on similar technology.

But Nadella might want to watch where his company’s fancy footwork is taking it.

In demos Microsoft gave last week, Bing seemed capable of using ChatGPT to offer complex and comprehensive answers to queries. It came up with an itinerary for a trip to Mexico City, generated financial summaries, offered product recommendations that collated information from numerous reviews, and offered advice on whether an item of furniture would fit into a minivan by comparing dimensions posted online.

Read the rest of this discussion piece: What Chatbot Bloopers Reveal About the Future of AI – WIRED (subscription required).

Related Reading

Turnitin announces AI detector with ‘97 per cent accuracy’ – Times Higher Education (Tom Williams | February 2023)

As scientists explore AI-written text, journals hammer out policies – Science (Jeffrey Brainard | February 2023)

A.I. Like ChatGPT Is Revealing the Insidious Disease at the Heart of Our Scientific Process – Slate (Charles Seife | January 2023)

What ChatGPT and generative AI mean for science – Nature (Chris Stokel-Walker & Richard Van Noorden | February 2023)

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge (Papers: Annette Flanagin et al. | January 2023)

Tools such as ChatGPT threaten transparent science; here are our ground rules for their use – Nature (January 2023)

ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it – The Conversation (Brian Lucey & Michael Dowling | January 2023)

Science journals ban listing of ChatGPT as co-author on papers – The Guardian (Ian Sample | January 2023)

CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism – Futurism (Jon Christian | January 2023)

Abstracts written by ChatGPT fool scientists – Nature (Holly Else | January 2023)

ChatGPT listed as author on research papers: many scientists disapprove – Nature (Chris Stokel-Walker | January 2023)

AI and Scholarly Publishing: A View from Three Experts – The Scholarly Kitchen (Anita De Waard | January 2023)

Scientists, please don’t let your chatbots grow up to be co-authors – Substack (Gary Marcus | January 2023)

Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers (Papers: Catherine A. Gao et al. | December 2022)

AI et al.: Machines Are About to Change Scientific Publishing Forever – ACS Publications (Gianluca Grimaldi & Bruno Ehrler | January 2023)

AI paper mills and image generation require a co-ordinated response from academic publishers – LSE (Rebecca Lawrence & Sabina Alam | December 2022)



