Tools such as ChatGPT threaten transparent science; here are our ground rules for their use – Nature (January 2023)

Posted by Connar Allen in Research Integrity on February 8, 2023
Keywords: Authorship, Journal, Research integrity, Research Misconduct, Research results

The linked original item was posted on January 24, 2023


As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse.

It has been clear for several years that artificial intelligence (AI) is gaining the ability to generate fluent language, churning out sentences that are increasingly hard to distinguish from text written by people. Last year, Nature reported that some scientists were already using chatbots as research assistants — to help organize their thinking, generate feedback on their work, assist with writing code and summarize research literature (Nature 611, 192–193; 2022).

First, let’s set aside the science-fiction nightmares of machines taking over the world and the hyperbole around ChatGPT. The natural language processing (NLP) and AI systems currently available do not possess artificial general intelligence (AGI). But their output is becoming increasingly hard to distinguish from human writing, and it is produced by harvesting text from around the web without attribution. It is dishonest for a researcher to use a system such as ChatGPT to produce an output and claim it as their own work. Without careful review, editing and paraphrasing, the resulting text is likely to contain serious errors and plagiarism, and to be impossible for a future researcher to replicate.

But the release of the AI chatbot ChatGPT in November has brought the capabilities of such tools, known as large language models (LLMs), to a mass audience. Its developers, OpenAI in San Francisco, California, have made the chatbot free to use and easily accessible for people who don’t have technical expertise. Millions are using it, and the result has been an explosion of fun and sometimes frightening writing experiments that have turbocharged the growing excitement and consternation about these tools.

ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code. It has produced research abstracts good enough that scientists found it hard to spot that a computer had written them. Worryingly for society, it could also make spam, ransomware and other malicious outputs easier to produce. Although OpenAI has tried to put guard rails on what the chatbot will do, users are already finding ways around them.
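For readers who have only seen these tools through the chat interface, the sketch below shows what a typical "summarize this abstract" call looks like when made programmatically. It is purely illustrative and not part of the Nature editorial or its ground rules: it assumes the official OpenAI Python client, an API key in the OPENAI_API_KEY environment variable, and a model name and prompt chosen only for the example.

```python
# A minimal sketch (illustrative only, not from the Nature piece) of the
# kind of "research assistant" use described above: asking an LLM to
# summarize a research abstract. Assumes the official OpenAI Python
# client and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstract = """Large language models can generate fluent scientific
prose, raising questions about authorship and attribution."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model works
    messages=[
        {"role": "system",
         "content": "Summarize research abstracts in two plain sentences."},
        {"role": "user", "content": abstract},
    ],
)

# The generated summary. Under the ground rules discussed here, any use
# of such output in a manuscript should be disclosed, reviewed and edited.
print(response.choices[0].message.content)
```

Even a call this simple raises the disclosure questions at the heart of this piece: the output is fluent, unattributed and indistinguishable from human prose unless its use is declared.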

The big worry in the research community is that students and scientists could deceitfully pass off LLM-written text as their own, or use LLMs in a simplistic fashion (such as to conduct an incomplete literature review) and produce work that is unreliable. Several preprints and published articles have already credited ChatGPT with formal authorship.


Related Reading

ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it – The Conversation (Brian Lucy & Michael Dowling | January 2023)

Science journals ban listing of ChatGPT as co-author on papers – The Guardian (Ian Sample | January 2023)

CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism – Futurism (Jon Christian | January 2023)

Abstracts written by ChatGPT fool scientists – Nature (Holly Else | January 2023)

ChatGPT listed as author on research papers: many scientists disapprove – Nature (Chris Stokel-Walker | January 2023)

AI and Scholarly Publishing: A View from Three Experts – The Scholarly Kitchen (Anita De Waard | January 2023)

Scientists, please don’t let your chatbots grow up to be co-authors – Substack (Gary Marcus | January 2023)

Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers (Papers: Catherine A. Gao et al. | December 2022)

AI et al.: Machines Are About to Change Scientific Publishing Forever – ACS Publications (Gianluca Grimaldi & Bruno Ehrler | January 2023)
