ACN - 101321555 | ABN - 39101321555

Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


Are HIT-backed AI Research Integrity Solutions the Need of the Hour? – The Scholarly Kitchen (Minhaj Rais | August 2023)

Posted by Connar Allen in Research Integrity on September 5, 2023
Keywords: Authorship, Good practice, Institutional responsibilities, Journal, Publication ethics, Researcher responsibilities

The Linked Original Item was Posted On August 3, 2023


Ethan Mollick, Associate Professor at the Wharton School, recently concluded in conversation with the CEOs of Turnitin and GPTZero that, “There is no tool that can reliably detect ChatGPT-4/ Bing/ Bard writing. None!” Even if some of the AI-detection tools do develop the capability to detect AI writing, users don’t need sophisticated tools to pass ChatGPT detectors with flying colors — making minor changes to AI-generated text usually does the trick.

This thoughtful Scholarly Kitchen piece examines how ChatGPT has penetrated scholarly writing and publishing, what the consequences are, and whether detection can be made useful. It offers exactly the kind of sober reflection on the stakes and risks, together with practical strategies for responding, that the situation requires.

The fact that there are dozens of paraphrasing tools out there that can help users rephrase entire papers within seconds has provided further ammunition to bad actors looking to plagiarize and churn out fraudulent manuscripts at a faster pace. While the challenges for publishers battling ills such as plagiarism and paper mills continue to become more complicated, I posit that human intervention (human intelligence tasks aka HITs in MTurk parlance) coupled with the prowess of AI detection tools could present a viable way forward.

In this article, we explore how HITs and not simply more AI tools (to detect the use of generative AI tools) could be the way forward as a reliable and scalable solution for maintaining research integrity within the scholarly record.
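As a rough illustration of the HIT-backed triage the author proposes, the sketch below routes only ambiguous detector results to human reviewers. Everything here is an assumption for illustration: the manuscript records, the detector scores, and the thresholds are made up, and no real AI-detection API is being called.

```python
# Hypothetical sketch of combining AI-detection scores with human
# intelligence tasks (HITs): clear low-scoring manuscripts, queue
# ambiguous ones for human review, escalate very high scores.
from dataclasses import dataclass

@dataclass
class Manuscript:
    title: str
    ai_score: float  # 0.0-1.0 score from some AI-text detector (assumed)

def triage(manuscripts, review_threshold=0.5, escalate_threshold=0.9):
    """Route each manuscript by detector score so human reviewers
    only see the ambiguous middle band, keeping the workload scalable."""
    cleared, hit_queue, escalated = [], [], []
    for m in manuscripts:
        if m.ai_score >= escalate_threshold:
            escalated.append(m)   # strong signal: straight to an editor
        elif m.ai_score >= review_threshold:
            hit_queue.append(m)   # ambiguous: send to human reviewers
        else:
            cleared.append(m)     # weak signal: proceed normally
    return cleared, hit_queue, escalated

batch = [Manuscript("A", 0.12), Manuscript("B", 0.63), Manuscript("C", 0.95)]
cleared, hit_queue, escalated = triage(batch)
```

The design point is simply that detector output is treated as a triage signal rather than a verdict, which is the role the article argues AI tools should play alongside human judgment.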

ChatGPT has deeply penetrated the scholarly ecosystem

While ChatGPT continues to witness unprecedented usage, stakeholders across industry verticals are not only excited, but also worried about how it could impact businesses, education, creative (including research) output, and much more. There is an increasing realization that ChatGPT in itself is not going to be a “solve-all” tool; hallucinations, inherent biases, and the inability to assess the quality and validity of research are just some of the shortcomings that limit the free use of LLMs.


Related Reading

Artificial-intelligence search engines wrangle academic literature – Nature (Amanda Heidt | August 2023)

Publisher blacklists authors after preprint cites made-up studies – Retraction Watch (Ivan Oransky | April 2023)

ChatGPT can write a paper in an hour — but there are downsides – Nature (Gemma Conroy | July 2023)

The Intelligence Revolution: What’s Happening and What’s to Come in Generative AI – Scholarly Kitchen (Hong Zhou | July 2023)

Will AI liberate research from institutional bean-counting? – Times Higher Education (Martyn Hammersley | June 2023)

The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts (Papers: Mohammad Hosseini et al. | June 2023)

Are Australian Research Council reports being written by ChatGPT? – The Guardian (Donna Lu | July 2023)

Why Nature will not allow the use of generative AI in images and video – Nature (Editorial | June 2023)

Distinguishing academic science writing from humans or ChatGPT with over 99% accuracy using off-the-shelf machine learning tools (Papers: Heather Desaire et al. | June 2023)

(Spain) A researcher who publishes a study every two days reveals the darker side of science – El Pais (Manuel Ansede | June 2023)

Using artificial intelligence with academic integrity – Ethicsblog (Pär Segerdahl | June 2023)

AI intensifies fight against ‘paper mills’ that churn out fake research – Cell (Courtney Bricker-Anthony & Roland W. Herzog | May 2023)

Researchers embracing ChatGPT are like turkeys voting for Christmas – Times Higher Education (Dirk Lindebaum | May 2023)

A Doctor Published Several Research Papers With Breakneck Speed. ChatGPT Wrote Them All – Daily Beast (Tony Ho Tran | May 2023)

AI makes plagiarism harder to detect, argue academics – in paper written by chatbot – The Guardian (Anna Fazackerley | March 2023)

Academic Publishers Are Missing the Point on ChatGPT – The Scholarly Kitchen (Avi Staiman | March 2023)

What Chatbot Bloopers Reveal About the Future of AI – WIRED (Will Knight | February 2023)

As scientists explore AI-written text, journals hammer out policies – Science (Jeffrey Brainard | February 2023)

A.I. Like ChatGPT Is Revealing the Insidious Disease at the Heart of Our Scientific Process – Slate (Charles Seife | January 2023)

What ChatGPT and generative AI mean for science – Nature (Chris Stokel-Walker & Richard Van Noorden | February 2023)

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge (Papers: Annette Flanagin et al. | January 2023)

Using AI to write scholarly publications (Papers: Mohammad Hosseini et al. | January 2023)

Tools such as ChatGPT threaten transparent science; here are our ground rules for their use – Nature (January 2023)

ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it – The Conversation (Brian Lucey & Michael Dowling | January 2023)

Science journals ban listing of ChatGPT as co-author on papers – The Guardian (Ian Sample | January 2023)

CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism – Futurism (Jon Christian | January 2023)

Abstracts written by ChatGPT fool scientists – Nature (Holly Else | January 2023)

ChatGPT listed as author on research papers: many scientists disapprove – Nature (Chris Stokel-Walker | January 2023)

AI and Scholarly Publishing: A View from Three Experts – The Scholarly Kitchen (Anita De Waard | January 2023)

Scientists, please don’t let your chatbots grow up to be co-authors – Substack (Gary Marcus | January 2023)

Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers (Papers: Catherine A. Gao et al. | December 2022)

AI et al.: Machines Are About to Change Scientific Publishing Forever – ACS Publications (Gianluca Grimaldi & Bruno Ehrler | January 2023)

AI paper mills and image generation require a co-ordinated response from academic publishers – LSE (Rebecca Lawrence & Sabina Alam | December 2022)
