The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts (Papers: Mohammad Hosseini et al. | June 2023)

Posted by Connar Allen in Research Integrity on July 17, 2023
Keywords: Authorship, Journal, Publication ethics, Research results

The Linked Original Item was Posted On June 15, 2023 00:15:47


We believe this open access paper, published in June 2023, presents a very cogent argument for why LLMs such as ChatGPT should not be named as co-authors or acknowledged as contributors. It also argues against publishers directing authors not to use these tools. There is currently a great deal of hyperbole and hysteria about these systems; what is needed is a more sober reflection on the technology. They are tools, no smarter than our use of them. They cannot produce original content and cannot be held accountable for what they produce, but they can provide essential support for a range of researchers in distributing their ideas.

Abstract

In this article, we discuss ethical issues related to using and disclosing artificial intelligence (AI) tools, such as ChatGPT and other systems based on large language models (LLMs), to write or edit scholarly manuscripts. Some journals, such as Science, have banned the use of LLMs because of the ethical problems they raise concerning responsible authorship. We argue that this is not a reasonable response to the moral conundrums created by the use of LLMs because bans are unenforceable and would encourage undisclosed use of LLMs. Furthermore, LLMs can be useful in writing, reviewing and editing text, and promote equity in science. Others have argued that LLMs should be mentioned in the acknowledgments since they do not meet all the authorship criteria. We argue that naming LLMs as authors or mentioning them in the acknowledgments are both inappropriate forms of recognition because LLMs do not have free will and therefore cannot be held morally or legally responsible for what they do. Tools in general, and software in particular, are usually cited in-text, followed by being mentioned in the references. We provide suggestions to improve APA Style for referencing ChatGPT to specifically indicate the contributor who used LLMs (because interactions are stored on personal user accounts), the used version and model (because the same version could use different language models and generate dissimilar responses, e.g., ChatGPT May 12 Version GPT3.5 or GPT4), and the time of usage (because LLMs evolve fast and generate dissimilar responses over time). We recommend that researchers who use LLMs: (1) disclose their use in the introduction or methods section to transparently describe details such as used prompts and note which parts of the text are affected, (2) use in-text citations and references (to recognize their used applications and improve findability and indexing), and (3) record and submit their relevant interactions with LLMs as supplementary material or appendices.

Hosseini, M., Resnik, D. B., & Holmes, K. (2023). The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Research Ethics, 0(0). https://doi.org/10.1177/17470161231180449
Publisher (Open Access): https://journals.sagepub.com/doi/10.1177/17470161231180449
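
To make the paper's three recommendations concrete, a reference entry and disclosure statement along the following lines would record the contributor, the version and model, and the time of use. This is an illustrative sketch assembled from the abstract above, not wording taken from the paper, and the dates, section names and appendix label are hypothetical:

    Reference entry: OpenAI. (2023). ChatGPT (May 12 version, GPT-4) [Large language model]. https://chat.openai.com

    Disclosure (in the methods section): "The first author used ChatGPT (May 12 version, GPT-4) on 20 May 2023 to edit the Introduction for clarity and concision; all prompts and the full transcript are provided as Supplementary Appendix A."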


Related Reading

Why Nature will not allow the use of generative AI in images and video – Nature (Editorial | June 2023)

Distinguishing academic science writing from humans or ChatGPT with over 99% accuracy using off-the-shelf machine learning tools (Papers: Heather Desaire et al. | June 2023)

(Spain) A researcher who publishes a study every two days reveals the darker side of science – El Pais (Manuel Ansede | June 2023)

Using artificial intelligence with academic integrity – Ethicsblog (Pär Segerdahl | June 2023)

AI intensifies fight against ‘paper mills’ that churn out fake research – Cell (Courtney Bricker-Anthony & Roland W. Herzog | May 2023)

Researchers embracing ChatGPT are like turkeys voting for Christmas – Times Higher Education (Dirk Lindebaum | May 2023)

Using AI in peer review – Research Professional News (Mohammad Hosseini & Serge Horbach | May 2023)

A Doctor Published Several Research Papers With Breakneck Speed. ChatGPT Wrote Them All – Daily Beast (Tony Ho Tran | May 2023)

AI makes plagiarism harder to detect, argue academics – in paper written by chatbot – The Guardian (Anna Fazackerley | March 2023)

Academic Publishers Are Missing the Point on ChatGPT – The Scholarly Kitchen (Avi Staiman | March 2023)

What Chatbot Bloopers Reveal About the Future of AI – WIRED (Will Knight | February 2023)

Turnitin announces AI detector with ‘97 per cent accuracy’ – Times Higher Education (Tom Williams | February 2023)

As scientists explore AI-written text, journals hammer out policies – Science (Jeffrey Brainard | February 2023)

A.I. Like ChatGPT Is Revealing the Insidious Disease at the Heart of Our Scientific Process – Slate (Charles Seife | January 2023)

What ChatGPT and generative AI mean for science – Nature (Chris Stokel-Walker & Richard Van Noorden | February 2023)

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge (Papers: Annette Flanagin et al. | January 2023)

Tools such as ChatGPT threaten transparent science; here are our ground rules for their use – Nature (January 2023)

ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it – The Conversation (Brian Lucey & Michael Dowling | January 2023)

Science journals ban listing of ChatGPT as co-author on papers – The Guardian (Ian Sample | January 2023)

CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism – Futurism (Jon Christian | January 2023)

Abstracts written by ChatGPT fool scientists – Nature (Holly Else | January 2023)

ChatGPT listed as author on research papers: many scientists disapprove – Nature (Chris Stokel-Walker | January 2023)

AI and Scholarly Publishing: A View from Three Experts – The Scholarly Kitchen (Anita De Waard | January 2023)

Scientists, please don’t let your chatbots grow up to be co-authors – Substack (Gary Marcus | January 2023)

Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers (Papers: Catherine A. Gao et al. | December 2022)

AI paper mills and image generation require a co-ordinated response from academic publishers – LSE (Rebecca Lawrence & Sabina Alam | December 2022)
