ACN - 101321555 | ABN - 39101321555

Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


As scientists explore AI-written text, journals hammer out policies – Science (Jeffrey Brainard | February 2023)

Posted by Connar Allen in Research Integrity on March 1, 2023
Keywords: Authorship, Breaches, Institutional responsibilities, Journal, Publication ethics, Research Misconduct, Research results, Researcher responsibilities

The Linked Original Item was Posted On February 22, 2023


Many ask authors to disclose use of ChatGPT and other generative artificial intelligence

“It’s all we’ve been talking about since November,” says Patrick Franzen, publishing director for SPIE, the international society for optics and photonics. He’s referring to ChatGPT, the artificial intelligence (AI)-powered chatbot unveiled that month. In response to a prompt, ChatGPT can spin out fluent and seemingly well-informed reports, essays—and scientific manuscripts. Worried about the ethics and accuracy of such content, Franzen and managers at other journals are scrambling to protect the scholarly literature from a potential flood of manuscripts written in whole or part by computer programs.

We have explained before why we believe large language tools like ChatGPT cannot be listed as authors of research outputs.  We have also explained why researchers need to be cautious about using them and should be prepared to substantially redraft the text they produce.  We have also argued that researchers must acknowledge when they have used such a tool, and that research institutions should treat the submission of unedited ChatGPT text without acknowledgement as a serious form of research misconduct.  We have produced and uploaded the foundation for an institution's guidance material on this subject to our patrons' area here.  Institutions can gain access to this area, this resource and the growing library of resources for AUD350 per year.

Some publishers have not yet formulated policies. Most of those that have stop short of an outright ban on AI-generated text, but ask authors to disclose their use of the automated tools, as SPIE is likely to do. For now, editors and peer reviewers have few alternatives, as they lack enforcement tools: no software can yet reliably detect synthetic text.

When the online tool ChatGPT was made available for free public use, scientists were among those who flocked to try it out. (ChatGPT’s creator, the U.S.-based company OpenAI, has since limited access to subscribers.) Many reported its unprecedented and uncanny ability to create plausible-sounding text, dense with seemingly factual detail. ChatGPT and its brethren—including Google’s Bard, unveiled earlier this month for select users, and Meta’s Galactica, which was briefly available for public use in November 2022—are AI algorithms called large language models, trained on vast numbers of text samples pulled from the internet. The software identifies patterns and relationships among words, which allows the models to generate relevant responses to questions and prompts.

In some cases, the resulting text is indistinguishable from what people would write. For example, researchers who read medical journal abstracts generated by ChatGPT failed to identify one-third of them as written by machine, according to a December 2022 preprint. AI developers are expected to create even more powerful versions, including ones trained specifically on scientific literature—a prospect that has sent a shock wave through the scholarly publishing industry.


Related Reading

A.I. Like ChatGPT Is Revealing the Insidious Disease at the Heart of Our Scientific Process – Slate (Charles Seife | January 2023)

What ChatGPT and generative AI mean for science – Nature (Chris Stokel-Walker & Richard Van Noorden | February 2023)

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge (Papers: Annette Flanagin et al. | January 2023)

Tools such as ChatGPT threaten transparent science; here are our ground rules for their use – Nature (January 2023)

ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it – The Conversation (Brian Lucey & Michael Dowling | January 2023)

Science journals ban listing of ChatGPT as co-author on papers – The Guardian (Ian Sample | January 2023)

CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism – Futurism (Jon Christian | January 2023)

Abstracts written by ChatGPT fool scientists – Nature (Holly Else | January 2023)

ChatGPT listed as author on research papers: many scientists disapprove – Nature (Chris Stokel-Walker | January 2023)

AI and Scholarly Publishing: A View from Three Experts – The Scholarly Kitchen (Anita De Waard | January 2023)

Scientists, please don’t let your chatbots grow up to be co-authors – Substack (Gary Marcus | January 2023)

Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers (Papers: Catherine A. Gao et al. | December 2022)

AI et al.: Machines Are About to Change Scientific Publishing Forever – ACS Publications (Gianluca Grimaldi & Bruno Ehrler | January 2023)
