What ChatGPT and generative AI mean for science – Nature (Chris Stokel-Walker & Richard Van Noorden | February 2023)

Posted by Connar Allen in Research Integrity on February 15, 2023
Keywords: Authorship, Journal, Publication ethics, Research results

The linked original item was posted on February 6, 2023.


In December, computational biologists Casey Greene and Milton Pividori embarked on an unusual experiment: they asked an assistant who was not a scientist to help them improve three of their research papers. Their assiduous aide suggested revisions to sections of documents in seconds; each manuscript took about five minutes to review. In one biology manuscript, their helper even spotted a mistake in a reference to an equation. The trial didn’t always run smoothly, but the final manuscripts were easier to read — and the fees were modest, at less than US$0.50 per document.

We know that, of late, we have been including a few items about ChatGPT in our newsfeed and resource library. The reality is that this new tool promises to change the academic writing and authorship landscape completely. We continue to believe the software cannot, on its own, write a publishable academic paper that adheres to responsible research practices. But, as this Nature piece discusses, it is already changing the writing process. We recently produced a foundation document institutions can use for their own material about such software and research outputs. It is available to our patrons at https://www.ahrecs.vip. Becoming a patron costs $350 per year; email us at patron@ahrecs.vip to discuss.

This assistant, as Greene and Pividori reported in a preprint on 23 January, is not a person but an artificial-intelligence (AI) algorithm called GPT-3, first released in 2020. It is one of the much-hyped generative AI chatbot-style tools that can churn out convincingly fluent text, whether asked to produce prose, poetry, computer code or — as in the scientists’ case — to edit research papers (see ‘How an AI chatbot edits a manuscript’ at the end of this article).

The most famous of these tools, also known as large language models, or LLMs, is ChatGPT, a version of GPT-3 that shot to fame after its release in November last year because it was made free and easily accessible. Other generative AIs can produce images, or sounds.

“I’m really impressed,” says Pividori, who works at the University of Pennsylvania in Philadelphia. “This will help us be more productive as researchers.” Other scientists say they now regularly use LLMs not only to edit manuscripts, but also to help them write or check code and to brainstorm ideas. “I use LLMs every day now,” says Hafsteinn Einarsson, a computer scientist at the University of Iceland in Reykjavik. He started with GPT-3, but has since switched to ChatGPT, which helps him to write presentation slides, student exams and coursework problems, and to convert student theses into papers. “Many people are using it as a digital secretary or assistant,” he says.

LLMs form part of search engines, code-writing assistants and even a chatbot that negotiates with other companies’ chatbots to get better prices on products. ChatGPT’s creator, OpenAI in San Francisco, California, has announced a subscription service for $20 per month, promising faster response times and priority access to new features (although its trial version remains free). And tech giant Microsoft, which had already invested in OpenAI, announced a further investment in January, reported to be around $10 billion. LLMs are destined to be incorporated into general word- and data-processing software. Generative AI’s future ubiquity in society seems assured, especially because today’s tools represent the technology in its infancy.

What ChatGPT and generative AI mean for science
Researchers are excited but apprehensive about the latest advances in artificial intelligence.

Related Reading

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge (Papers: Annette Flanagin et al. | January 2023)

Tools such as ChatGPT threaten transparent science; here are our ground rules for their use – Nature (January 2023)

ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it – The Conversation (Brian Lucey & Michael Dowling | January 2023)

Science journals ban listing of ChatGPT as co-author on papers – The Guardian (Ian Sample | January 2023)

CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism – Futurism (Jon Christian | January 2023)

Abstracts written by ChatGPT fool scientists – Nature (Holly Else | January 2023)

ChatGPT listed as author on research papers: many scientists disapprove – Nature (Chris Stokel-Walker | January 2023)

AI and Scholarly Publishing: A View from Three Experts – The Scholarly Kitchen (Anita De Waard | January 2023)

Scientists, please don’t let your chatbots grow up to be co-authors – Substack (Gary Marcus | January 2023)

Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers (Papers: Catherine A. Gao et al. | December 2022)

AI et al.: Machines Are About to Change Scientific Publishing Forever – ACS Publications (Gianluca Grimaldi & Bruno Ehrler | January 2023)
