Feeds
AHRECS agrees with this move by academic publishers. As we understand it, the artificial intelligence systems currently available, such as ChatGPT, are not artificial general intelligence (AGI). As we have observed recently, the current systems do not genuinely understand their interactions with humans, or even the text they produce. As such, they cannot take responsibility for what they produce, nor can they be held accountable when they breach responsible research standards (such as reusing text written by others without attribution). Institutions must also provide useful guidance in this space. AHRECS has published a foundation guidance document relating to ChatGPT and research outputs to our subscribers' area (ahrecs.vip). This document is licensed under Creative Commons Attribution. A subscription costs an institution $350 per year.
This is an excellent move in the Netherlands, one that funding bodies and institutions in other jurisdictions should emulate. The Dutch helpline recognises that researchers who are trolled endure mental suffering and often well-founded safety concerns. Providing the helpline also recognises that important research will not be conducted if researchers worry that they will be targeted by trolls.
This story and the details it alleges highlight the harm that can be done to responsible research practice when the utterances of a populist leader influence the decisions of a country's peak research funding body. Politics must not be allowed to infect research.
The relationship between study teams, recruiters and research sites can sometimes feel like dating or a one-night stand, where the interest of the pursuer wanes once they have got what they want. It can leave the site feeling like the situation depicted in this Don Mayne cartoon.
This piece and the alleged misuse of ChatGPT are a 'good' demonstration that this natural language processing (NLP) service should not be used unedited for outputs of any kind. It really should only be used as a tool to create an early draft, to be carefully reviewed and edited by a human. The reputation of institutions and publications can be seriously tarnished by the careless use of this technology.
Continuing our recent discussion of ChatGPT, natural language processing (NLP) and artificial intelligence in research outputs, this piece looks at the degree to which abstracts written by a machine are fooling academics into believing they were written by humans. The pace of this technology's development is startling. Research institutions, publishers, funding bodies and learned societies need to establish policies and guidance material, and conduct professional development in this area. AHRECS is currently working on a guidance document that we will post to the subscribers' area in the next few days.