Generative AI poses fresh challenges for academic publishers tackling fraud in science papers, as the technology shows the potential to fool human peer review.
We may already be in that dystopian future; if not, we surely cannot be far from artificial intelligence language models empowering dishonest researchers to turn text instructions into false but entirely believable images that are almost impossible to detect. Seeing may no longer be believing, and scientific proof may end up being something we all come to doubt.
These AI models can already produce lifelike pictures of human faces, objects, and scenes, and it's only a matter of time before they get good at creating convincing scientific images and data, too. Text-to-image models are now widely accessible and cheap to use, and they could help dodgy scientists forge results and publish sham research more easily.
Image manipulation is already a top concern for academic publishers, as it has lately become the most common form of scientific misconduct. Authors can use all sorts of tricks, such as flipping, rotating, or cropping parts of the same image, to fake findings. Editors fooled into believing the results are real will publish the work.