Saying ‘no’ to this kind of visual content is a question of research integrity, consent, privacy and intellectual-property protection.
Should Nature allow generative artificial intelligence (AI) to be used in the creation of images and videos? This journal has been discussing, debating and consulting on this question for several months following the explosion of content created using generative AI tools such as ChatGPT and Midjourney, and the rapid increase in these platforms’ capabilities.
Given the degree to which artificial intelligence can create credible images, fabricate and distort images, and replicate existing work without attribution, the position taken by Nature is understandable. But as the technology improves, such images are likely to become increasingly hard to detect, requiring honest self-disclosure or a public-interest disclosure to identify. We may already be at the point where such images are making it into publication irrespective of this policy.
Artists, filmmakers, illustrators and photographers whom we commission and work with will be asked to confirm that none of the work they submit has been generated or augmented using generative AI (see go.nature.com/3c5vrtm).
Why are we disallowing the use of generative AI in visual content? Ultimately, it is a question of integrity. The process of publishing — as far as both science and art are concerned — is underpinned by a shared commitment to integrity. That includes transparency. As researchers, editors and publishers, we all need to know the sources of data and images, so that these can be verified as accurate and true. Existing generative AI tools do not provide access to their sources, so such verification cannot happen.
Then there’s attribution: when existing work is used or cited, it must be attributed. This is a core principle of science and art, and generative AI tools do not conform to this expectation.