No one will dispute that AI (Artificial Intelligence) needs to “eat” data, preferably in massive quantities, to develop. The better the data quality, the better the result. When thinking about the potential applications of AI in scholarly communications as related to research artifacts, how will that work? How might AI be trained on high quality, vetted information? How are the benefits and costs distributed?
The ‘chefs’ at Scholarly Kitchen reflect on the role artificial intelligence could play in scholarly communications. #SpoilerAlert: two things we need first are good, reliable data and safeguards to ensure that deep biases in current academic processes aren’t enshrined (and made invisible) in the inscrutable black box of code. We have included links to a collection of related items.
Judy Luther: In scholarly communications there is an expanding body of openly available content from preprint servers, such as arXiv and bioRxiv, and from Open Access journals and books. In addition, there is a growing variety of formats, including datasets and code, open peer review, media, and other elements of the scholarly research cycle. This volume of content provides a rich resource to be mined by all stakeholders as well as a broader audience.