Galactica was supposed to help “organize science.” Instead, it spewed misinformation.
In the first year of the pandemic, science happened at light speed. More than 100,000 papers were published on COVID in those first 12 months — an unprecedented human effort that produced an unprecedented deluge of new information.
It would have been impossible to read and comprehend every one of those studies. No human being could.
But, in theory, Galactica could.
Galactica is an artificial intelligence developed by Meta AI (formerly known as Facebook Artificial Intelligence Research) with the intention of using machine learning to “organize science.” It’s caused a bit of a stir since a demo version was released online last week, with critics suggesting it produced pseudoscience, was overhyped and wasn’t ready for public use.
The tool is pitched as a kind of evolution of the search engine but specifically for scientific literature. Upon Galactica’s launch, the Meta AI team said it can summarize areas of research, solve math problems and write scientific code.
At first, it seems like a clever way to synthesize and disseminate scientific knowledge. Right now, if you wanted to understand the latest research on something like quantum computing, you’d probably have to read hundreds of papers on scientific literature repositories like PubMed or arXiv, and even then you’d only begin to scratch the surface.