Meta trained an AI on over 48 million science papers, shut it down after 2 days

breezyscroll

November 21, 2022

On November 15, Meta AI and Papers with Code released Galactica, an open-source large language model for science. Its stated purpose was to tame the flood of scientific information: to “store, combine, and reason about scientific knowledge,” in Galactica’s own description. Unfortunately, the online tool exhibited several problems and was taken down after just two days.

Had Galactica organized scientific material as it claimed to, it could have been a huge benefit to researchers. But despite the promise of a “high-quality and highly curated algorithm,” the tool delivered inaccurate results rather than ones that “benefit the scientific community.” The inaccuracy of Meta’s AI model raises questions about the reliability of AI-driven platforms, particularly in the scientific field.

Galactica was trained on over 48 million scientific papers

According to the official website, Galactica was a “powerful large language model (LLM) trained on over 48 million papers, textbooks, reference material, compounds, proteins, and other sources of scientific knowledge. You can use it to explore the literature, ask scientific questions, write scientific code, and much more.” The site also claimed that Galactica outperformed GPT-3, one of the most popular LLMs, scoring 68.2% versus GPT-3’s 49.0% on technical knowledge probes.

It gave incorrect answers and failed at kindergarten-level math

The model gave wrong answers and stumbled over arithmetic a kindergartner could handle. When asked to summarize the work of Julian Togelius, an associate professor at NYU, Galactica misspelled his name and failed to give an accurate overview of his research. Gary Marcus, a psychology professor at NYU, asserted that Galactica misrepresented him in 85% of cases.

Wikipedia’s entry on “Hanlon’s razor” states something different

Galactica’s inability to produce an accurate wiki entry was demonstrated in a tweet by Bergstrom. The actual Wikipedia article on ‘Hanlon’s razor’ reads: “Hanlon’s razor is an adage or rule of thumb that states never attribute to malice that which is adequately explained by stupidity. It is a philosophical razor that suggests a way of eliminating unlikely explanations for human behavior.”

“Outputs may be unreliable. Language Models are prone to hallucinate text”

On the official website, there is a bold warning that reads: “Never follow advice from a language model without verification.” Galactica even displayed the disclaimer “Outputs may be unreliable. Language Models are prone to hallucinate text,” along with every result.
