OpenAI’s Citation Problem

Generative AI is very bad at telling users where it got its information.


This is Atlantic Intelligence, a limited-run series in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

Technology companies have been eager to sell a vision of generative AI as the future of, well, everything. For instance: “We’re building these systems that are going to be everywhere—in your home, in your educational environment, in your work environment, and maybe, you know, when you’re having fun,” Mira Murati, OpenAI’s chief technology officer, told The Wall Street Journal late last year.

“These systems” are ultimately defined by how they present information. The magic of ChatGPT is that it speaks in humanlike language, owing to its ability to match and build upon patterns in the huge quantities of writing it’s been trained on. But don’t be fooled by the appearance of cogency: When asked to find specific bits of information or cite their sources, generative-AI programs struggle mightily.

In an investigation published in The Atlantic this week, my colleague Matteo Wong tried a range of searches with various AI tools to see how well they performed at providing citations. None of them was perfect. OpenAI’s GPT-4o was especially concerning, given that publishers have signed deals with the company that will allow their content to be used as training data for future iterations of the machine: “Sometimes links were missing, or went to the wrong page on the right site, or just didn’t take me anywhere at all. Frequently, the citations were to news aggregators or publications that had summarized journalism published originally by OpenAI partners such as The Atlantic and New York.” (The Atlantic has a corporate partnership with OpenAI. The editorial division of The Atlantic operates independently from the business division.)

Experts told Matteo that these problems might never be 100 percent fixed, despite promises that improvements are on the way. As generative AI spreads “everywhere,” we may find that it has done so at the expense of our ability to easily find good information on the web.


A book with two sticky tabs pointing in opposite directions. Illustration by Ben Kothe / The Atlantic. Sources: csa-archive; kolotuschenko / Getty.

Generative AI Can’t Cite Its Sources

By Matteo Wong

AI companies are envisioning a future in which their platforms are central to how all internet users find information. Among OpenAI’s promises is that, in the future, ChatGPT and other products will link and give credit—and drive readers—to media partners’ websites. In theory, OpenAI could improve readership at a time when other distribution channels—Facebook and Google, mainly—are cratering. But it is unclear whether OpenAI, Perplexity, or any other generative-AI company will be able to create products that consistently and accurately cite their sources—let alone drive any audiences to original sources such as news outlets. For now, they struggle to do either reliably.

Read the full article.


P.S.

If you, like me, find yourself occasionally unmoored from reality as the stranger questions about AI worm into your brain (or perhaps as you watch two presidential candidates sassing each other about their golf game), I highly recommend coming back down to Earth with my colleague Alan Taylor’s roundup of the photos of the week.

— Damon

