Does It Matter What AI ‘Knows’?

Sarah Mody, Senior Product Marketing Manager, Global Search and AI, gives demonstrations in the Bing Experience Lounge during an event introducing the new AI-powered Microsoft Bing and Edge at Microsoft in Redmond, Washington on February 7, 2023. (Photo by Jason Redmond/AFP via Getty Images)

I haven’t published this many reader replies in a while. But I’m doing so in this case because I find them very interesting and think some of you will too. There’s a bit more to it than that, though. These discussions help me understand with more clarity some basic debates we’re having as a society about artificial intelligence. They also help me line those debates up with my own thoughts about the nature and utility of knowledge, the validation of theories by their ability to predict experimental results, and so on.

Take special and general relativity, the theoretical understanding of the nature of mass, energy, space, gravity and time that Albert Einstein developed just over 100 years ago. Is relativity true? Even today there are aspects of quantum theory that cannot be fully reconciled with relativity, which is fundamentally a classical theory and one about the macro rather than the subatomic world. So is relativity true?

Well, using its concepts and predictions, physicists were able to build nuclear warheads, which definitely produce mammoth detonations. Atomic clocks on long-duration space flights appear to slow down to the degree relativity predicts. While things get more complicated at the subatomic level, in case after case relativity accurately predicts what happens in the real world. Fundamentally, that is how we know relativity is “real” or “true”: its predictive ability. In the contingent nature of scientific knowledge, it is real, true and accurate so long as it continues to have this predictive ability.
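To make the point about predictive ability concrete, here’s a back-of-the-envelope illustration (the numbers are approximate and chosen just for the example, not tied to any particular experiment): the velocity-related clock slowdown special relativity predicts for something moving at satellite speeds works out to a few microseconds per day, which is in the same ballpark as the corrections GPS engineering actually has to make.

```python
import math

# Back-of-the-envelope check of velocity-based time dilation, the kind of
# concrete prediction referred to above. Figures are approximate and purely
# illustrative: a GPS-type satellite moves at roughly 3.9 km/s.
c = 299_792_458.0   # speed of light, m/s
v = 3_874.0         # approximate orbital speed, m/s

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor
slowdown_per_day = (gamma - 1.0) * 86_400     # seconds "lost" per day of flight

print(f"{slowdown_per_day * 1e6:.1f} microseconds per day")
# Prints roughly 7 microseconds/day. (The gravitational effect from general
# relativity runs the other way and is larger; both have to be accounted for,
# and the measured clock offsets match the predictions.)
```

That agreement between a simple calculation and the behavior of real clocks is the whole case for calling the theory “true” in the sense I mean here.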

This gets a bit far afield from artificial intelligence. But some of these ways of thinking about scientific knowledge and predictive ability are clarifying for me on this topic too. One of the questions that comes up again and again in discussions of AI is how new “guardrails” or “fixes” can be built to deal with the fake answers and other weirdnesses that current AI produces. One group says, Look, we’ve built the car. The first cars didn’t have shock absorbers and seat belts and stuff. But we can add those. Those are just refinements. The other group says, That’s wishful thinking: when ChatGPT comes up with a completely fake but convincing answer, the problem goes well beyond being solved with tweaks. It’s fundamental to a technology that has no way to know what’s real or not. TPM Reader FP is the reader who sent me the note in an earlier post. So I asked him: where are you on this spectrum of manageable refinements vs. fundamental problems?

This was his response.

That’s indeed the key question for the field, with a wide array of proposed answers. Even among those who agree that current AI technology (basically, artificial neural networks organized and trained in a variety of ways) is in principle capable of solving the problem (and not everybody agrees with that, for instance Gary Marcus does not), we go from at one extreme folks like Ilya Sutskever (OpenAI technical co-founder), who appears to believe that just bigger networks and more (types of) data will do it, to at the other extreme Yann LeCun (Meta’s chief AI scientist, Turing award co-recipient for his contributions to AI) who has been recently arguing that the current AI recipe won’t be enough and some form of grounding in the physical world is needed. I fall somewhere in between: from decades of experience with language models, I’m well aware of where they can go wrong, independently of scale, but on the other hand I’ve been surprised by how far just scale has moved the needle in fluency and thematic coherence.

This in-between stance does not lend itself to headlines or tweet storms, but it’s what has made me push various projects on the problem of evidence: how can a language model find and deliver sound evidence for its generated claims? Notice that I’m talking here about evidence, not about what is real, and that comes from my basic belief in a social constructionist view of reality: reality is not arbitrary, being constrained by material and social relations, but it’s not accessible directly to us, only indirectly and fallibly through social processes of individual and collective experience, argumentation, and, whether we like it or not, power relations. Going back to language models, can they be constrained/taught to provide evidence for their claims in ways similar to what we demand of each other when arguing about each other’s claims? The two books I mentioned in my last email give rough sketches of what may be needed. In particular, I don’t think that a language model can solve this problem purely by scale; it will have to engage in argumentation and social learning with competing agents, whether those agents are humans or other AI systems. Going back to your points about games and AlphaGo, what I’m arguing for is that (self-)play within a suitable fitness landscape is the only way for a language model to become a cooperative, evidence-seeking interlocutor.

One thing this reminds me of is that the power, efficiency or utility of AI can only be properly understood in the context of defined use cases. For me, the only possible use of AI that I can think of for myself is to assist my research. Nate Silver tweeted this morning a copy of his interaction with ChatGPT in which he asked if a vice president had ever challenged the president they served under in a primary. ChatGPT answered that it happened when Walter Mondale challenged Jimmy Carter in 1980.

We’ll put that answer in the “needs more work” bucket.

Clearly, if ChatGPT gives me detailed answers that may either be true or total fantasies, it’s totally useless. Indeed, even if it’s fake only a small percentage of the time, it’s still useless for my purposes. But if I am understanding FP, the path forward may be to train ChatGPT to provide evidence for its claims. Interacting with human interlocutors, it may become better able to predict what evidence a human will find sufficient to sustain a claim.
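If I’m reading FP right, one crude way to picture that contract is sketched below. This is purely illustrative; the two mini “sources,” the overlap threshold and the function names are all invented for the example, and keyword overlap is a laughably blunt stand-in for real retrieval and fact-checking. But the shape is the point: a claim only gets asserted if it arrives attached to a source, and the system declines rather than improvises when it can’t find one.

```python
import re

# A purely illustrative toy -- nobody's actual system -- of the rule under
# discussion: an answer only counts if it can point at supporting evidence,
# and the system declines rather than improvise when it can't.

SOURCES = {
    "doc1": "Walter Mondale served as vice president under Jimmy Carter from 1977 to 1981.",
    "doc2": "Ted Kennedy challenged Jimmy Carter for the Democratic nomination in 1980.",
}

STOPWORDS = {"the", "a", "an", "in", "for", "of", "to", "from", "under", "as", "and"}

def content_words(text: str) -> set[str]:
    """Lowercase, strip punctuation, and drop filler words."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOPWORDS}

def find_evidence(claim: str, min_overlap: int = 5) -> list[str]:
    """Return ids of sources sharing enough content words with the claim
    (a crude stand-in for real retrieval and entailment checking)."""
    claim_words = content_words(claim)
    return [doc_id for doc_id, text in SOURCES.items()
            if len(claim_words & content_words(text)) >= min_overlap]

def answer(claim: str) -> str:
    evidence = find_evidence(claim)
    if not evidence:
        # The design choice that matters here: refusing beats confabulating.
        return f"DECLINED (no supporting source found): {claim}"
    return f"{claim} [sources: {', '.join(evidence)}]"

print(answer("Ted Kennedy challenged Jimmy Carter for the nomination in 1980"))
print(answer("Walter Mondale challenged Jimmy Carter in the 1980 primary"))
```

Run as written, the (true) Kennedy claim comes back with a citation and the (false) Mondale claim gets declined, which is the behavior I’d actually want out of a research assistant.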

For me, I need to see a reference to a book or article or some similar kind of citation. That’s validation in my line of work. Indeed, for me the citation isn’t just validation; it’s the thing I’m actually after. I wouldn’t go to ChatGPT to tell me how the world works. I would use it to assist my research, and the citations and sources are the point.

The relevant point here is that this is a potential path to keeping AI constrained to the real world, privileging what is real over what is not, and thus becoming genuinely useful as a research tool. But notably, it requires human interaction. To me the key point is that it doesn’t really matter whether the AI engine has any idea of what it’s talking about, whether it “knows” anything. It simply matters that it can consistently and reliably make claims with evidence that a human (me) will judge to align with the real world.

Of course, a darker possibility is that it will simply get better at fooling me. But again, perhaps there are guardrails to deal with that possibility … he said with a wry grin.
