More on AI

From TPM Reader FP

I’ve worked for decades on language models, as a researcher in academia and industry, and as a research manager whose teams brought language models into products several years before the current excitement. I’m enjoying your commentaries on the topic, so I’m writing with a bit of historical perspective and a few connections that might be helpful.

The connection between language models and games goes back a long way, arguably to Turing and Shannon. Consider a reader who is shown the words (or letters; the difference is not significant) one by one, from the books in a library, and has to bet on the next, still unseen, word. If the reader is sufficiently educated in English language and culture, they have an almost sure bet when continuing “… to be or not … ” If the text is instead “… classical concert goers prefer Beethoven to … ” there are several possible continuations, but still some predictability: the next word is more likely to be “Stockhausen” than “Cheetos,” because text from a library tends to have some thematic coherence (here, musical preferences) rather than mixing music and snacks.
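
To make the betting game concrete, here is a toy sketch in Python (my own illustration, not code from any real system): it counts which word follows which in a tiny library and then bets on continuations in proportion to those counts. Real models replace these simple counts with billions of learned parameters.

```python
from collections import Counter, defaultdict

# A toy "library"; a real one would hold billions of words.
library = (
    "classical concert goers prefer Beethoven to Stockhausen . "
    "classical concert goers prefer Bach to Stockhausen . "
    "to be or not to be ."
).split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(library, library[1:]):
    next_word_counts[current][following] += 1

def bet(context_word):
    """Bet on the next word: spread the stake across the continuations
    seen in the library, in proportion to how often each occurred."""
    counts = next_word_counts[context_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.most_common()}

print(bet("prefer"))  # {'Beethoven': 0.5, 'Bach': 0.5} -- never 'Cheetos'
```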

A modern language model is trained on a very large library (say, the web) by adjusting its internal parameters over billions of rounds of that betting game: conceptually, the training algorithm places bets on possible next words and adjusts the parameters according to how it did in each round. As it gets better at the game, its internal parameters form an increasingly detailed and predictive representation of what’s written in the library, ranging from common phrases to the implied rules of grammar to the typical topic arrangements and lines of argumentation in the library’s texts. To generate new text, the model starts with the user’s prompt, predicts the next word, appends it, and repeats, each time conditioning on everything it has generated so far.
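
The generation loop itself is simple; the hard part is the trained predictor. Here is a minimal sketch, assuming a hypothetical `model` callable that maps a context to a probability distribution over next words (real systems work on sub-word tokens, with stopping rules and sampling controls omitted here):

```python
import random

def generate(model, prompt, max_words=50):
    """Autoregressive generation: predict a next word, append it,
    and feed the longer text back in as the new context."""
    words = prompt.split()
    for _ in range(max_words):
        # `model` is a hypothetical callable returning {word: probability}
        # for the given context; in practice it is a trained neural network.
        distribution = model(words)
        candidates = list(distribution)
        weights = [distribution[w] for w in candidates]
        # Sample the next word in proportion to the model's "bets".
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

# With the toy bigram model above, the loop rambles along the library's
# well-trodden paths:
# generate(lambda words: bet(words[-1]), "classical concert")
```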

Wearing your hats of historian and journalist, you can see where this is going: what gets represented is not necessarily what is true or just, but rather what the authors that were preserved in the library wanted to communicate for their own, possibly private and unstated, purposes. What is generated from that internal representation will tend to follow the well-trodden paths in the library.

The model training algorithm has no access to the underlying social purposes and processes that yielded the library. That so much language is about language (usage norms, explanations, paraphrases, summaries, …) infuses the model with surprising implicit knowledge about language use (“write a letter to the editor in Josh Marshall’s style”), but there are no guardrails to stop the model from generating statistically reasonable but factually or socially incorrect outputs. The model does not know what words mean except to the extent — which is substantial but very incomplete — that those meanings constrain what goes with what in text.

Another way to say all of this is that these are not language models, they are (partial, passive) culture models more similar to what an archaeologist would create by examining the remains of a lost civilization than what an anthropologist develops by interacting with members of a living culture. What the companies fielding these models are trying to do now is to bridge the gulf between passive summarization and active learning through extensive human interaction, both with hired model “teachers” and end users. But the gulf is huge: so much of what makes life go is very poorly represented in the library.

Last, let me recommend two books that helped me organize my thoughts on this beyond the technicalities of language modeling: 

“The Enigma of Reason,” by Hugo Mercier and Dan Sperber.

“Language vs. Reality,” by Nick Enfield.

Thank you for making and leading TPM, and continuing to write pieces that push us to think and learn more.
