A few days back, I got an email from TPM Reader JL asking me not to give in to the Luddite or reflexively anti-AI tendency he sensed I might have. It was a very interesting note and led to an interesting exchange, because JL is far from an AI maximalist or promoter and our views ended up not being that far apart. I explained at greater length that my general skepticism toward AI is based on four interrelated points.
The first is that even very positive technological revolutions (say, the Industrial Revolution) end up hurting a lot of people. Second, this revolution is coming to us under the guidance and ownership of tech billionaires who are increasingly wedded to and driven by predatory and illiberal ideologies. Both those facts make me think that we should approach every new AI development from a posture of skepticism, even if some or most may end up being positive. Trust but verify and all that. Point three is closely related to point two: AI is being built, even more than most of us realize, by consuming everyone else’s creative work with no compensation. It’s less “thought” than more and more refined statistical associations between different words and word patterns. And that’s to build products that will be privately owned and sold back to us. Again, predatory and illiberal … in important ways likely illegal.
My fourth point is a bit different. It's less skepticism in the sense expressed above than a real question about what AI even is and the ideas of the people creating it. Let me explain this one in a bit more detail. If you read the business and political press, you will frequently hear that AI could reach "general intelligence" or even "the singularity" within the next year or so. If you listen to AI's creators and promoters, you will also hear that AI achieving "consciousness" or "self-awareness" is something that will, or at least may, happen in the near future. And the argument is usually that at a certain level of computational intensity these programs will become conscious. Just where that line sits is uncertain, they say. And just how the threshold is crossed is uncertain. But, fundamentally, it's a quantitative threshold, a mix of hardware power and heuristic subtlety.
That basic assumption strikes me as absurd. And the assumption is so prevalent and so absurd that it makes me question just what the people behind AI think they're doing and what exactly they think "thought" actually is. This admittedly starts to get pretty abstract. And maybe it doesn't have much to do with how many people will lose their jobs over the next five years. But maybe it does. Because this is where my not-very-knowledgeable but intuitive reactions to AI meet up with JL's. (And to give you a bit of context, JL doesn't work with AI but is an academic in an adjacent field.) JL's general point is that AI is pretty powerful, will probably help us do a lot of interesting and even important things, but that its ability to operate without human oversight remains quite limited, even on fairly basic tasks and especially on ones it hasn't been specifically trained for. JL thinks that what we're likely to see any time in the near future is AI magnifying what individual humans can do rather than replacing them. JL is also skeptical that the future of AI is really to be found in the super-resource-intensive AI we know now from Silicon Valley. They pointed to a team based in Oxford, UK, which figured out how to run a modern AI model on a late-1990s computer with a Pentium processor.
I share all of this to give some sense of my current thoughts about AI, but also because it informed my reaction to the news over the weekend about RFK Jr.'s HHS releasing a much-ballyhooed report apparently produced with artificial intelligence (NOTUS got the goods on this one), complete with the standard mix of botched citations and non-existent publications cited as sources. There's actually a lawyer who is compiling a database of legal briefs that have so far been caught with fake citations apparently generated by AI.
How often is this happening? It's important to remember that the RFK report example has a particular context. It's hard to imagine any document getting more scrutiny than a report purporting to provide scientifically rigorous backing for all of RFK's different … well, bullshit. And it wasn't one of the bigs that found the problems. It was NOTUS, a small newcomer publication. I doubt they were looking for evidence that AI was involved. I strongly suspect they were simply trying to see what the arguments were and what the evidence was. So how many other examples of this are out there? I suspect quite a few.
Who takes the time to really dig into most reports? Who actually tracks down every individual citation? This is even more the case when you keep in mind (and this is so important to remember) that AI's "thought" process is based on producing answers that appear valid and credible to humans. So the fakes will be packaged in unremarkable and inconspicuous ways. How many legal briefs are getting produced like this?
We all know, and most of us have experienced, the way AI is already clogging up a lot of the internet. But that's stuff meant to get picked up by search engines: write-ups on how to change a tire, the net worth of Brad Pitt, recipes. Sort of who cares? But there's a lot of evidence that AI work product is creeping into "important" areas too. So we're already living in the AI age, but what it's doing is less replacing humans or operating at a human level than seeding the information world with a new generation of slop: superficially credible but falsity-laced content.
This is far from a novel insight. It's one of the basic critiques of AI. It's also true that it's easy but misleading to dismiss a technology based on its earliest iterations. 1990s-era Internet video was basically a joke. But eventually there was Netflix and cord-cutting. Things change. But this does seem to be where we are right now with AI. And for the moment the question seems to be less whether AI can match or exceed human abilities than how close it can get, and how tempting its use is for the purposes of productivity gains and cost savings.
In a way, we as a global society are already in the grip of a primitive form of machine learning: the algorithms designed to maximize engagement on social media platforms. They're very primitive by the standards of today's AI. But they've already upended our society in basic and fundamental ways. And despite our knowledge of that upending, we're basically incapable of freeing ourselves from them.
So like I said, a posture of consistent skepticism.