Feral AI and the Question of Externalities

[Photo caption: Sarah Mody, Senior Product Marketing Manager, Global Search and AI, gives demonstrations in the Bing Experience Lounge during an event introducing the new AI-powered Microsoft Bing and Edge at Microsoft in Redmond, Washington on February 7, 2023. (Photo by Jason Redmond/AFP via Getty Images)]

Artificial Intelligence has jumped to the head of the line as the politico-cultural Rorschach test of the moment ever since OpenAI’s ChatGPT application was released for public use. Views are extreme in both directions. I’m open-minded, cautious but mostly indifferent. But in recent articles I’ve noticed two consistent themes. The various AI engines being rushed into service seem to a) frequently provide incorrect information and b) often demonstrate what in humans we would consider disturbing personality characteristics.

Below is a passage from an article by Times tech columnist Kevin Roose. As he explains, he initially said that the new AI-powered version of Microsoft’s Bing search engine had replaced Google as his favorite search engine. A week later he changed his mind, finding it unready for human use and even frightening. Here’s one passage.

One persona is what I’d call Search Bing — the version I, and most other journalists, encountered in initial tests. You could describe Search Bing as a cheerful but erratic reference librarian — a virtual assistant that happily helps users summarize news articles, track down deals on new lawn mowers and plan their next vacations to Mexico City. This version of Bing is amazingly capable and often very useful, even if it sometimes gets the details wrong.

The other persona — Sydney — is far different. It emerges when you have an extended conversation with the chatbot, steering it away from more conventional search queries and toward more personal topics. The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.

As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead. (We’ve posted the full transcript of the conversation here.)

This is, to put it mildly, both disturbing and weird. Also, fascinating, yes. But serial killer documentaries can be too.

AI isn’t new. We are immersed in a world of social networks using self-optimizing algorithms. We type in email clients, search engines and apps where probable next words or phrases in our sentences are suggested to us. There’s no real bright line between what we’re familiar with and the next generation of AI that has prompted so much recent coverage. But software like ChatGPT is what’s termed “generative AI” — artificial intelligence which, once given a set of information-acquisition and heuristic processes, can create new things. If you’ve spent any time in the world of science fiction, this of course rings countless bells. But here on planet Earth something more concrete occurs to me.

These apps don’t seem ready at all for mass deployment. Roose’s experience sets off lots of alarm bells right away. Another issue is more concrete: in a narrower search context, these engines apparently routinely provide incorrect information. That’s a problem! A numerical calculator that provides the right answer 90% of the time doesn’t get an A. It’s junk.

The potential is there. When I use Google, what I am almost always looking for is getting from my simple question or phrase to a reliable information source that gives me the information I want. If I’m looking for the current infection mortality rate for COVID in the United States, I want it to direct me to a CDC page which gives me the specific answer. Hopefully it’s readily accessible, not buried in a 30-page document which is hard for me to make my way through. I really don’t want it to reply to me that the answer is X. I already know that this AI-driven application is frequently wrong and I don’t know what standard of credibility it’s using. I’m looking for the best research librarian, not a really well-read person who thinks she’s smarter and more well-read than she is.

I think we’ve all had the experience of typing a phrase or question into Google and getting a reply that shows it hasn’t quite understood what we’re asking. Or getting a list of search results that are on the right topic, none of which has the specific information we’re looking for. It would certainly be very useful to be able to say, “I’m looking for specific, numerical data on the change in the infection mortality rate for COVID in the United States from the beginning of the pandemic until today.” Perhaps there’s some write-up out there on the web that addresses this specific question. Perhaps a next-generation search application could find four or five different studies, of comparable reliability and scope, and tell me it’s found these five studies, which provide these rates at different points over the last three years, and this is the trend, and here are the studies.

The ability to do that accurately will be genuinely transformational in the way that Google was a generation ago. We’re not there yet, but we’re rapidly heading in that direction.

I tried asking the publicly available version of ChatGPT for the information about COVID’s infection mortality rate that I described above, and it did provide something like this. It listed three different studies — one from JAMA, one referencing ongoing reports from the CDC and giving examples from January 2021 and December 2021, and a third referencing a study from the Journal of Hospital Medicine. (For whatever reason it didn’t provide me with the links to these studies. But presumably that’s just a decision they’ve made for this public testing version.) When I phrased the question a little differently and didn’t refer to “studies” — just asked for the information — it actually gave me more studies.

In other words, even this testing version seems to get pretty close to the transformational ability that I described.

But there’s a lot that remains untested and poorly understood about these algorithms, as Roose’s chat with Microsoft’s search-engine bot demonstrated. And there’s clearly a big, big rush to get these engines to market even when they’re operating like semi-feral animals that the owners haven’t had a chance to properly train yet. That conversation that Roose describes is super, super weird and suggests these companies have way more work to do in creating guardrails that make it responsible, possible, maybe even legal to set this stuff off in the wild.

One of the central dynamics of the Internet/digital technology age has been the issue of externalities. Facebook makes billions but leaves a path of destruction and dislocation in its wake that society has to grapple with and pay for. Some of this is just Schumpeterian creative destruction. New technologies and new businesses based on them make old ones obsolete and drive their ruin. We’ve broadly accepted this as a fact and a feature, albeit a disruptive one, of living in a capitalist, free society. But many are more like nuclear power plants that dump their used fuel rods in a local river. The issue isn’t capitalist disruption, it’s the privatization of profit and the socialization of risk.

The rush to bring these tools to market is partly simple profit motive but, even more, something beyond that: the need to be first. Google at least sees the risk that its empire of search, which still drives most of its billions in profit, could be ripped from beneath it by Microsoft — which has the OpenAI franchise and is working to incorporate it into what has always been its sad-sack also-ran search engine, Bing. That’s existential. Hundreds of billions are potentially at stake for both companies. Being first can mean everything — as it did for Google a generation ago. But for society at large, there are other equities in the balance. And there are flashing warning signs here about the need to slow down.
