Some Thoughts on Generative AI

[Photo: Sarah Mody, Senior Product Marketing Manager, Global Search and AI, gives demonstrations in the Bing Experience Lounge during an event introducing the new AI-powered Microsoft Bing and Edge in Redmond, Washington, on February 7, 2023. Photo by Jason Redmond/AFP via Getty Images]

I wanted to share some thoughts on AI, artificial intelligence. The part of the discussion that has my attention isn’t being ignored entirely. But it’s not at the center of the debate, and it deserves more attention.

There are lots of different uses for so-called “generative AI.” But the kinds I’m particularly focused on are the ones used to create visual art from textual prompts, write essays or even compose songs. This part of the discussion first got my attention a few months ago when an artist/illustrator friend of mine started talking on social media about how Silicon Valley’s latest disruption was set to put illustrators and artists, so often living on the financial margins already, out of business.

Now, job disruption isn’t new in this discussion. The fact that AI will put tons of people out of work is something everyone talks about. But her posts got me focused on the fact that in these creative areas, what generative AI and LLMs do is consume all the existing art, writing and musical compositions and learn how to create new works by absorbing all that information. Put more directly, the AI engine that creates those cool futuristic drawings of your face learned how to do it by consuming the work of thousands or millions of artists, and the images it now produces will make the work of those same artists and illustrators superfluous and end their ability to make a living.

The same basic framework applies to written works and musical compositions. A Silicon Valley startup creates an AI engine that hoovers up all the artwork currently on the web, learns how to replicate it and then displaces the humans who used to do it for a living. Critically, it uses that work without permission or compensation through licensing.

Let me first address one ancillary question you may have. There’s a very thin and even artificial line between “artists” and “illustrators.” It’s unlikely that any time in the near future artists who sell creative works for substantial sums are going to be replaced by machines. But people who illustrate simple book covers or do sketches of people’s faces for article bios and similar things can probably be replaced enough by machines that the demand for their work will decline precipitously. Critically, a lot of people whose life work is in the former category make a lot of their living doing work in the latter category. I say all this to make clear that I’m not making some grand distinction between creative artists with a capital-A and illustrators who are mere grunts of the art world. I’m distinguishing between kinds of work that are more or less replicable, while noting that the same people often work in both areas at the same time.

With that out of the way, let’s get back to intellectual property and Elon Musk’s new AI company hoovering up your creative work to put you out of business. Life’s hard. Displacement by technology is the oldest story in the world. But here it’s a little different. All these creative works are intellectual property. And they’re being used for this purpose without permission. So the holders of those IP rights, whether it’s the original creators or other holders of the IP, have some say in the matter. For the moment, the AI companies are just gobbling everything up because the nature of the use is new and not developed in the law. Put simply, the entire development of generative AI in the areas I’ve noted is based on the wholesale use of creative work without permission or compensation. In other words, it’s been stolen.

That’s both wrong as a matter of equity and actionable as a matter of law. The entire economics of generative AI is based on accessing and incorporating creative work for free and then selling the generative AI that is based on it.

And in case you’re wondering, yes, I’m glossing over a lot of complexity here. But big picture this is all true.

But I noticed something very interesting when I got into conversations about these intellectual property issues. And here I’ll need to take another brief detour. One of the biggest issues in intellectual property law today is the overuse of intellectual property restrictions. The whole regime of intellectual property exists to give people financial incentives to create new works. Those rights are not supposed to last forever and they’re not supposed to be used to stifle other people’s creativity. Intellectual property can become a barrier to the kind of free-for-all mixing and matching and free-ranging exploration and interpretation that is at the heart of all creative work. To a great degree that is what has happened over recent decades. We have a whole rentier class of intellectual property holders who use their rights to do just that. That in turn has spawned a whole movement trying to weaken those strictures and keep creativity and creative work vital.

In almost every respect I agree with that counter-movement and I have written on numerous particular facets of that push over the years. But when I discussed this with people in that world I got a lot of replies to the effect of, “Well, that isn’t going to help any actual artists or creative people. Whatever fees are generated are just going to go to the conglomerates who hold all the IP.” Or, “That’s just using IP law to stifle and slow down progress on AI.”

It’s always jarring to hear people you normally agree with suddenly sound like they have no idea what they’re talking about. Even more so when they truly have no idea what they’re talking about.

Others made the argument that every new artist learns from their predecessors: they create, copy and adapt, and this is no different. That argument is deeply flawed in its conceptual DNA. A person learning from artistic predecessors can’t be compared to a machine that consumes everything and produces at effectively infinite scale.

IP rights can be annoying even to the original creators. I know this. I got very frustrated with the whole thing a couple decades back when I lost a few years’ worth of columns I’d written for the New York Post (yes, believe it or not). Before TPM and in the early years of TPM I’d written a semi-regular column for the Post. But some group sued to make publishers pay residuals in the future for the online versions of articles. That led to the Post simply taking them all offline. So in exchange for 9 cents of lifetime residuals I could no longer access what I’d written.

Anyway, I get all that. But this isn’t really about getting artists or composers residuals for the destruction of their profession. It’s about exercising some potential control over that process.

One of the great potentials of AI is the idea that you can let it run through a universe of anonymized medical data to find new patterns and potential new cures for diseases. Something similar could happen with anonymized DNA data. I don’t know whether that’s actually going to happen. But if it can, that’s the kind of great social good it might be worth subordinating some other collective rights for. But I’m not clear what social good is advanced by having AI compose music or write high schoolers’ term papers for them. Maybe those things are inevitable. And if so, the displaced just join centuries’ worth of skilled craftspeople whose skills were replaced by machines. But to the extent that a proper understanding and enforcement of intellectual property laws can slow down or stop the process, that doesn’t seem like a problem to me. It seems like a positive. At the very least the gains should be shared equitably.

Nor is it at all clear-cut who the owners are. Google already scans Gmail to build patterns for ad targeting. That’s creepy, but they say it’s anonymized. And importantly, I’m pretty certain they own the rights to do whatever they want with the gazillion emails floating in the ether in Gmail. I’m not saying it’s a good thing. But for better or worse a number of the big AI players have probably already secured enough rights to do their work. But not all of them.

Let me conclude by saying most things I write about I’ve thought about for a very long time. My mind is open but my core convictions tend to be fairly settled, for better or worse. This is very different. It’s very new to me. I’m more puzzling through the different aspects of the question than making arguments I’m sure are right. So I welcome your feedback. And I suspect at least some of my arguments and ideas about this will change.

The core issue to me is that I’m an AI skeptic. I’m not absolutely against it. But I think we should know what we’re getting into and not leave the future to a few people in Silicon Valley on the notion that everything new is definitely good. If you’re really going to cure cancer with your AI … well, then full speed ahead. If you’re going to use it to create programs that churn out meh but good-enough pictures and muzak, then I think we can probably afford to take our time.
