I just got a new iPhone. I didn’t need a newer version. But my old one was broken in a way that wasn’t easily fixed. So I submitted myself to the hard wheel of planned obsolescence. I’m always happy for ever-improved image quality. Otherwise, for me, it was just a need for a new, undamaged phone. But this is one of the models that, as Apple tells you very frequently, has its AI bundled into the device. Which I’m told is awesome. Or that’s what they’re telling me. A lot. And my sense generally is that Apple is the least over-the-top of the big techs in this regard.
As I’ve been using the new phone, I’ve noticed that the Apple texting app now takes suggested phrases and word completion to the next level — as in, kind of an absurd level.
Of course, for the past few years, Gmail has offered you a range of phrases to respond to simple messages. And Gmail and other programs often anticipate your next word. The one part of this I find — I’ll admit it — sometimes helpful is that I know if I make a general attempt to spell a word more or less right on a small device, the app will usually know what I meant and correct it to the word I intended, spelled correctly. Usually. I guess it’s a bit like speaking French. You get the sound of the last syllable in the ballpark and it more or less works.
In any case, for a while I have felt the machine learning-driven growth of anticipated next words or phrases trying to press itself into my intellection. Not barging in perhaps. But not filling any void. A bit aggressive.
When we write, the next word often appears in real time — just-in-time production, if you will. We don’t know just which word we’re going to write until we’re there. Current machine learning is often good enough at predicting the next single word to offer one that could work — not simply one that makes grammatical and logical sense but one of the handful of words you might actually have been about to choose. In an English sentence, a given progression of words can often only be followed by a very small number of words and still make idiomatic sense. But it is not necessarily the word I was going to choose or the one that precisely follows my meaning, the way I use words and whatever idiosyncrasies I bring to writing, good or bad. I feel this general and pervasive push to nudge me toward the mean, the most common next word.
I haven’t generally found this to be a problem because I write fluidly and generally fast. I realized in early middle age that I write in a very specific way. I talk in my head and I transcribe those words. And the way I learned this is that I transcribe in a partly phonetic way. So I will occasionally write a word that sounds almost identical — when it is spoken in a flow between other words — to another word that has a totally different meaning. (Listen and you can tell that the actual pronunciation of a word varies quite a bit depending on the words that come immediately before and after it.) They can be totally different words, ones I’d never mistake for one another when I’m thinking as opposed to transcribing. It looks weird and disconcerting to me (let alone to you) when I read it back with the part of my brain that processes information in conventional English spelling as opposed to phonemes or sounds. In any case, for this reason those next word suggestions seldom have time to register before I’ve written the next word.
But the new Apple texting app with the new hardware (Messages, I think it’s called) doesn’t just suggest next words or simple stuff like “thanks” or “sure” or “sounds great.” It seems to be trying to anticipate altogether new directions in the conversation. I only just got the phone. So maybe I’m exaggerating a bit based on some limited interactions. But it’s definitely trying to up the ante in a big way. Do I still have a role here? Can we leave it to me whether I want to suggest lunch? I don’t have any particularly deep thoughts on this other than the pretty obvious, and a general what the actual fuck? The interface seems to coat the letters of the words in a colorful shimmery frost when it wants to nudge me into the shotgun seat and just drive all on its own.
If you’ve read other pieces I’ve written about AI you can probably see where I’m going with this. The iPhone texting overhaul is of a piece with almost everything (not everything but almost) about AI at this moment, which is overwhelmingly a supply- rather than a demand-driven revolution.
They’ve got a lot of it and it’s cool and they want to ship it to market. There are a lot of things machine learning is definitely helpful with. In pre-machine learning terms, correcting my spelling is helpful. I’ve never been a good speller. But spelling has nothing to do with writing or language or communication. It’s purely technical. Proper spelling is an artifact of standardization, and a fairly recent one. Helping me bring my spelling into line with that standard is great. At a minimum, it cuts down on people annoying me by emailing to tell me how to spell a word. Same with those little squiggly lines that point out when I’ve probably left out a word; it’s hard to transcribe as fast as I speak in my head. But these other things really aren’t things I need help with or things I think many people even want help with. The direction of all this seems to be toward what I guess we might call the enslopenization of a lot of momentary or not-so-momentary communication. Things that pop into our heads, where neologisms and slang come from, the little abbreviations which are so deeply embedded in digital communications.
I could take this in a doleful direction — Silicon Valley force-feeding us AI until we lose, or get shoved out of, the momentary spaces of creativity that make us human and individual. But that’s not the way I tend to think about stuff. And more importantly, I don’t think that’s generally what’s happening here. I think it’s supply overwhelming demand, which is a very old story with generally predictable results. If there were enough demand in the places where I suspect AI is really powerful — finding patterns in massive troves of data which humans can then review, doing calculations in aerial dogfights faster than any human possibly can, and whatever other things I can’t think of — they wouldn’t be so pervasively coming up with these silly things that I think most people find annoying. That logic is mathematical in its clarity. Because this stuff isn’t cheap. I’m not a fool: I know there are a lot of things that current machine learning does that are very powerful and very valuable in concrete monetary terms. But as long as there’s this excess capacity driving the tech titans to force-feed me things that are either silly or that are just easier to do myself (how much time do I save tapping the option to add “I’m” as opposed to just typing “I’m”?), I’ll know there’s a hard economic disconnect in the works.