In the ongoing tragedy of the expulsion and massacre of the Rohingya from Myanmar, there’s a small part of the story which overlaps with our domestic discussion of the way social media platforms have been used to sow propaganda, hate speech and fake news, and have even become the tools of foreign intelligence organizations. It turns out that Facebook has been one of the primary channels for organizing the expulsion and for the incitement of religious and ethnic hatred and vigilantism that is a key part of it.
Much of Facebook’s central role, apparently, is tied to the fact that until very recently Myanmar had little modern media infrastructure. Then cell phone use grew rapidly. So Facebook is a key way many people get their news. It’s a singular or near-singular source of news to a far greater degree than in the developed world or in many other parts of the developing world.
I’ve thought about this from a number of perspectives. Earlier genocides, expulsions and pogroms found their own ways of inciting violence and spreading memes of hatred. There were phones, radio, newspapers, television. In this sense, it’s not fair to say it’s Facebook or a Facebook problem. Facebook is just the latest media and communications medium. We hardly blame the technology of the book for spreading anti-Semitism via the notorious Protocols of the Elders of Zion, even though books, mass literacy and the distribution channels books travel over made its mass distribution and absorption possible.
But of course, it’s not that simple. Social media platforms have distinct features that earlier communications media did not. The interactive nature of the medium and the collection of data, which is then run through algorithms and artificial intelligence, create something different. All social media platforms are engineered with two basic goals: maximize the time you spend on the platform and make advertising as effective, and thus as lucrative, as possible. This means that social media can never be simply a common carrier, a distribution technology that has no substantial influence over the nature of the communication that travels over it. It’s not like phones or broadcast spectrum. I grant that this isn’t a perfect or complete distinction. There’s some blurring at the edges. But it’s a substantial difference, and it deprives social media platforms of the kind of hands-off logic that would make it ridiculous to say phones are bad or the phone company is responsible if the planning for a mass murder was carried out over the phone.
Perhaps the most obvious analogy is staring me in the face: the Internet. Clearly, the Internet has been a boon for all manner of hate speech, propaganda, false information and so forth. But the same could be, and rightly was, said about books. Media is dangerous. But the Internet doesn’t ‘do’ anything more than make the distribution of information more efficient and radically lower the formal, informal and financial barriers to entry that used to stand in the way of various marginalized ideas. Social media can never plead innocence like this, because the platforms are designed to addict you and convince you of things.
I raise this because I’ve been giving a lot of thought to what feasible and reasonable solutions there may be to what we saw during the 2016 election, or more generally to bad actors using these platforms to traffic in propaganda, hate speech and the like. I spoke to a number of people who are highly knowledgeable about the financial, strategic, technological and data science aspects of the question. But the biggest thing I learned wasn’t complex or technical at all. It was fairly commonsensical and something I was chagrined I hadn’t thought of before.
If the question is what social media platforms can do to protect against government-backed subversion campaigns like the one we saw during the 2016 campaign, the best answer is: we don’t know. And we don’t know for a simple reason: they haven’t tried.
Let me be clear what I mean by this. Modern search technology is vastly more complex and effective than what we used in the first years of the Internet. It keeps getting better because owning search, as Google does, is a lock on unimaginable wealth. Think of image pattern recognition and a host of other uses of data and artificial intelligence that we now take for granted, or which operate in the background and which we rely on without knowing about them. Some of these development paths are linear, others exponential. Some an engineer in 1985 would have known were possible, albeit with years of work; others would have been unimaginable. The point is straightforward: the mass collection of data, harnessed to modern computing power and the chance to amass unimaginable wealth, has spurred vast technological innovation.
Nothing comparable has been directed at detecting and countering these kinds of subversion campaigns. Why this doesn’t happen is obvious. It’s not part of the business model. It’s not tied to profit or even relevant to profit. This isn’t an attack on the profit motive. Every business works that way. But every form of economic activity can create negative externalities. Industrial production creates pollution. Transportation creates noise and a certain number of injuries. One key function of government and society is to make sure negative externalities get figured into the cost of economic activity. Otherwise, the damage goes unmitigated or society at large picks up the costs. Profits are privatized and costs are socialized, as the phrase goes. Whether this requires government regulation I’ll leave to another discussion. But ‘social media’ is distinct from other media distribution systems in that it can never be divorced from its content. It’s not like phones or the Internet in general or broadcast spectrum. Yet these negative impacts have gotten surprisingly little attention until very recently. And how effectively they can be mitigated is simply unknown, because the people who run the social networks really haven’t tried.