Recent weeks have clarified a few things about Facebook and the broad campaign of Russian interference in the 2016 election. We know a Kremlin-affiliated troll farm spent $100,000 on divisive political ads on the platform, and that Facebook has located and removed a huge number of ads and posts related to the campaign.
The Daily Beast has reported that Facebook won’t share anything but the most cursory assessment of the damage from those ads with the public. The company “declined to commit to releasing information about Russian government-backed Facebook posts, groups, and paid advertisements to the users who encountered them,” according to the Beast.
The obvious question is whether Facebook will do anything more to inform or protect users affected by that abuse of its platform, or exercise a greater measure of control over the material that appears on it. Experts say the company is unlikely to be compelled to do either unless public pressure reaches critical mass.
According to Aaron Mackey, staff attorney at the Electronic Frontier Foundation, there’s not much in the law that can force Facebook to disclose, well, anything.
“The short answer is that as a private entity, there’s no requirement that they provide any additional information,” Mackey said. Facebook did not respond to a request for comment.
Russian operatives manipulating Facebook’s largely automated ad-buying platform were dealing in a kind of information that is entirely unprotected by regulation, he said. That makes the case different from, say, the credit-monitoring service Equifax suffering a massive breach of user data, which is covered by breach-notification laws.
A less obvious but perhaps more serious concern is whether the company is even capable of monitoring its 1.3 billion daily active users well enough to stop such sophisticated, clandestine political influence campaigns. Until its disclosure last week about the $100,000 in political ads bought by Russians, Facebook had publicly maintained it had “no evidence” of such buys.
“[Facebook] is something we’ve never seen in history before,” said Rebecca MacKinnon, director of the New America Foundation’s Ranking Digital Rights program. The platform, she said, is a leviathan too large to behave like any other business, one that to some extent forms its own online zone of lawlessness.
“There’s no precedent,” she said. “How do you govern this thing? How do you hold it accountable? What are the expectations that should be placed on Facebook to be a responsible corporate citizen?”
The company has responded to the pressure of public shaming before, MacKinnon noted. In 2013, when its competitors at Twitter and Google were publishing transparency reports itemizing their dealings with law enforcement, Facebook was a notable holdout—until Edward Snowden’s name started making headlines. MacKinnon credits Facebook’s change of heart to Snowden’s revelation of tech industry cooperation with spy agencies.
“They don’t have a business if they don’t have a basic level of trust,” she said. “I think there are a lot of users with healthy cynicism but if trust drops below a certain level the bottom line is very much affected.”
Facebook’s algorithms and advertising metrics have long been closely held, though details of its operations do leak out from time to time. Facebook flatly denied the very existence of an editorial team curating its “trending topics” module until The Guardian published the editorial guidelines for that team. And when Facebook fired that secret team of editors in the wake of accusations by Gizmodo that they exercised bias in the topics they promoted, the module went on to be operated solely by an algorithm, and promptly went nuts.
According to a ProPublica story published Thursday, Facebook’s ad business is automated to such a degree that it built bespoke ad targeting for Nazis: the company’s automatically populated interest categories, which it sold to advertisers, included “jew haters” and people who’d listed their employer as the Nazi Party.
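The mechanism ProPublica described is easy to sketch. The following is a minimal, hypothetical illustration rather than Facebook’s actual code (the function name, fields, and threshold are all invented here): if targeting categories are generated by aggregating whatever users type into free-text profile fields, with no blocklist or human review in the loop, then any string entered by enough users, however vile, becomes a sellable category.

```python
from collections import Counter

# Hypothetical self-reported profiles; "field_of_study" and "employer"
# stand in for free-text fields that users fill in themselves.
profiles = [
    {"field_of_study": "jew hater", "employer": "Nazi Party"},
    {"field_of_study": "jew hater", "employer": "self-employed"},
    {"field_of_study": "physics", "employer": "Nazi Party"},
    # ...millions more...
]

def build_targeting_categories(profiles, min_audience=2):
    """Aggregate raw self-reported strings into sellable ad categories.

    Note what is absent: no blocklist, no human review. Any string
    typed by at least `min_audience` users becomes a category.
    """
    counts = Counter()
    for profile in profiles:
        for field in ("field_of_study", "employer"):
            value = profile.get(field, "").strip().lower()
            if value:
                counts[value] += 1
    return {category: n for category, n in counts.items() if n >= min_audience}

print(build_targeting_categories(profiles))
# {'jew hater': 2, 'nazi party': 2}
```

With a real user base the threshold would be far higher, but the failure mode is the same: the pipeline faithfully packages whatever its users feed it.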
Such reliance on automation has to do with Facebook’s scale. The company employed 17,048 people at the end of last year; with a user base of 1.32 billion, that works out to one staffer for every 77,000 users and change. A ratio that lopsided is unmanageable in human terms, forcing the company to run itself in part through a kind of ad hoc artificial intelligence: a collection of automated user and customer interfaces that shift and blend to meet Facebooker preference and advertiser demand.
That business, as it is run today, is gobsmackingly profitable: 45 percent of Facebook’s $26 billion annual revenue is pure gravy. But without major changes to its daily operations, Facebook runs the risk of eroding the public trust that, as MacKinnon observed, is fundamental to its success.
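The arithmetic behind those figures is easy to verify; a quick sketch using only the numbers reported above (the rounding is mine):

```python
users = 1_320_000_000      # daily active users, as reported above
employees = 17_048         # headcount at the end of last year
revenue = 26e9             # annual revenue, in dollars
margin = 0.45              # share of revenue that is profit

print(f"{users / employees:,.0f} users per staffer")  # 77,429 users per staffer
print(f"${revenue * margin / 1e9:.1f}B in profit")    # $11.7B in profit
```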
CEO Mark Zuckerberg has experimented with building more intentional AIs that moderate Facebook content for benign purposes like preventing suicides, presumably with an eye toward expanding that software into the profitable parts of Facebook’s business. One such experiment ended when the AIs began talking to each other in an improvised language that the coders worried they would soon be unable to translate. This was misreported, with several publications claiming the AIs had rapidly become too smart, but the truth is a little more worrying: AIs can learn to make incomprehensible decisions even when they’re not particularly smart.
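That dynamic is a known hazard of multi-agent learning, and it doesn’t require smart agents. Here is a toy sketch in Python, entirely hypothetical and not any real Facebook system: if two agents are rewarded only for completing a task, a private shorthand that closes the deal scores exactly as well as English, so nothing anchors their messages to language humans can read. Researchers typically counter the drift by grounding the agents in human language, for instance by rewarding legibility.

```python
# Toy reward functions for a two-agent negotiation game. Entirely
# hypothetical code: it illustrates the incentive, not any real system.

ENGLISH_WORDS = {"i", "want", "the", "two", "books", "deal", "no", "you", "take"}

def english_likelihood(message: str) -> float:
    """Crude stand-in for a language-model score of how legible a message is."""
    words = message.lower().split()
    return sum(w in ENGLISH_WORDS for w in words) / max(len(words), 1)

def reward_unanchored(task_success: bool, message: str) -> float:
    # Nothing here cares what the message looks like, so gibberish that
    # closes the deal pays exactly as well as plain English.
    return 1.0 if task_success else 0.0

def reward_anchored(task_success: bool, message: str, lam: float = 0.5) -> float:
    # A grounding term keeps messages legible, at some cost in raw task reward.
    base = 1.0 if task_success else 0.0
    return base + lam * english_likelihood(message)

print(reward_unanchored(True, "bkz bkz ii ii"))       # 1.0: shorthand pays in full
print(reward_anchored(True, "bkz bkz ii ii"))         # 1.0: no legibility bonus
print(reward_anchored(True, "i want the two books"))  # 1.5: English is strictly better
```

Under the first reward, drifting into shorthand costs the agents nothing; under the second, it does.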
And if Facebook remains unable to control its leviathan, no one wins.
“In dealing with these problems in a way that serves the public interest from anybody’s perspective—is that just beyond human capability?” MacKinnon asked. “Do you need artificial intelligence? And then do you lose control of the artificial intelligence and stop understanding how that’s working? And then do you have to build another AI to fight it?”
The problem isn’t that Facebook’s management isn’t behaving responsibly, MacKinnon said: It’s that Facebook may be unmanageable.
“The people running it don’t have a full grasp of how information is manipulated across it,” she said. “It’s like a living organism that’s evolving every day.”