In recent weeks, Facebook has received the lion’s share of attention when it comes to the social media component of Russia’s interference in the U.S. election. But the service the President so frequently and famously uses hasn’t received quite the same level of scrutiny yet—perhaps because it’s much harder to nail down exactly what happened on Twitter during the 2016 campaign.
Much of the activity on Twitter is a morass of bot traffic, spam accounts mobbing hashtags and plain old harassment, so teasing out the Twitter component of a coordinated influence campaign that spanned multiple platforms is a seriously tall order. Sens. Mark Warner (D-VA) and Amy Klobuchar (D-MN) have proposed some of the first regulations that would specifically affect Twitter and Facebook; a Twitter spokesman told TPM that, regarding regulation, “we are open to discussing this with the FEC and Congress.”
There are a few facts about Russian-linked activity on Twitter during the 2016 campaign we already know thanks to published reports, but there’s much more that remains unclear. Answers to some of those unanswered questions could emerge from Twitter’s closed-door meeting with the Senate Intelligence Committee on Wednesday.
The primary arms of the Russian disinformation campaign operated on Twitter—in fact, you can still visit the Twitter pages for DCLeaks and Guccifer 2.0, two of the outlets for emails stolen from Democratic organizations and operatives.
Twitter has a laissez-faire attitude toward who can and can’t use its network; short of distributing something illegal or advocating violence—and sometimes even then—users can do pretty much whatever they want with impunity. In this case, it appears to have given useful platforms to what the U.S. intelligence community says were fronts for a Russian intelligence service.
The Guccifer 2.0 and DCLeaks accounts haven’t tweeted since January 2017 and December 2016, respectively.
Russian intelligence also used networks of automated accounts, or social botnets, on Twitter, although it’s hard to tell which were actually harnessed by the GRU and which were simply a function of Russia’s burgeoning cybercrime industry. Much of the work that has been done tracking bot accounts is inductive, which has made the task of labeling bot accounts a perilous one. Plenty of amateur Trump-Russia sleuths have managed to look foolish for accusing run-of-the-mill conservative Twitter users of being Russian bots.
But some of the reasoning is convincing and comes from reliable sources. Cybersecurity researcher Brian Krebs, formerly a reporter for the Washington Post, noted that whenever he criticized Putin on Twitter, his posts mysteriously drew defensive tweets about Trump. He also observed that the service’s like and retweet buttons were being used as part of a strategic offense.
Russian social botnets appear to have been used to promote a lot of far-right news hashtags, according to Hamilton 68, a program that tracks probable bots of Russian origin. This is in itself not especially unusual. Twitter charges to promote tweets and tags on its service, so an underhanded advertiser looking to avoid those fees may instead promote its work through a network of linked accounts that will generate the requisite number of likes and retweets.
But a January report from the Office of the Director of National Intelligence (DNI) noted that Russian state-affiliated bloggers had prepared such a campaign in anticipation of a Clinton victory. “Before the election, Russian diplomats had publicly denounced the US electoral process and were prepared to publicly call into question the validity of the results,” the report’s authors wrote. “Pro-Kremlin bloggers had prepared a Twitter campaign, #DemocracyRIP, on election night in anticipation of Secretary Clinton’s victory, judging from their social media activity.”
At other moments, Russian Twitter users glommed onto the far-right news of the day, including the conspiracy theory that murdered Democratic National Committee staffer Seth Rich had something to do with the stolen emails.
One such Russian-linked account, @tpartynews, had some 22,000 followers and regularly insulted Black Lives Matter activists. The account was followed by former Trump advisor Sebastian Gorka, who himself has been linked to far-right racist and anti-Semitic groups in Hungary.
Bot traffic on Twitter is vast. While it accounted for 33 percent of pro-Trump tweets during the run-up to the 2016 election, it also accounted for 22 percent of pro-Clinton tweets. It’s very difficult to tell which tweets are of Russian origin and which Russian tweets are part of a Kremlin influence campaign. Much of this simply speaks to a vulnerability on the platform that activists have been complaining about for years: Twitter’s sign-up process is very simple and open to abuse by anyone who, for whatever reason, wants to promote a malicious agenda or harass other users.
We now know Russian operators used Facebook to run ad campaigns around divisive social issues. They made use of the company’s microtargeting capabilities, which are especially effective at locating people who may be sympathetic to the deluge of anti-Clinton, pro-Trump news that the GRU had already seeded through WikiLeaks, Guccifer 2.0 and DCLeaks. Twitter hasn’t yet answered the question of whether Russian intelligence was able to operate to its satisfaction merely using botnets and sock-puppet accounts like @tpartynews, or whether it needed to buy promoted tweets or hashtags; so far there’s no evidence that it did.
One group tracking Russian bots notes that many of them haven’t stopped tweeting. In fact, they tweeted in support of alt-right groups in the aftermath of the slaying of Heather Heyer at a white supremacist rally in Charlottesville, Virginia. Again, some of this is inductive reasoning: ProPublica identified one account as a bot by noting it used a stolen photo, sent five tweets in a single minute that all used a URL shortener, and that the account’s tweets “were reported to use similar language from Russian government–backed outlets Sputnik and RT.” Of course, all this could be true of a human account, too.
Twitter is due on Capitol Hill Wednesday and Thursday. The company has thus far been tight-lipped about its strategy for dealing with malicious foreign governments trying to tamper in each other’s elections—similar influence campaigns in France and Germany have taken place since the American election. The company may come up with some kind of internal proposal for enhancing its ability to detect and root out activity like the GRU influence campaign, in much the same way it, along with Facebook, has agreed to help the U.S. deal with social media accounts run by the Islamic State.