An edited version of this article first appeared in Village magazine, November 2017 edition
One in four Twitter followers of Philip Boucher-Hayes are fake accounts, the RTÉ broadcaster announced on his Twitter feed recently. Around the end of August, Boucher-Hayes had noticed an uptick in new followers on Twitter, and had been monitoring the trend since.
“Previously 100/150 people would follow me every week,” Boucher-Hayes posted on Twitter. “Suddenly it became 800/1500 a week. Most had Irish sounding names. None had tweeted. They were all following the same high profile Irish accounts.”
Boucher-Hayes noted that many of the accounts had usernames consisting of a name followed by a series of random digits, such as @John87654321 or @Mary12345678. This pattern, suggestive of names being mass-generated automatically, had also been seen earlier in the year among many “Brexit-bots” in the UK.
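As a rough illustration of why such handles stand out, the pattern can be captured in a few lines of code. This is a hypothetical heuristic for the sake of example, not a method used by Boucher-Hayes or by Twitter, and legitimate users can match it too:

```python
import re

# Hypothetical heuristic: a plain name followed by a long run of digits,
# e.g. @John87654321 or @Mary12345678. A weak signal, not proof of automation.
BOT_NAME_PATTERN = re.compile(r"^[A-Za-z]+\d{6,}$")

for handle in ["John87654321", "Mary12345678", "philipboucherh", "dev_ops_dan"]:
    flagged = bool(BOT_NAME_PATTERN.match(handle))
    print(f"@{handle}: {'suspect' if flagged else 'ok'}")
```

A single match proves nothing; researchers treat it as one signal among many, weighed alongside account age and activity.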
Although Boucher-Hayes reported the increase in fake followers to Twitter, it continued unchecked.
“Either most of the high profile Irish accounts have grossly inflated numbers of followers (which is admittedly a bit of a “so what?”) or someone is amassing a very large Twitter mob for some as yet unidentified purpose,” Boucher-Hayes posted.
“Either way it further erodes confidence in an increasingly compromised platform. Twitter doesn’t seem worried, maybe its users will be.”
The same phenomenon may also account for the large numbers of fake followers identified for the @rte2fm radio account by the anonymous “secret rte producer” account (@rtesecretpro), and would certainly make more sense than the national broadcaster spending licence fee money to boost a social media headcount.
In an SEC filing, Twitter estimated that bots accounted for 8.5% (or one in twelve) of its users. Bots in turn can be divided into subgroupings. Spambots post URLs, hoping to encourage users to click on them, either to sell a product, or to lead users to a malicious website, which can infect their browsers and take over their laptops or phones. By contrast, influence bots seek to influence public opinion, whether by spamming hashtags, promoting artificial trends, pushing smear campaigns and death threats, or boosting political propaganda.
“Artificial trends can bury real trends, keeping them off the public and media’s radar. Smear campaigns and death threats can both intimidate vocal opponents and dissuade would-be speakers. The line between propaganda and legitimate political speech is a fine one, of course, and in some cases is entirely in the eye of the beholder. Nevertheless, bots can be used to amplify the propagandist’s desired message,” noted Nathalie Marechal, a researcher with the University of Southern California, writing in the International Journal of Communication in 2016.
A 2016 study found that Twitter’s algorithms would eliminate a bot which tweeted spam links, but would not delete the accounts which retweeted the original post. This meant a bot network could retweet a message hundreds of times, at the loss of only a handful of originating accounts each time.
Analysts at the University of Washington in Seattle studied a network which they named the Syrian Social Botnet, which worked not only by posting pro-Assad news and astroturfing, but by flooding timelines with irrelevant material. A hashtag about the Syrian civil war would be swamped with unrelated reports, from Hurricane Sandy coverage for example, drowning the conversation in noise and making the hashtag useless for search purposes, a practice known as smokescreening.
Another network, the Star Wars botnet, discovered by researchers at University College London, numbered over 300,000 accounts. It was so called because each account posted random snippets of text from Star Wars novels in the minutes after it was set up. A large number of the bots followed a handful of real users, and the network seems to have been built for this purpose, sold to users who wanted to inflate their follower counts and exaggerate their popularity.
Bots can also be used to create page impressions, as Twitter and Facebook accounts are often used as logins by readers of news sites. This could exaggerate page views and ad impressions on websites seeking to defraud advertisers.
A second botnet uncovered by the same London-based researchers numbered over 500,000 accounts, and was behind a large-scale spamming attack on Twitter in 2012.
Gavin Sheridan, who worked as innovation director with Storyful, the News Corp-owned online news verification company started by Mark Little in 2010, says it is not possible to determine who might be behind this nascent bot army until it is activated. (Indeed, now that it has been noticed, its usefulness may have diminished to such an extent that it is never used.)
“I’ve read a lot of research, and I’ve seen the bot armies myself,” says Sheridan. “There were bot armies for California leaving the union, for Texas leaving the union, there are pro-Erdogan ones in Turkey, one for Catalonia, one for Scotland leaving the UK, all bot armies in some shape or form.”
“I started looking at [the Irish botnet] about two weeks ago, I wasn’t being followed by them but I noticed them following other people, a couple of people contacted me and said that they seemed to be followed by strange accounts.
“There’s a couple of interesting things about these bots. One thing is the rapidity with which they are following certain users, the second thing is that they appear to have Irish-sounding names, not all of them but a certain number, so if I look at, say, a prominent member of the Repeal the Eighth movement, I’ll see that of the last 50 followers, about half are newly set up, recent accounts in the last few weeks who have never tweeted, have no other activity.
“Some follow 50, some follow 80 accounts, that include people prominent in the Repeal the Eighth campaign. I’d have to analyse every single account to see if they follow people on the other side of the debate, but so far they’re also following sports people, they’re following the Late Late Show, they’re following media personalities, they’re following journalists.
“Until they start doing something, all we can say for certain is lots of accounts have been created, lots of them have not bothered to set up an avatar or a background image, lots of them have never tweeted, lots of them have never done anything except follow a default list of accounts that include media, sports and famous individuals.”
“They could become spam accounts, they could become sock puppet accounts, but until they start tweeting it’s hard to determine what their objective is.”
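The traits Sheridan describes, a newly created account, no tweets, no avatar, a short default list of followed accounts, can be combined into a simple tally. The sketch below is purely illustrative; the field names and thresholds are assumptions, not Sheridan's method or Twitter's actual data model:

```python
# Illustrative sketch only: field names and thresholds are assumptions,
# not Sheridan's actual method or Twitter's real API schema.
def suspicion_score(account: dict) -> int:
    """Count how many bot-like traits an account exhibits."""
    signals = [
        account["age_days"] < 30,          # newly created
        account["tweet_count"] == 0,       # has never tweeted
        not account["has_avatar"],         # default profile image
        account["following_count"] <= 80,  # follows only a short default list
    ]
    return sum(signals)

sample = {"age_days": 12, "tweet_count": 0, "has_avatar": False, "following_count": 54}
print(suspicion_score(sample))  # all four signals present
```

Any one trait is innocuous on its own; it is the combination, repeated across hundreds of accounts following the same handful of prominent users, that suggests coordination.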
A Twitter spokesman contacted about this story said the company could not comment on individual accounts for security and privacy reasons.
Mike Hind, a British investigative journalist who investigated bot networks during the UK Brexit referendum campaign, noted in a series of tweets in October that the purpose of bot armies is not to persuade, but to “astroturf”, creating the false impression of a large grassroots movement.
Coming at the same time as news that Facebook and Twitter accepted money for Russian political advertising during the US presidential election (paid for in roubles, an astounded Senator Al Franken noted during congressional hearings), the latest bot network does raise some interesting questions.
The most notorious manufacturer of bot armies to date is a Russian group known as the Internet Research Agency, or less formally, the Troll Factory, which at one point operated out of a nondescript office building at 55 Savushkina Street, St Petersburg.
But why would Putin’s propaganda departments suddenly take an interest in Ireland? One answer, of course, is that maybe they didn’t. If the intended target is the planned referendum on the repeal of the Eighth Amendment, then a well-funded activist on either side of the debate could equally have decided to mimic the (Russian) IRA’s tactics. A snap general election is always a possibility, and the botnet could be long-term planning for that eventuality, but the only other major electoral event currently scheduled for next year is the presidential election. While it seems unlikely that this would be the target of a campaign, there would be a certain irony if an army of Twitter accounts mobilised to shift public opinion there: by one version of events, the outcome of the last presidential election, in 2011, was decided by a single tweet from a fake account.
Similarly, Breitbart, the right-wing extremist website run by former Trump special advisor Stephen Bannon, previously sought to influence elections in Germany and France, and there is no reason why he might not also want to shape the politics of an English-speaking EU member state which is home to many American businesses.
Hind noted that the bots “respond en masse to journalists so journalists will believe their perspective is newsworthy as reflecting a view among the people”. In the same way, bots also follow and target politicians and policymakers.
A report in The Times in August highlighted the case of “David Jones”, @DavidJo52951945, an account with over 100,000 followers, which according to the UK Independent spent four years “tweeting prolifically in support of Ukip, Brexit, Donald Trump, Bashar al-Assad and – tellingly – Russia.”
The old internet adage, “Do Not Feed The Trolls”, applies to these bot armies. Their objective is to manipulate others – particularly high-profile accounts – into amplifying their message. As counter-intuitive as it seems, often the only winning move is not to play.