The Swedish election and bots on Twitter
A new FOI study reports on the presence on Twitter of automated accounts, so-called “bots,” in connection with the 2018 Swedish election.
“The number of political bots that discuss Swedish politics and the Swedish election on Twitter has increased noticeably during recent weeks,” states Johan Fernquist, an FOI researcher in data science and project leader for the study. “The number of accounts in the material we studied has almost doubled from July to August.”
Bots and automated accounts can be used to spread disinformation or to influence public opinion. When content is widely distributed via bots, users of social media can be led to believe that the material is more shared, more widely accepted, or more mainstream than it actually is.
The bots investigated in the study are of three types: accounts controlled automatically by software; accounts managed by someone employed to disseminate propaganda; and accounts run by private individuals who copy or retweet content in copious quantities. The effect of the automated behaviour is nevertheless the same, regardless of whether a person or software is behind the account.
This study analyses how the bots differ from genuine accounts, what the bots link to and the kinds of messages they spread, as well as how widespread the political bots are. Its main findings:
- The number of political bots that discuss Swedish politics and the Swedish election on Twitter has grown significantly in recent weeks
- The percentage of tweets linking to alternative or partisan websites (Samhällsnytt and Fria Tider) is higher among bots than among genuine accounts
- Among accounts expressing support for a political party, the Sweden Democrats received the most support, and that support was more pronounced among bots: 47 per cent of the bots expressed support for the Sweden Democrats, compared with 28 per cent of the genuine accounts
- Expressions of traditionalist, authoritarian or nationalist views were more common among suspended or deleted accounts.
Previous research shows that influence attempts are less effective if an individual is aware of the attempt. “Hopefully, this study contributes to greater awareness about the potential effects of bots, so that more citizens can make their decisions without being influenced by them,” concludes Johan Fernquist.
The method applied to discover automated accounts is based on machine learning (a form of artificial intelligence), where a model is trained by using data from already-known automated accounts. A total of 140 different characteristics of the accounts were used to distinguish the automated accounts from genuine ones. Data was collected between 5 March and 20 August 2018; in total, nearly 600,000 tweets from 45,000 accounts were examined. The research that the study is based on has been approved according to ethical standards.
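The study's actual model and its 140 account features are not public, but the general approach it describes, training a classifier on data from already-known automated accounts, can be sketched as follows. This is an illustrative toy example only: the features (tweets per day, retweet ratio, account age) and all numbers are invented for the sketch, and a random forest is just one plausible classifier choice.

```python
# Toy sketch of feature-based bot detection, NOT the FOI study's model.
# All features and distributions below are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n = 1000
# Hypothetical per-account features: [tweets/day, retweet ratio, account age in days].
# "Bots" tweet more, retweet more, and tend to be newer accounts.
bots = rng.normal(loc=[120, 0.9, 60], scale=[30, 0.05, 30], size=(n, 3))
humans = rng.normal(loc=[8, 0.3, 900], scale=[4, 0.1, 400], size=(n, 3))

X = np.vstack([bots, humans])
y = np.array([1] * n + [0] * n)  # 1 = known automated account, 0 = genuine

# Train on labelled accounts, then evaluate on held-out accounts.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

In practice a real system would use many more features (the study used 140), and its accuracy would depend heavily on how representative the labelled training accounts are.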