Political Bots

The Oxford Internet Institute has done extensive research on this subject, referring to these bots as a form of computational propaganda.

Psychological operations

There are multiple accounts in the media about the power of bot farms to sway political opinions and to spread false information, but this is by no means a new technique.

One can easily draw a parallel to psychological warfare operations throughout history. Also called PSYOPs, these were planned operations used by various states to incite behaviour in foreign countries that would be favourable to the initiating government.

Some famous tactics include creating phony organisations (as the British army did during WW2 to deceive the Axis about the strength of the Allied forces), slowing down organisational processes (as encouraged by the Simple Sabotage Field Manual of the OSS, the CIA's predecessor), or leafleting to persuade citizens of certain claims.

One of the most practised tactics was to drop political pamphlets over areas of interest. The habit of dropping massive amounts of propaganda messages and seeing what sticks is not far from posting massive amounts of propaganda messages and seeing what sticks. And while many media outlets spread fear among the general population with sensational headlines about this new form of threat, the more accurate question would be: isn't this form of message propagation simply fulfilling the platform logic? The ways in which the bots operate follow the design.

Democratisation of propaganda

However, one distinct factor makes bots decidedly more interesting than their psychological warfare predecessors: the process of making propaganda has been democratised. Anyone with a computer, or anyone willing to spend a reasonable amount of money to hire bots to serve their cause, can immediately bolster their voice.

This means that any kind of propaganda, even personal propaganda, can be spread.

Tinder bot

A case in point is the Tinder bot intervention by a group of young activists, who sent out 30,000-40,000 messages targeting 18-25 year olds in the run-up to the UK's 2017 general election. After crowdfunding £500, Yara Rodrigues Fowler and Charlotte Goodman developed a tool that took over the Tinder profiles of volunteers and chatted up possible voters on their behalf. The bot authors say that if “the user was voting for a right-wing party or was unsure, the bot sent a list of Labour policies, or a criticism of Tory policies,” with the aim “of getting voters to help oust the Conservative government.” Although this is an interesting case of self-organised civic political interference, it is rather problematic on the ethical front: the users involved were not aware they were talking to an automated agent, or that their romantic interest was being exploited to sway their vote.
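How such a tool might work can be sketched in a few lines. The sketch below is purely illustrative: the client object, the keyword heuristics and the canned messages are all my own assumptions, since the activists' actual code was never published.

```python
# A purely illustrative sketch of the chat logic described above.
# The client, the keyword heuristics and the canned messages are all
# assumptions: the activists' actual tool was never published.

LABOUR_POLICIES = "Here are a few Labour policies you might not have seen: ..."
TORY_CRITICISM = "A few criticisms of recent Conservative policies: ..."

RIGHT_LEANING_KEYWORDS = {"conservative", "tory", "tories"}
UNSURE_KEYWORDS = {"not sure", "don't know", "undecided"}


def choose_reply(match_message: str) -> str | None:
    """Decide what to send based on a match's stated voting intention."""
    text = match_message.lower()
    if any(k in text for k in RIGHT_LEANING_KEYWORDS):
        return TORY_CRITICISM   # right-leaning: criticise Tory policies
    if any(k in text for k in UNSURE_KEYWORDS):
        return LABOUR_POLICIES  # undecided: send a list of Labour policies
    return None                 # otherwise: stay silent


def chat_on_behalf_of(volunteer_session, client):
    """Chat with matches through a volunteer's borrowed profile."""
    for match in client.list_matches(volunteer_session):
        client.send(match, "Who are you thinking of voting for in June?")
        reply = client.wait_for_reply(match)
        response = choose_reply(reply)
        if response:
            client.send(match, response)


print(choose_reply("probably Tory, not sure though"))  # -> the Tory criticism
```

Even this toy version makes the ethical problem concrete: the match has no way of knowing that a keyword heuristic, not a person, chose what they would read.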

Media literacy, critical thinking

Very often the solution to the poisoned well of digital infrastructures is said to be media literacy: if we can get more people to think critically and practise good digital hygiene, surely that will solve all our problems.

“If we’re not careful, ‘media literacy’ and ‘critical thinking’ will simply be deployed as an assertion of authority over epistemology.” danah boyd

The problem with this is that critical thinking does not necessarily equate to informed thinking. In fact, many conspiracist views of the world start by urging their listeners to do exactly that: “think for themselves”.

While the question of what infrastructural relief would look like remains very much open, it can be argued that some of the interventions we are seeing from troll farms also respond to a perceived sense of urgency. And the methods they have developed are highly sophisticated.

Ethics of bots: propaganda and persuasion

How can activists apply similar methods and use bots as a tool for propagation of their messages without falling into the same patterns? And is propaganda different from persuasion?

Here opinions are divided: most propaganda studies would claim there is a difference between propaganda and rhetoric. In our current situation there is a lack of digital spaces for public rhetoric. Instead, what is enhanced is the commercialisation of our private lives through social media.

However, there are also those who argue that language is by design propagandistic. Among these voices is Lucy Lippard, who puts forward a feminist theory that the intrinsic objective of language is to convince. She says feminists “…have to keep in the back of our minds that we wouldn’t have to use the denigrated word ‘propaganda’ for what is, in fact, education, if it weren’t consistently used against us.”

"The master's tools will never dismantle the master's house." Audrey Lorde

Many reputable media sources push the tactic of fighting propaganda with propaganda as a fix, when ultimately it is the digital platforms on which these information wars are fought that have the most to gain. The more users platforms like Twitter, Facebook or YouTube can report, the easier it is for them to justify to marketers that they should invest in advertising there.

Twitter has been in the eye of the hurricane when it comes to discussions around bot propaganda. After a long silence in which it made no statement about the supposed influence of these new political actors, in February 2018 the company purged thousands of fake accounts, causing the hashtag #TwitterLockOut to trend among conspiracist conservative circles. Twitter gave no indication of how it selected which bots to purge, arguing that disclosing the criteria would undo its efforts, since bot programmers would simply adjust to them. Considering the economy and ecology of bots on Twitter, the response leaves many questions open. It is interesting to contrast this approach with that of Wikipedia, which lacks the economic incentive to obscure whether its users are human. On Wikipedia, among the many policies bots are required to follow, they must be identified as such in their usernames, describe their purpose, and identify the human user responsible for them.
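Wikipedia's requirements are concrete enough to express as a simple check. Below is a minimal sketch, assuming a hypothetical account record with username, task_description and operator fields; the real policy is enforced by community review and the Bot Approvals Group, not by code like this.

```python
# A minimal sketch of Wikipedia's bot-identification rules as a validation
# function. The account fields below are my own assumption; the real policy
# is enforced by community review, not by code.

def bot_policy_problems(account: dict) -> list[str]:
    """Return the policy requirements this account fails; empty means compliant."""
    problems = []
    if "bot" not in account.get("username", "").lower():
        problems.append("username does not identify the account as a bot")
    if not account.get("task_description"):
        problems.append("the bot's purpose is not described")
    if not account.get("operator"):
        problems.append("no responsible human operator is identified")
    return problems


# A compliant account, in the spirit of the policy:
print(bot_policy_problems({
    "username": "ExampleCleanupBot",
    "task_description": "Fixes broken section links in the article namespace.",
    "operator": "User:ExampleHuman",
}))  # -> []
```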

Agenda of Evil

Another example I would like to mention is one of Erin Gallagher's excellent analyses of the propagation of hate messages, which I really encourage you to go online and read. She gives the example of Agenda Of Evil, a network spreading anti-Islam messages over multiple social media channels: Twitter, Facebook, YouTube, Gab, Reddit, Minds and Google+. The Agenda Of Evil website is a content aggregator and a distribution hub: by logging in, a user gives a bot access to post hate speech on their behalf.

The network requires collaboration between humans and automated actors: human accounts with a high number of followers first post a link to a URL they want amplified. Ben Nimmo refers to these actors as shepherds, sheepdogs, and electric sheep, where the shepherds and sheepdogs are the human accounts which initiate, propagate and defend the message, and the electric sheep are, of course, the bots, which blindly retweet.
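Nimmo's taxonomy amounts to a simple amplification protocol, which can be illustrated with a toy simulation. All account names and follower counts below are invented; only the division of labour between human initiators and blindly retweeting bots follows the pattern described above.

```python
# A toy simulation of the shepherd / electric-sheep amplification pattern.
# All accounts and follower counts are invented; only the division of labour
# between human initiators and blindly retweeting bots follows the text above.

from dataclasses import dataclass, field


@dataclass
class Account:
    name: str
    followers: int
    is_bot: bool
    timeline: list = field(default_factory=list)


def amplify(shepherds: list, electric_sheep: list, url: str) -> int:
    """Shepherds post the target URL; the electric sheep blindly retweet it."""
    nominal_reach = 0
    for shepherd in shepherds:           # high-follower humans initiate
        shepherd.timeline.append(f"POST {url}")
        nominal_reach += shepherd.followers
    for bot in electric_sheep:           # bots amplify without judgement
        bot.timeline.append(f"RT {url}")
        nominal_reach += bot.followers
    return nominal_reach


shepherds = [Account("loud_human", followers=80_000, is_bot=False)]
sheep = [Account(f"sheep_{i}", followers=50, is_bot=True) for i in range(500)]
print(amplify(shepherds, sheep, "https://example.com/target-article"))
# one human with 80,000 followers plus 500 small bots -> 105000
```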

What is noticeable when we look at networks of propaganda is that they are highly organised, complex systems that follow a cyborg logic. Their success is due in part to the effective collaboration between humans and bots.