
Twitter Bots and Trust

By Kenneth Boyd
26 Feb 2020
Image: “Symbol for a chat bot …” by patrick.daxenbichler (via depositphotos)

Twitter has once again been in the news lately, which you know can’t be a good thing. The platform recently made two sets of headlines: in the first, news broke that a number of Twitter accounts were making identical tweets in support of Mike Bloomberg and his presidential campaign, and in the second, reports came out of a significant number of bots making tweets denying the reality of human-made climate change.

While these incidents differ in a number of ways, they both illustrate one of the biggest problems with Twitter: given that we might not know anything about who is actually behind a tweet – whether it is a real person, a paid shill, or a bot – it is difficult to know who or what to trust. This is especially problematic when it comes to the kind of disinformation tweeted out by bots about issues like climate change, where it can be difficult to tell not only whether a tweet comes from a trustworthy source, but also whether its content makes any sense.

Here’s the worry: let’s say that I see a tweet declaring that “anthropogenic climate change will result in sea levels rising 26-55 cm in the 21st century with a 67% confidence interval.” Not being a scientist myself, I don’t have a good sense of whether or not this is true. Furthermore, if I were to look into the matter, there’s a good chance that I wouldn’t be able to determine whether the relevant studies were well conducted, whether the prediction models were accurate, etc. In other words, I don’t have much to go on when determining whether I should accept what is tweeted out at me.

This problem is an example of what epistemologists have referred to as the problem of expert testimony: if someone tells me something that I don’t know anything about, then it’s difficult for me, as a layperson, to be critical of what they’re telling me. After all, I’m not an expert, and I probably don’t have the time to go and do the research myself. Instead, I have to accept or reject the information on the basis of whether I think the person providing me with information is someone I should listen to. One of the problems with receiving such information over Twitter, then, is that it’s very easy to prey on that trust.

Consider, for example, a tweet from a climate-change denier bot that stated “Get real, CNN: ‘Climate Change’ dogma is religion, not science.” While this tweet does not provide any particular reason to think that climate science is “dogma” or “religion,” it can sow doubt about information from trustworthy sources. One of the co-authors of the bot study worries that these kinds of messages can also create an illusion of “a diversity of opinion,” with the result that people “will weaken their support for climate science.”

The problem with the pro-Bloomberg tweets is similar: without a way of determining whether a tweet is actually coming from a real person as opposed to a bot or a paid shill, messages defending Bloomberg may be intended to cast doubt on tweets that are critical of him. Of course, in Bloomberg’s case it was a relatively simple matter to determine that the messages were not, in fact, genuine expressions of support for the former mayor, since dozens of tweets were identical in content. But a competently run network of bots could potentially have a much greater impact.

What should one do in this situation? As has been written about before here, it is always a good idea to be extra vigilant when it comes to getting one’s information from Twitter. But our epistemologist friends might be able to help us out with some more specific advice. When dealing with information that we can’t evaluate on the basis of content alone – say, because it’s about something that I don’t really know much about – we can look to some other evidence about the providers of that information in order to determine whether we should accept it.

For instance, philosopher Elizabeth Anderson has argued that there are generally three categories of evidence that we can appeal to when trying to decide whether we should accept some information: someone’s expertise (with factors including the testifier’s credentials, and whether they have published and are recognized in their field), their honesty (including evidence of conflicts of interest, dishonesty, academic fraud, and misleading statements), and the extent to which they display epistemic responsibility (including evidence about the ways in which they have engaged with the scientific community in general and with their peers specifically). This kind of evidence isn’t a perfect indication of whether someone is trustworthy, and it might not be the easiest to find. When one is trying to get good information from an environment that is potentially infested with bots and other sources of misleading information, though, gathering as much evidence as one can about one’s source may be the most prudent thing to do.

Ken Boyd holds a PhD in philosophy from the University of Toronto. His philosophical work concerns the ways that we can best make sure that we learn from one another, and what goes wrong when we don’t. You can read more about his work at kennethboyd.wordpress.com.