If you’re on social media, chances are that you’ve encountered bots: social media accounts set up to run autonomously. Some bots are just for fun. Twitter bot Serval Every Hour, for example, simply posts a photo of a serval (a kind of wild cat) every hour. But recent months have brought a proliferation of a distinctive kind of automated account, colloquially referred to as porn bots.
Most of these bots don’t post any content at all. They simply like other users’ posts in the hopes (if bots had hopes) of bringing traffic to their profile. The bio usually consists of some inviting phrase such as “looking for my adventure partner” and a link to a webpage. Together with a feminine profile pic and display name, these features are meant to entice viewers to click on the link. I’m neither naïve nor journalist enough to have clicked the links in any bots’ profiles, but the standard candidates for suspicious links are familiar: perhaps they lead to malware, a phishing scheme, or an ad-supported website where views generate revenue. The link’s destination isn’t particularly important for what I want to discuss here. What I want to discuss is the effect these bots have on Twitter users, stemming largely from the fact that these are apparent accounts of women that, though they exist because of someone’s actions, are not properly the accounts of people at all.
Two features of the bots give them a distinctive influence: their appearance as women and their unavoidability. While the bots vary somewhat, the typical bot has a profile picture of a woman (likely stolen) ranging from a typical selfie or vacation photo to a more provocative pose. The bots often have only a feminine proper name as their display name. In short, they seem to be accounts of women. The features listed above (the lack of posts, the link in bio, etc.) can clue you in to their status as bots, but many of them can only be seen by visiting the profile; at first glance, and before you’ve encountered too many of them, these bots seem to be women.
While a porn bot can be recognized from its profile, the bots are difficult to avoid. To weed them out, you need to visit each bot’s profile page and click the icon that brings up the option to block that account. Thus, in order to prevent a bot from intruding on your Twitter notifications, you have to expose yourself to it further by viewing its page, and you have to do so one by one for every bot that shows up. There are more drastic options for avoiding them, but those options also make it harder to connect with human users of the site.
The porn bots raise a number of ethical issues, including the issue of how much agency an adult should have as to when or whether they encounter adult content in a public setting, as well as the issue of the bots’ disruption of genuine interactions in a social media community. As someone who spends a lot of time interacting with friends and treasured mutuals on Twitter, I find the bots distracting. Much of what’s posted on Twitter is deeply unserious, but that doesn’t mean it’s unimportant; and intrusions by bots undermine the sense of community among Twitter users. (The latter issue is satirized succinctly in this image, which was relatable enough to receive 93,000 [mostly human?] likes.)
Beyond their unavoidability and intrusion on community, however, is an issue of injustice: the presence of these bots dehumanizes women by shifting what is reasonable to believe about accounts that appear to belong to women. Encountering these bots shifts one’s perception of whether someone with a feminine profile picture and display name is a human being. Once one realizes that these likes come from bots, this shift in perception is hard to avoid, and it’s not even irrational: the proliferation of porn bots actually reduces the probability that someone you encounter with a feminine profile picture and feminine display name is a human being. It’s like throwing a bunch of Skittles into a bag of M&M’s: at first glance, you have good reason to be suspicious that any one of them really is what the label says. The proliferation of these bots not only creates an environment in which people are more likely to dismiss an account with a woman’s name and woman’s profile picture as a nonperson; it creates an environment in which these doubts are reasonable.
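To make the base-rate point concrete, here is a minimal sketch with purely illustrative numbers (nothing here reports actual counts, and the quantities $N_{\text{women}}$ and $N_{\text{bots}}$ are labels introduced just for this example). The chance that any one feminine-presenting account belongs to a person is simply the proportion of persons among such accounts:

$$
P(\text{person} \mid \text{feminine-presenting account}) = \frac{N_{\text{women}}}{N_{\text{women}} + N_{\text{bots}}}
$$

If bots are rare, say one bot for every ninety-nine real accounts, that probability is 0.99; if bots become as numerous as people, it drops to 0.50. The doubt tracks the base rate, which is why it isn’t irrational.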
This dehumanization takes a form somewhat different from (which is not to say better or worse than) the typical objectification of women and girls through oversexualizing them. The use of photos of women for these accounts is a kind of objectification, and the profiles do sexualize them through the suggestion of something salacious just a click away. But until you click through to the profile, the majority of these bots simply look like women: women as they often present themselves in public. Again, this is not to say the situation would be more just if the bots’ profiles were more uniformly provocative; only that the ethical issues are sensitive to what the bots actually display and, therefore, whom they (mis)represent and how. And they misrepresent a lot of women by using passably typical profile pictures.
The ensuing situation falls under the broad category of epistemic injustice: a situation in which someone is wronged in their ability to know or to be treated as a source of knowledge, often as a result of their social position. The proliferation of bots that pose as ordinary women undermines a viewer’s ability to know, at first glance, that any given such user is an actual person. Thus, women who use Twitter risk being put in the position of needing to distinguish themselves as real persons. (“By the way, I’m someone! I’m one of the real ones!”) Therein lies the distinctive dehumanization. All the unreal copies appearing to be women make it a little bit harder for a woman on Twitter to be recognized as a person.
The environment created by these bots is a small example of the ways in which a culture that sexualizes and objectifies women and girls can fail them as persons. Who we are in our social identities, such as race, class, and gender, depends heavily on who others take us to be. This dependence is why, for example, we wrong someone if we purposely misgender them, act on unfounded assumptions about their ethnicity, or call them by the wrong name. We need others to tell us who we are in order to be who we are. This reciprocity is what philosopher Hilde Lindemann calls holding one another in personhood. The creators of the bots (and those at Twitter who fail to prevent their intrusion) foster an environment in which it’s harder to hold one another in personhood because, for a split second, it can be harder to perceive women on Twitter as people at all.