
Accountability for the Boy Who Cried “Misinformation”

It is no secret that we’re obsessed with information right now, particularly the spread of misinformation. The internet age allowed for the transmission of data on a scale never before seen, including “fake news” or, as the preferred nomenclature would have it, misinformation and disinformation. Now, AI can generate and propagate false information at will.

But our zeal in seeing misinformation stamped out has clouded our recognition of the ways the concept can be abused. Misleading information can be problematic, but our response to it can be equally troubling. We’ve contorted the way we discuss contentious issues and complicated our understanding of accountability.

Many of the most egregious instances of false information are spread specifically for the purpose of misleading others. For example, since the beginning of the Israel-Hamas war there has been an increase in disinformation about what is actually happening and what each side is doing. Keeping the record straight has been difficult, as different groups look to vilify one another. It's easy to confuse the public and sow social instability when people are not able to see what is happening first-hand. This has also been the case with the disinformation being spread regarding the Russia-Ukraine war.

But apart from geopolitical conflicts, we’ve seen false information being spread about climate change in order to protect financial interests. Political disinformation is also undermining democratic dialogue, and combined with cases of misinformation – cases where deception is not intentional – we can see how the spread of rumor and lies helped fuel the protestors who stormed Congress on January 6th. Cases like this remind us that false or misleading information can be spread deliberately by people seeking to accomplish a particular goal or merely by those who are passing on the latest internet debris they take to be valid.

Both disinformation and misinformation can undermine the ability of society to recognize and respond to social problems. During the pandemic, for instance, massive amounts of false information made it difficult to manage public health, leading to needless deaths. As philosopher John Dewey warned, “We have the physical tools of communication as never before. The thoughts and aspirations congruous with them are not communicated, and hence not common. Without such communication the public will remain shadowy and formless.” Ultimately, misinformation and disinformation prevent the formation of the common understanding necessary to ensure that shared problems can be collectively addressed.

There is, however, potential to abuse the concept of “fake news” for one's personal ends. You can always accuse others of intentionally peddling fictions – calling out certain kinds of misinformation while conveniently ignoring others when doing so suits your interests.

Consider this example: A recent study showed that only 3% of Earth's ecosystems are intact. This finding was very different from those of previous studies because the study redefined what “intactness” meant. Without a consistent scientific definition of terms, different studies will produce incommensurate results. This means it would be easy for me to accuse someone of engaging in misinformation if (a) they are using a different study than I am and (b) I don't make discrepancies like this clear.

While the crusade to stamp out misinformation seems honorable, it can quickly lead to chaos. It’s important to recognize that scientific findings will conflict when employing different conceptual frameworks and methodologies, and that scientific studies can often be unreliable. It can be tempting to claim that because at least one expert claims something or because one study reaches a certain conclusion, you have the Truth and to contradict it represents misinformation. It can be more tempting still to simply accuse others of misinformation without explanation and write off entire points of view.

The way we liberally label misinformation makes it easy to engage in censorship. Today, concerns are being expressed about the media's initial coverage of the “lab leak theory,” which may have stifled discussion by immediately branding it as misinformation. This is significant because if there is a widespread public perception that certain ideas are unfairly being dismissed as misinformation, it will undermine public trust. As Dewey also warned, “Whatever obstructs and restricts publicity, limits and distorts public opinion and checks and distorts thinking on social affairs.”

These are dangerous temptations, and they mean we must hold ourselves and others accountable, both for the information we pass on and for the accusations we throw around.

Specious Content and the Need for Experts

photograph of freestanding faucet on lake

A recent tweet shows what looks to be a photo of a woman wearing a kimono. It looks authentic enough, although not knowing much about kimonos myself I couldn’t tell you much about it. After learning that the image is AI-generated, my opinion hasn’t really changed: it looks fine to me, and if I ever needed to use a photo of someone wearing a kimono, I may very well choose something that looked the same.

However, reading further we see that the image is full of flaws. According to the author of the tweet, who identifies themselves as a kimono consultant, the fabric doesn't hang correctly, and there are pieces seemingly jutting out of nowhere. Folds are in the wrong place, the adornments are wrong, and nothing really matches. Perhaps most egregiously, it is styled in a way that is reserved only for the deceased, which would make the wearer either someone committing a serious faux pas, or a zombie.

While mistakes like these would fly under the radar of the vast majority of viewers, they are indicative of the ability of AI-powered generative image and text programs to produce content that appears authoritative but is riddled with errors.

Let’s give this kind of content a name: specious content. It’s the kind of content – be it in the form of text, images, or video – that appears to be plausible or realistic on its surface, but is false or misleading in a way that can only be identified with some effort and relevant knowledge. While there was specious content before AI programs became ubiquitous, the ability of such programs to produce content on a massive scale for free significantly increases the likelihood of misleading users and causing harm.

Given the importance of identifying AI-generated text and images, what should our approach be when dealing with content that we suspect is specious? The most common advice seems to be that we should rely on our own powers of observation. However, this approach may very well do more harm than good.

A quick Googling of how to avoid being fooled by AI-generated images will turn up much of the same advice: look closely and see if anything looks weird. Media outlets have been quick to point out that AI image-generating tools often mess up hands and fingers, that sometimes glasses don’t quite fit right on someone’s face, or that body parts or clothes overlap in places where they shouldn’t. A recent New York Times article goes even further and suggests that people look for mismatched fashion accessories, eyes that are too symmetrically spaced, glasses with mismatching end pieces, indents in ears, weird stuff in someone’s hair, and a blurred background.

The problem with all these suggestions is that they’re either so obvious as to not be worth mentioning, or so subtle that they would escape noticing even under scrutiny.

If an image portrays someone with three arms you are probably confident enough already that the image isn’t real. But people blur their backgrounds on purpose all the time, sometimes they have weird stuff in their hair, and whether a face is “too symmetrical” is a judgment beyond the ability of most people.

A study recently discussed in Scientific American underscores how scrutinizing a picture for signs of imperfections is a strategy that’s doomed to fail. It found that while participants performed no better than chance at identifying AI-generated images without any instruction, their detection rate increased by a mere 10% after reading advice on how to look closely for imperfections. With AI technology getting better every day, it seems likely that even these meager improvements won’t last long.

We’re not only bad at analyzing specious content, but going through checklists of subtle indicators is just going to make things worse. The problem is that it’s easy to interpret the lack of noticeable mistakes as a mark of authenticity: if we are unable to locate any signs that an image is fake, then we may be more likely to think that it’s genuine, even though the problems may be too subtle for us to notice. Or we might simply not be knowledgeable or patient enough to find them. In the case of the kimono picture, for example, what might be glaringly obvious to someone who was familiar with kimonos goes straight over my head.

But these problems also guide us to better ways of dealing with specious content. Instead of relying on our own limited capacity to notice mistakes in AI-generated images, we should outsource these tasks.

One new approach to detecting these images comes from AI itself: as tools to produce images have improved, so have tools that have been designed to detect those images (although it seems as though the former is winning, for now).

The other place to look for help is from experts. Philosophers debate about what, exactly, makes an expert, but in general, they typically possess a lot of knowledge and understanding of a subject, make reliable judgments about matters within their domain of expertise, are often considered authoritative, and can explain concepts to others. While identifying experts is not always straightforward, what will perhaps become a salient marker of expertise in the current age of AI will be one’s ability to distinguish specious content from that which is trustworthy.

While we certainly can’t get expert advice for every piece of AI-generated content we might come across, increasing amounts of authoritative-looking nonsense should cause us to recognize our own limitations and attempt to look to those who possess expertise in a relevant area. While even experts are sometimes prone to being fooled by AI-generated content, the track record of non-experts should lead us to stop looking closely for weird hands and overly-symmetrical features and start looking for outside help.

Can We Declare the Death of “Personal Truth”?

photograph of dictionary entry for "truth"

According to Google Trends, interest in the concepts of “your truth” and “speaking your truth” began to rise noticeably around the mid-2010s, likely as a response to the MeToo movement. At the time, the concept of speaking a personal truth was met with controversy. Just a few months ago actress Cate Blanchett ridiculed the concept, and now, with the discussion of Prince Harry's various “personal truths,” it seems it has made its way back into the news. But surely if the pandemic has taught us anything, it's that facts do matter and misinformation is a growing problem. Can we finally put an end to a concept that might be more harmful than helpful?

Before we consider the problems with the concept of personal truth and the idea of speaking one’s own truth, we should consider the uses and insights such a concept does provide. It isn’t a surprise that the concept of personal truth took on a new prominence in the wake of MeToo. The concept of personal truth emerged in response to a problem where women were not believed or taken seriously in their reports of sexual harassment and sexual assault, prompting a call for the public to “believe women.” It can be powerful to affirm “your” truth in the face of a skeptical world that refuses to take seriously your account as representing “the” truth. As Garance Franke-Ruta explains, “sometimes you know something is real and happened and is wrong, even if the world says it’s just the way things are.”

Oprah helped popularize the concept when she used it during the Golden Globes ceremony, and her example demonstrates another important aspect of the concept. Oprah had a difficult childhood, living in poverty and being abused by her family, and the notion that she was “destined for greatness” was considered to be “her” truth. Many feel a connection to such “personal truths” as they allow people who are rarely heard to tell their story and connect their individual experiences to systemic issues.

In philosophy, standpoint theory holds that an individual’s perspectives are shaped by their social experiences and that marginalized people have a unique perspective in light of their particular experiences of power relations.

Sandra Harding’s concept of “strong objectivity” holds that by focusing on the perspectives of those who are marginalized from knowledge production, we can produce more objective knowledge. Thus, by focusing on what many might very well call “their truths” (in other words what they claim to be true in contrast to those who are not marginalized) we might achieve greater objectivity.

On the other hand, even if we recognize the value of such experiential accounts, and even if we recognize that there is a problem when people who are abused aren't believed, it still doesn't mean that there is any such thing as personal or subjective truth. There seems to be a growing attitude that people are entitled to believe whatever they want individually. But “personal truth” is a contradiction in terms. To understand why, we can look to John Dewey's “The Problem of Truth,” which investigates truth not only as a logical concept but as a social one as well.

Truth is supposed to be authoritative. If I tell you something is my opinion, nothing follows from that. If, on the other hand, I state that my opinion is true, then the claim takes on an authority that forces others to evaluate for themselves whether they believe it is true or false. As Dewey explains, “The opposite of truth is not error, but lying, the willful misleading of others.” To represent things as they are

is to represent them in ways that maintain a common understanding; to misrepresent them is to injure—whether wilfully or no—the conditions of common understanding … understanding is a social necessity because it is a prerequisite of all community of action.

Dewey’s point is that truth developed as a social concept that became necessary for social groups to function. This is important because truth and accountability go hand in hand. When we represent something as the truth, we are making a public statement. To say that something is true means that the claim we are making can be assessed by anyone who might investigate it (with enough training and resources) – it means others can reproduce one’s results and corroborate one’s findings. Something held merely in private, on the other hand, has no truth value. As Dewey explains,

So far as a person's ways of feeling, observing and imagining and stating are not connected with social consequences, so far they have no more to do with truth and falsity than his dreams and reveries. A man's private affairs are his private affairs, and that is all there is to be said of them. Being nobody else's business, it is absurd to regard them as either true or false.

While figuratively it can be beneficial to talk about personal truths, ethically it is far more problematic. While many may (rightfully) criticize cultural relativism, at least with cultural relativism, you still have public accountability because culture is the benchmark for truth. In the end, “truth” requires verification. We do not get to claim that something is true until it has survived empirical testing from an ever-growing community of fellow knowers. To claim that something is “true” prior to this, based on individual experience alone, is to take something that rightly belongs to the community. It negates the possibility of delusion or poor interpretation since no one gets to question it. Thus, asserting something to be true on one’s own account is anti-social.

If truth is meant to be publicly accessible, and if you are expected to be accountable for the things you claim to be true in light of this, then the concept of personal or private truth negates this. If something is true, it is true in light of evidence that extends beyond yourself. Thus, if something is true then there is nothing “personal” about it, and if it is merely personal, it can't be “true.” Figurative language is nice, but people growing up today hearing about “personal” truths in the media risk becoming increasingly confused about the nature of truth, evidence, and reasoning.

As we collectively grapple with growing problems like misinformation, polarization, and conspiracy theories, it is hypocritical to both condemn these things while simultaneously encouraging people to embrace their own personal truths. This notion erases the difference between what is true and what is delusional, and fails to recognize “truth” as a properly social and scientific value. It’s high time we let this concept die.

Calibrating Trust Amidst Information Chaos

photograph of Twitter check mark on iphone with Twitter logo in background

It’s been a tumultuous past few months on Twitter. Ever since Elon Musk’s takeover, there have been almost daily news stories about some change to the company or platform, and while there’s no doubt that Musk has his share of fans, many of the changes he’s made have not been well-received. Many of these criticisms have focused on questionable business decisions and almost unfathomable amounts of lost money, but Musk’s reign has also produced a kind of informational chaos that makes it even more difficult to identify good sources of information on Twitter.

For example, one early change that received a lot of attention was the introduction of the “paid blue check mark,” where one could pay for the privilege of having what was previously a feature reserved for notable figures on Twitter. This infamously led to a slew of impersonators creating fake accounts, the most notable being the phony Eli Lilly account that had real-world consequences. In response, changes were made: the paid check system was modified, then re-modified, then color-coded, then the colors changed, and now it’s not clear how the system will work in the future. Additional changes have been proposed, such as a massive increase in the character limits for tweets, although it’s not clear if they will be implemented.  Others have recently made their debut, such as a “view count” that has been added to each tweet, next to “replies,” “retweets,” and “likes.”

It can be difficult to keep up with all the changes. This is not a mere annoyance: since it’s not clear what will happen next, or what some of the symbols on Tweets really represent anymore – such as those aforementioned check marks – it can be difficult for users to find their bearings in order to identify trustworthy sources.

More than a mere cause of confusion, informational chaos presents a real risk of undermining the stability of online indicators that help people evaluate online information.

When evaluating information on social media, people appeal to a range of factors to determine whether they should accept it, for better or for worse. Some of these factors are visible metrics on posts, such as how many times a post has been approved of – be it in the form of a “like,” a “heart,” an “upvote,” etc. – shared, or interacted with in the form of comments, replies, or other measures. This might seem a blunt and perhaps ineffective way of evaluating information, but it's not just that people tend to believe what's popular: given that on many social media platforms it's easy to misrepresent oneself and generally just make stuff up, users tend to look to aspects of their social media experience that cannot easily be faked. While it's of course not impossible to fabricate numbers of likes, retweets, and comments, it is at least more difficult to do so, and so these kinds of markers often serve as quick heuristics to determine whether some content is worth engaging with.
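
To make that shortcut concrete, here is a minimal, purely illustrative Python sketch of the kind of quick engagement-based filter just described. The field names and thresholds are invented for the example; the point is only that visible metrics function as a rough heuristic, not a genuine credibility test.

    def engagement_heuristic(post):
        """Return a rough, unreliable 'worth engaging with?' label from visible metrics."""
        likes = post.get("likes", 0)
        shares = post.get("shares", 0)
        comments = post.get("comments", 0)

        # Aggregate interaction is harder to fake at scale than a username or a bio,
        # which is why users lean on it as a quick filter.
        interactions = likes + shares + comments

        if interactions < 10:
            return "little signal either way"
        if comments > 20 * max(shares, 1):
            # Many comments relative to shares can indicate disputed content.
            return "possibly contested - read the replies"
        return "popular, but popularity is not accuracy"

    print(engagement_heuristic({"likes": 500, "shares": 40, "comments": 65}))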

There are others. People will use the endorsement of sources they trust when evaluating an unknown source, and the Eli Lilly debacle showed how people used the blue check mark at least as an indicator of authenticity – unsurprisingly, given its original function. Similar markers play the same role on other social media sites – the “verified badge” on Instagram, for example, at least gives users the information that the given account is authentic; although it’s not clear how much “authenticity” translates to “credibility.”

(For something that is so often coveted among the influencers and influencer-wannabes there appears to be surprisingly little research on the actual effects of verification on levels of trust among users: some studies seem to suggest that it makes little to no difference in perceived trustworthiness or engagement, while others suggest the opposite).

In short: the online world is messy, and it can be hard to get one’s bearings when evaluating the information that comes at one constantly on social media.

This is why making sudden changes to even superficial markers of authenticity and credibility can make this problem significantly worse. While people might not be the best at interpreting these markers in the most reliable ways, having them be stable can at the very least allow us to consider how we should respond to them.

It’s not as though this is the first change that’s been made to how people evaluate entries on social media. In late 2021, YouTube removed publicly-visible counts of how many dislikes videos received, a change that arguably made it more difficult to identify spam, off-topic, or otherwise low-quality videos at a glance. While relying on a heuristic like “don’t trust videos with a bunch of dislikes” is not always going to lead you to the best results, having a stable set of indicators can at least help users calibrate their levels of trust.

So, it’s not that users will be unable to adjust to changes to their favorite online platforms. But with numerous changes of uncertain value or longevity comes disorientation. Combined with Musk’s recent unbanning of accounts that were previously deemed problematic, resulting in the overall increase of misinformation being spread around the site, conditions are made even worse for those looking for trustworthy sources of information online.

Ukraine, Digital Sanctions, and Double Effect: A Response

image of Putin profile, origami style

Kenneth Boyd recently wrote a piece on the Prindle Post on whether tech companies, in addition to governments, have an obligation to help Ukraine by way of sanctions. Various tech companies and media platforms, such as TikTok and Facebook, are ready sources of misinformation about the war. This raises the question of whether imposing bans on such platforms would help deter Putin by raising the costs of the invasion of Ukraine and silencing misinformation. It is no surprise, then, that the digital minister of Ukraine, Mykhailo Fedorov, has approached Apple, Google, Meta, Netflix, and YouTube to block Russia from their services in different capacities. These methods would undoubtedly be less effective than financial sanctions, but the question is an important one: Are tech companies permitted or obligated to intervene?

One of the arguments Kenneth entertains against this position is that there could be side effects on the citizens of Russia who do not support the attack on Ukraine. As such, there are bystanders for whom such a move to ban media platforms would cause damage (how will some people reach their loved ones?). While such sanctions are potentially helpful in the larger picture of deterring Putin from continuing acts of aggression, is the potential cost morally acceptable in this scenario? If the answer is no, that is a mark against tech and media companies enacting such sanctions.

I want to make two points. First, this question of permissible costs is equally applicable to any government deciding to put sanctions on Russia. When the EU, Canada, the U.K., and the U.S. sanctioned Russia's central bank and restricted its involvement in SWIFT, for instance, this effectively caused a cash run and is likely the beginning of an inflation problem for Russians. This affects everyone in Russia, spanning from those in the government to the 'mere civilians,' including those protesting. As such, this cost must be addressed in the moral deliberation over whether to execute such an act.

Second, the Doctrine of Double Effect (DDE) helps us see why unintentionally harming bystanders is morally permissible in this scenario (not, mind you, in the case of innocent bystanders in Ukraine). So long as non-governmental institutions are the kind of entities morally permitted or obligated to respond (a question worth discussing, which Kenneth also raises), DDE applies equally to both types of institutions when imposing sanctions with possible side effects.

What does the Doctrine of Double Effect maintain? The bumper sticker version is the following from the BBC: “[I]f doing something morally good has a morally bad side-effect, it’s ethically OK to do it providing the bad side-effect wasn’t intended. This is true even if you foresaw that the bad effect would probably happen.”

The name, one might guess, addresses the two effects one action produces. This bumper sticker version has considerable appeal. For instance, killing in self-defense falls under this. DDE is also applicable to certain cases of administering medicine with harmful side effects and explains the difference between suicide and self-sacrifice.

A good litmus question is whether and when a medical doctor is permitted to administer a lethal dose of medicine. It depends on the intentions, of course, but the bumper sticker version doesn't tell us whether the patient must be mildly or severely ill, whether there are other available options, etc.

The examples and litmus question should prime the intuitions for this doctrine. The full version of DDE (which the criteria below roughly follow) maintains that an agent may intentionally perform an action that will bring about an evil side effect (or effects) so long as the following conditions are all satisfied at once:

  1. The action performed must in itself be morally good or neutral;
  2. The good action and effect(s), and not the evil effect, are intended;
  3. The evil effect cannot be the means to achieve the good effect — the good must be achieved at least as directly as the evil;
  4. There must be proportionality between the good and the evil, in which the evil is lesser than or equal to the good, such that there is a sufficiently good reason for the act in question.

One can easily see how this applies to killing in self-defense. While it is impermissible to kill someone in cold blood, or even to kill someone who is merely plotting your death, it is morally permissible to kill someone in self-defense. This is the case even if one foresees that the act of defense will require lethal force.

As is evident, DDE does not justify the deaths of individuals in Ukraine who are unintentionally killed (say, in a bombing). For the very act of untempered aggression is an immoral act, and so it fails to meet the first condition.

Now, apply this criterion to the question of tech companies who may impose sanctions to achieve a certain good and with it, an evil.

What are the relevant goods and evils? In this case, the good is at least that of deterring Putin from further aggression and stopping misinformation. The evil is the consequences for locals: for instance, for the anti-war protestors in Russia who are communicating their situation, and perhaps for the individuals who use these media outlets to stay in contact with loved ones.

This type of act hits all four marks: the action is neutral, the good effects are the ones intended (presumably this is the case), the evil effects are not the means of achieving this outcome and are no more direct than the good effects, and the good far outweighs the evil caused by this.

That the evil is equal to or less than the good achieved in this scenario might not seem apparent. But consider that civilians have other means of reaching loved ones, and that news outlets (not only TikTok and Facebook) remain prominent channels for communicating information. These are both goods, and thankfully they would not be entirely lost because of such potential sanctions.

As should be clear, the potential bad side effects are not a good reason to refrain from imposing media and tech sanctions on Russia. This is not to say that there is therefore a good reason to impose sanctions. All we have done in this case is see how the respective side effects are not sufficient to rule out sanctions and how the action meets all four criteria. And this shows that it is morally permissible.

Russia, Ukraine, and Digital Sanctions

image of Putin profile, origami style

Russian aggression towards Ukraine has prompted many responses across the world, with a number of countries imposing (or at least considering imposing) sanctions against Russia. In the U.S., Joe Biden recently announced a set of financial sanctions that would cut off Russian transactions with U.S. banks, and restrict Russian access to components used in high tech devices and weapons. In Canada, Justin Trudeau also announced various sanctions against Russia, and many Canadian liquor stores stopped selling Russian vodka. While some of these measures will likely be more effective than others – not having access to U.S. banks probably stings a bit more than losing the business of the Newfoundland and Labrador Liquor Corporation – there is good reason for governments to impose sanctions as a way to attempt to deter further aggression from Russia.

It is debatable whether the imposition of sanctions by governments is enough (for example, providing aid to Ukraine in some form also seems like something governments should do), but imposing sanctions certainly seems like something they should do. If we accept the view that powerful governments have at least some moral obligation to help keep the peace, then sanctioning Russia is something such governments ought to do.

What about corporations? Do they have any such obligations? Companies are certainly within their rights to stop doing business with Russia, or to cut off services they would normally supply, if they see fit. But do the moral obligations that apply to governments apply to private businesses, as well?

Ukraine’s digital minister Mykhailo Fedorov may think that they do. He recently asked Apple CEO Tim Cook to stop supplying Apple products to Russia, and to cut off Russian access to the app store. “We need your support,” wrote Fedorov, “in 2022, modern technology is perhaps the best answer to the tanks, multiple rocket launchers … and missiles.” Fedorov asked Meta, Google, and Netflix to also stop providing services to Russia, and to ask that Google block YouTube channels that promote Russian propaganda.

It is not surprising that Fedorov singled out tech companies. It has been well-documented that Facebook and YouTube have been major sources of misinformation in the past, and the current conflict between Russia and Ukraine is no exception. Much has already been said about how tech companies have obligations to attempt to stem the flow of misinformation on their respective platforms, and in this sense they clearly have obligations towards Ukraine to make sure that their inaction does not contribute to the proliferation of damaging information.

It is a separate question, though, as to whether a company like Apple ought to suspend its service in Russia as a form of sanction. We can consider arguments on either side.

Consider first an argument in favor: like a lot of other places in the world, many people in Russia rely on the services of companies like Apple, Meta, and Google in their daily lives, as do members of Russia’s government and military. Cutting Russia off from these services would then be disruptive in ways that may be comparable to the sanctions imposed by the governments of other countries (and in some cases could very well be more disruptive). If these companies are in a position to help Ukraine by imposing such digital sanctions, then we might think they ought to.

Indeed, this kind of obligation may stem from a more general obligation to help victims of unjust aggression. For instance, I may have some such obligation: given that I am a moderately well-off Westerner with an interest in global justice, we might think that I should (say) avoid buying Russian products and give money to charities that aid the people of Ukraine. If I were in a position to make a more significant difference – say, if I were the CEO of a large company popular in Russia – we might then think that I should do more, in a way that is proportional to the power and influence I have.

However, we could also think of arguments opposed to the idea that tech companies have obligations to impose digital sanctions. For instance, we might think that corporations are not political entities, and thus have no special obligations when it comes to matters of global politics. This is perhaps a simplistic view of the relationship between corporations and governments; regardless, we still might think that corporations simply aren't the kinds of things that stand in the same relationship to foreign states as governments do. These private entities don't (or shouldn't) have similar responsibilities to impose sanctions or otherwise help keep the peace.

One might also worry about the effect digital sanctions might have on Russian civilians. For example, lack of access to tech could have collateral damage in the form of preventing groups of protestors from communicating with one another, or from helping debunk propaganda or other forms of misinformation. While many forms of sanctions have indirect impacts on civilians, digital sanctions have immediate and direct impacts that one might think should be avoided.

While some tech companies have already begun taking actions to address misinformation from Russia, whether Fedorov’s request will be granted by tech giants like Apple remains to be seen.

Taking Pleasure at the Ultimate Self-Own?

photograph of Herman Cain

Does Reddit.com’s r/HermanCainAward wrongfully celebrate COVID-19 deaths? To some, the subreddit is a brutal, yet necessary look at the toll of vaccine misinformation and the deaths that follow. To others, it is a cesspit of schadenfreude (taking pleasure in the pain of others) that has few, if any, redeeming qualities.

The description of the popular forum reads: “Nominees have made public declaration of their anti-mask, anti-vax, or COVID-hoax views, followed by admission to hospital for COVID. The Award is granted upon the nominee’s release from their Earthly shackles.”

An average post contains multiple screenshots of social media posts made by someone who expresses anti-vax views followed by screenshots of friends or family members reporting on the person’s sickness with COVID and, often, subsequent death. The victim’s social media posts are usually right-wing and often feature conspiracy theories as well as a set of common memes.

Outside of the nominations, one can find community support posts as well as “IPAs” or “Immunized to Prevent Award” posts, in which users report getting vaccinated after witnessing the horror presented in the forum. There are also “Redemption Awards” for those who change their minds about the vaccine, often as they are dying. (Last fall, the subreddit changed its rules to require that all names and faces of non-public figures be redacted.)

The Herman Cain Award is named after Herman Cain, a Black Republican who ran for president in 2012 and co-chaired “Black Voices for Trump” in the recent election cycle. Cain, who had prior health issues, opposed mask mandates and attended a Trump rally in Tulsa on June 20, 2020, where he was photographed not wearing a mask in a crowd of people not wearing masks. Shortly after, Cain tested positive for COVID and was hospitalized. Cain died from COVID six weeks later at 74 years of age.

To gain a better understanding of the rich, ethical dimensions the subreddit presents, there are a few questions we should ask: What is the narrative of HCA posts, and what feelings do these narratives engender? Do HCA posts, taken as a whole, accurately reflect the world around us?

Let’s start with the narratives. Perhaps the most obvious one is a narrative of righteous comeuppance. HCA nominees and winners have endangered not only themselves but also others, and they have reaped the consequences of their actions. This seems to be the primary lens of HCA viewers, who often make posts venting about the harms of anti-vax sentiments and actions.

This narrative tends to produce a sense of righteousness and stability, along with reassurance of one’s experience of the world and the moral responsibility that nominees bear. This sentiment acts as a counter to gaslighting resulting from widespread denial of the reality of the pandemic, perhaps expressed by close friends and family.

The second narrative lens appropriate for HCA content is tragedy. This is not necessarily distinct from the first lens, but it emphasizes more strongly the unnecessary suffering caused by the pandemic and our collective response to it. This lens, perhaps more than the first, encourages us to see HCA nominees as persons whose lives have value.

Pity might be too much to expect, given that the nominees are facing the consequences of their own actions, but the tragic reading does produce genuine horror at the suffering that could have been prevented. At best, this horror keeps us alive to the value of the lives lost. At worst, it devolves into a numbed-out nihilism, as we can no longer bear the burden of moral harms witnessed. It’s very easy to doom-scroll through r/HCA posts and lose hope at the possibility of change.

The third narrative is less noble than the first two. This is the narrative of the self-own — with motives that are tribal and petty, wishing ill upon those who purportedly make the pandemic worse. It is a narrative we might easily slip into from the first. This variant cares less about what is fair or appropriate and more about being right or superior. We might be especially worried about this, as the subreddit feeds off of other polarized dynamics that arise from tribal divides on the left/right spectrum. This, I believe, is the narrative that has primarily concerned those who have written against r/HermanCainAward, contending that it produces schadenfreude.

But what is troubling here is not merely pleasure in the pain of others but something stronger: pleasure in the death of others (and if not death, then extreme physical distress). Is it ever permissible to take pleasure in the death or pain of others? It seems acceptable to take comfort in knowing someone can no longer do any harm, but a preventable death is a bad thing that we should never see as good.

If we take pleasure in the deaths of others, we must either take up some view on which their death is deserved and proportional to their crimes or else discount the value of that person’s life. Neither is an attractive option. Even if nominees have caused the deaths of others by spreading the virus, it is a strong view to claim that their own deaths are deserved because of their actions. And assuming that their deaths were deserved, it still might seem unsavory to take pleasure in their punishment. But the predominant kind of gratification appears to fall into the category of feeling somehow superior to those who are dying. Self-satisfaction at the downfall of others is rather ugly.

These critiques do not rule out righteous anger or the recognition that r/HCA nominees have flouted moral requirements. But they do require that we not reduce them to faceless, nameless monsters that lose their humanity when relegated to a series of memes. The Reddit.com rule changes actively made this aspect worse, even if they helped to prevent doxing.

Does r/HCA currently represent the reality of vaccine denial? The answer seems to be no. The posts that receive attention on r/HCA are for those who are hospitalized and sometimes die from the virus, but there are other unvaccinated individuals who have relatively mundane experiences of the virus. Yes, the unvaccinated are significantly more likely to die, but r/HCA displays the same kind of data skewing as the programming on The Weather Channel — the most extreme cases are given the most attention.

Is there some version of r/HCA that could preserve its prosocial functions and avoid its morally problematic elements? Perhaps, but it would look drastically different from the current subreddit. First, the subreddit would need to include more representative individual stories that capture the variety of experiences of those living through a pandemic. Second, the people featured would need to be more humanized, with more details about their lives included beyond their online, meme-sharing activities. Third, the community should be reworked so it is not constructed in an us-vs.-them dichotomy, where pro-vaxxers are unequivocally the good guys and anti-vaxxers are unequivocally the bad guys.

Would the subreddit be as popular if it were reconstructed in that way? Probably not. But we might start to see each other as human again.

The Ethical and Epistemic Consequences of Hiding YouTube Dislikes

photograph of computer screen displaying YouTube icon

YouTube recently announced a major change on their platform: while the “like” and “dislike” buttons would remain, viewers would only be able to see how many likes a video had, with the total number of dislikes being viewable only by the creator. The motivation for the change is explained in a video released by YouTube:

Apparently, groups of viewers are targeting a video’s dislike button to drive up the count. Turning it into something like a game with a visible scoreboard. And it’s usually just because they don’t like the creator or what they stand for. That’s a big problem when half of YouTube’s mission is to give everyone a voice.

YouTube thus seems to be trying to protect its creators from certain kinds of harms: not only can it be demoralizing to see that a lot of people have disliked your video, but it can also be particularly distressing if those dislikes have resulted from targeted discrimination.

Some, however, have questioned YouTube’s motives. One potential motive, addressed in the video, is that YouTube is removing the public dislike count in response to some of their own videos being overwhelmingly disliked (namely, the “YouTube Rewind” videos and, ironically, the video announcing the change itself). Others have proposed that the move aims to increase viewership: after all, videos with many more dislikes than likes are probably going to be viewed less often, which means fewer clicks on the platform. Some creators have even posited that the move was made predominantly to protect large corporations, as opposed to small creators: many of the most disliked videos belong to corporations, and since YouTube has an interest in maintaining a good relationship with them, they would also have an interest in restricting people’s ability to see how disliked their content is.

Let’s say, however, that YouTube’s motivations are pure, and that they really are primarily intending to prevent harms by removing the public dislike count on videos. A second criticism has come in the form of the loss of informational value: the number of dislikes on a video can potentially provide the viewer with information about whether the information contained in a video is accurate. The dislike count is, of course, far from a perfect indicator of video quality, because one can dislike for reasons that don’t have to do with the information it contains: again, in instances in which there have been targeted efforts to dislike a video, dislikes won’t tell you whether it’s really a good video or not. On the other hand, there do seem to be many cases in which looking at the dislike count can let you know if you should stay away: videos that are clickbait, misleading, or generally poor quality can often quickly and easily be identified by an unfavorable ratio of likes to dislikes.

A worry, then, is that without this information, one may be more likely to not only waste one’s time by watching low-quality or inaccurate videos, but also that one may be more likely to be exposed to misinformation. For instance, consider the class of clickbait videos prevalent on YouTube, one in which people will make impressive-looking crafts or food through a series of improbable steps. Seeing that a video of this type has received a lot of dislikes helps the viewer contextualize it as something that’s perhaps just for entertainment value, and should not be taken seriously.

Should YouTube continue to hide dislike counts? In addressing this question, we are perhaps facing a conflict in different kinds of values: on the one hand, you have the moral value of protecting small or marginalized creators from targeted dislike campaigns; on the other hand, you have the epistemic disvalue of removing potentially useful information that can help viewers avoid believing misleading information (as well as the practical value of saving people the time and effort of watching unhelpful videos). It can be difficult to try to balance different values: in the case of the removal of public dislike counts, the question becomes whether the moral benefit is strong enough to outweigh the epistemic detriment.

One might think that the epistemic detriments are not, in fact, too significant. In the video released by YouTube, this issue is addressed, if only very briefly: referring to an experiment conducted earlier this year in which public dislike counts were briefly removed from the platform, the spokesperson states that they had considered how dislikes give viewers “a sense of a video's worth.” He then states that,

[W]hen the teams looked at the data across millions of viewers and videos in the experiment they didn’t see a noticeable difference in viewership regardless of whether they could see the dislike count or not. In other words, it didn’t really matter if a video had a lot of dislikes or not, they still watched.

At the end of the video, they also stated, “Honestly, I think you’re gonna get used to it pretty quickly and keep in mind other platforms don’t even have a Dislike button.”

These responses, however, are non-sequiturs: whether viewership increased or decreased does not say anything about whether people are able to judge a video’s worth without a public dislike count. Indeed, if anything it reinforces the concern that people will be more likely to consume content that is misleading or of low informational value. That other platforms do not contain dislike buttons is also irrelevant: that other social media platforms do not have a dislike button may very well just mean that it is difficult to evaluate the quality of information present on those platforms. Furthermore, users on platforms such as Twitter have found other ways to express that a given piece of information is of low value, for example by ensuring that a tweet has a high ratio of responses to likes, something that seems much less likely to be effective on a platform like YouTube.

Even if YouTube does, in fact, have the primary motivation of protecting some of its creators from certain kinds of harms, one might wonder whether there are better ways of addressing the issue, given the potential epistemic detriments.

On Journalistic Malpractice

photograph of TV camera in news studio

In 2005, then-CNN anchor Lou Dobbs reported that the U.S. had suffered over 7,000 cases of leprosy in the previous three years and attributed this to an “invasion of illegal immigrants.” Actually, the U.S. had seen roughly that many leprosy cases over the previous three decades, but Dobbs stubbornly refused to issue a retraction, instead insisting that “If we reported it, it’s a fact.”

In 2020, then-Fox-News anchor Lou Dobbs reported that the results of the election were “eerily reminiscent of what happened with Smartmatic software electronically changing votes in the 2013 presidential election in Venezuela.” Dobbs repeatedly raised questions and amplified conspiracy theories about Donald Trump’s loss, granting guests like Rudy Giuliani considerable airtime to spread misinformation about electoral security.

It’s generally uncontroversial to think that “fake news” is epistemically problematic (insofar as it spreads misinformation) and that it can have serious political consequences (when it deceives citizens and provokes them to act irrationally). Preventing these issues is complicated: any direct governmental regulation of journalists or news agencies, for example, threatens to run afoul of the First Amendment (a fact which has prompted some pundits to suggest rethinking what “free speech” should look like in an “age of disinformation”). To some, technology offers a potential solution as cataloging systems powered by artificial intelligence aim to automate fact-checking practices; to others, such hopes are ill-founded dreams that substitute imaginary technology for individuals’ personal responsibility to develop skills in media literacy.

But would any of these approaches have been able to prevent Lou Dobbs from spreading misinformation in either of the cases mentioned above? Even if a computer program would have tagged the 2005 leprosy story as “inaccurate,” users skeptical of that program itself could easily ignore its recommendations and continue to share the story. Even if some subset of users choose to think critically about Lou Dobbs’ 2020 election claims, those who don’t will continue to spread his conjectures. Forcibly removing Dobbs from the air might seem temporarily effective at stemming the flow of misinformation, but such a move — in addition to being plainly unconstitutional — would likely cause a counter-productive scandal that would only end up granting him even more attention.

Instead, rather than looking externally for ways to stem the tide of fake news and its problems, we might consider solutions internal to the journalistic profession: that is, if we consider journalism as a practice akin to medicine or law, with professional norms dictating how its practitioners ought to behave (even apart from any regulation from the government or society-at-large), then we can criticize “bad journalists” simply for being bad journalists. Questions about the epistemic or political consequences of bad journalism are important, but they are secondary to this prior question of professional practice and etiquette.

This is hardly a controversial or innovative claim: although there is no single professional oath that journalists must swear (along the lines of those taken by physicians or lawyers), it is common for journalism schools and employers to promote codes of “journalistic ethics” describing standards for the profession. For example, the Code of Ethics for the Society of Professional Journalists is centered on the principles of accuracy, fairness, harm-minimization, independence, and accountability; the Journalism Code of Practice published by the Fourth Estate (a non-profit journalism watchdog group) is founded on the following three pillars:

  1. reporting the truth,
  2. ensuring transparency, and
  3. serving the community.

So, consider Dobbs’ actions in light of those three points: insofar as his 2005 leprosy story was false, it violates pillar one; because his 2020 election story (repeatedly) sowed dissension among the American public, it fails to abide by pillar three (notably, because it was filled with misinformation, as poignantly demonstrated by the defamation lawsuit Dobbs is currently facing). Even before we consider the socio-epistemic or political consequences of Dobbs’ reporting, these considerations allow us to criticize him simply as a reporter who failed to live up to the standards of his profession.

Philosophically, such an approach highlights the difference between accounts aimed at cultivating a virtuous disposition and those that take more calculative approaches to moral theorizing (like consequentialism or deontology). Whereas the latter are concerned with a person's actions (insofar as those actions produce consequences or align with the moral law), the former simply focuses on a person's overall character. Rather than quibbling over whether or not a particular choice is good or bad (and then, perhaps, wondering how to police its expression or mitigate its effects), a virtue theorist will look to how a choice reflects on the holistic picture of an agent's personality and identity to make ethical judgments about them as a person. As the famous virtue theorist Aristotle said, “one swallow does not make a summer, nor does one day; and so too one day, or a short time, does not make a man blessed and happy.”

On this view, being “blessed and happy” as a journalist might seem difficult — that is to say, being a good journalist is not an easy thing to be. But Aristotle would likely point out that, whether we like the sound of it or not, this actually seems sensible: it is easy to try to accomplish many things, but actually living a life of virtue — actually being a good person — is a relatively rare feat (hence his voluminous writings on trying to make sense of what virtue is and how to cultivate it in our lives). Professionally speaking, this view underlines the gravity of the journalistic profession: just as being a doctor or a lawyer amounts to shouldering a significant responsibility (for preserving lives and justice, respectively), to become a reporter is to take on the burden of preserving the truth as it spreads throughout our communities. Failing in this responsibility is more significant than failing to perform some other jobs: it amounts to a form of malpractice with serious ethical ramifications, not only for those who depend on the practitioner, but for the practitioner themselves as well.

Will the Real Anthony Bourdain Please Stand Up?

headshot of Anthony Bourdain

Released earlier this month, Roadrunner: A Film About Anthony Bourdain (hereafter referred to as Roadrunner) documents the life of the globetrotting gastronome and author. Rocketing to fame in the 2000s thanks to his memoir Kitchen Confidential: Adventures in the Culinary Underbelly and subsequent appearances on series such as Top Chef and No Reservations, Bourdain was (in)famous for his raw, personable, and darkly funny outlook. Through his remarkable show Anthony Bourdain: Parts Unknown, the chef did more than introduce viewers to fascinating, delicious, and occasionally stomach-churning meals from around the globe. He used his gastronomic knowledge to connect with others. He reminded viewers of our common humanity through genuine engagement, curiosity, and passion for the people he met and the cultures in which he fully immersed himself. Bourdain tragically died in 2018 while filming Parts Unknown's twelfth season. Nevertheless, he still garners admiration for his brutal honesty, inquisitiveness regarding the culinary arts, and eagerness to know people, cultures, and himself better.

To craft Roadrunner's narrative, director Morgan Neville draws from thousands of hours of video and audio footage of Bourdain. As a result, Bourdain's distinctive accent and stylistic lashings of profanity can be heard throughout the movie as both dialogue and voice-over. It is the latter of these – precisely three voice-over lines amounting to roughly 45 seconds – that is of particular interest. This is because the audio for these three lines is not drawn from pre-existing footage. An AI-generated version of Bourdain's voice speaks them. In other words, Bourdain never uttered these lines. Instead, he is being mimicked via artificial means.

It’s unclear which three lines these are, although Neville has confirmed one of them, regarding Bourdain’s contemplation on success, appears in the film’s trailer. However, what is clear is that Neville’s use of deepfakes to give Bourdain’s written words life should give us pause for multiple reasons, three of which we’ll touch on here.

Firstly, one cannot escape the feeling of unease regarding the replication and animation of the likeness of individuals who have died, especially when that likeness is so realistic as to be passable. Whether that is using Audrey Hepburn’s image to sell chocolate, generating a hologram of Tupac Shakur to perform onstage, or indeed, having a Bourdain sound-alike read his emails, the idea that we have less control over our likeness, our speech, and actions in death than we did in life feels ghoulish. It’s common to think that the dead should be left in peace, and it could be argued that this use of technology to replicate the deceased’s voice, face, body, or all of the above somehow disturbs that peace in an unseemly and unethical manner.

However, while such a stance may seem intuitive, we don’t often think in these sorts of terms for other artifacts. We typically have no qualms about giving voice to texts written by people who died hundreds or even thousands of years ago. After all, the vast majority of biographies and biographical movies feature dead people. There is very little concern about the representation of those persons on-screen or on the page simply because they are dead. We may have concerns about how they are being represented or whether that representation is faithful (more on these in a bit). But the mere fact that they are no longer with us is typically not a barrier to their likeness being imitated by others.

Thus, while we may feel uneasy about Bourdain’s voice being a synthetic replication, it is not clear why we should have such a feeling merely because he’s deceased. Does his passing really alter the ethics of AI-facilitated vocal recreation, or are we simply injecting our squeamishness about death into a discussion where it doesn’t belong?

Secondly, even if we find no issue with the representation of the dead through AI-assisted means, we may have concerns about the honesty of such work. Or, to put it another way, the potential for deepfake-facilitated deception.

The problem of computer-generated images and their impact on social and political systems is well known. However, the use of deepfake techniques in Roadrunner represents something much more personal. The film does not attempt to destabilize governments or promote conspiracy theories. Rather, it tries to tell a story about a unique individual in his own voice. But how this is achieved feels underhanded.

Neville doesn’t make it clear in the film which parts of the audio are genuine and which are deepfaked. As a result, our faith in the trustworthiness of the entire project is potentially undermined – if the audio’s authenticity is uncertain, can we safely assume the rest of the film is trustworthy?

Indeed, the fact that the use of this technique was concealed, or at least obfuscated, until Neville was challenged about it during an interview reinforces such skepticism. That’s not to say that the rest of the film must be called into doubt. However, the nature of the product, especially as it is a documentary, requires a contract between the viewer and the filmmaker built upon honesty. We expect, rightly or wrongly, documentaries to be faithful representations of the things they’re documenting, and there’s a question of whether an AI-generated version of Bourdain’s voice is faithful or not.

Thirdly, even if we accept that the recreation of the voices of the dead is acceptable, and even if we accept that a lack of clarity about when vocal recreations are being used isn’t an issue, we may still want to ask whether what’s being conveyed is an accurate representation of Bourdain’s views and personality. In essence, would Bourdain have said these things in this way?

You may think this isn’t a particular issue for Roadrunner, as the AI-generated voice-over isn’t speaking sentences written by Neville; it speaks text which Bourdain himself wrote. For example, the line regarding success featured in the film’s trailer was taken from emails written by Bourdain. Thus, you may conclude there isn’t much of a problem here because Neville simply gives a voice to Bourdain’s unspoken words.

However, to take such a stance overlooks how much information – how much meaning – is derived not from the specific words we use but from how we say them. We may have the words Bourdain wrote on the page, but we have no idea how he would have delivered them. The AI algorithm in Roadrunner may be passable, and the technology will likely continue to develop to the point where distinguishing between ‘real’ voices and synthetic ones becomes all but impossible. But even a perfectly convincing re-creation of the voice would tell us little about how the lines would have been delivered.

Bourdain might have asked his friend the question about happiness in a tone that was playful, angry, melancholic, disgusted, or any of myriad other possibilities. We simply have no way of knowing, nor does Neville. By using the AI deepfake to voice Bourdain, Neville is imbuing meaning into the chef’s words – a meaning which is derived from Neville’s interpretation and the black box of the AI algorithm’s functioning.

Roadrunner is a poignant example of an increasingly ubiquitous problem – how can we trust the world around us given technology’s increasingly convincing fabrications? If we cannot be sure that the words within a documentary, words that sound like they’re being said by one of the most famous chefs of the past twenty years, are genuine, then what else are we justified in doubting? If we can’t trust our own eyes and ears, what can we trust?

What Morgellons Disease Teaches Us about Empathy


For better or for worse, COVID-19 has made conditions ripe for hypochondria. Recent studies show a growing aversion to contagion, even as critics decry what Derek Thompson calls “the theater of hygiene,” the soothing but performative (and mostly ineffectual) obsession with sanitizing every surface we touch. Most are, not unjustifiably, terrified of contracting real diseases, but for nearly two decades, a small fraction of Americans have battled an unreal condition with just as much fervor and anxiety as the contemporary hypochondriac. This affliction is known as Morgellons, and it provides a fascinating study in the limits of empathy, epistemology, and modern medical science. How do you treat an illness that does not exist, and is it even ethical to provide treatment, knowing it might entrench your patient further in their delusion?

Those who suffer from Morgellons report a nebulous cluster of symptoms, but the overarching theme is invasion. They describe (and document extensively, often obsessively) colorful fibers and flecks of crystal sprouting from their skin. Others report the sensation of insects or unidentifiable parasites crawling through their body, and some hunt for mysterious lesions only visible beneath a microscope. All of these symptoms are accompanied by extreme emotional distress, which is only exacerbated by the skepticism and even derision of medical professionals.

In 2001, stay-at-home mother Mary Leitao noticed strange growths on her toddler’s mouth. She initially turned to medical professionals for answers, but they couldn’t find anything wrong with the boy, and one eventually suggested that she might be suffering from Munchausen’s-by-proxy. She rejected this diagnosis and began trawling through historical sources for anything that resembled her son’s condition. Leitao eventually stumbled across the 17th-century English doctor and polymath Sir Thomas Browne, who offhandedly describes in a letter to a friend “that Endemial Distemper of little Children in Languedock, called the Morgellons, wherein they critically break out with harsh hairs on their Backs, which takes off the unquiet Symptoms of the Disease, and delivers them from Coughs and Convulsions.” Leitao published a book on her experiences in 2002, and others who suffered from a similar condition were brought together for the first time. This burgeoning community found a home in online forums and chat rooms. In 2006, the Charles E. Holman Foundation, which describes itself as a “grassroots activist organization that supports research, education, diagnosis, and treatment of Morgellons disease,” began hosting in-person conferences for Morgies, as some who suffer from Morgellons affectionately call themselves. Joni Mitchell is perhaps the most famous of the afflicted, but it’s difficult to say exactly how many people have this condition.

No peer-reviewed study has been able to conclusively prove the disease is real. When fibers are analyzed, they’re found to be from sweaters and t-shirts. A brief 2015 essay on the treatment of delusional parasitosis published in the British Medical Journal notes that Morgellons usually appears at the nexus between mental illness, substance abuse, and other underlying neurological disorders. But that doesn’t necessarily mean the ailment isn’t “real.” When we call a disease real, we mean that it has an identifiable biological cause, usually a parasite or bacterium, something that will show up in blood tests and X-rays. Mental illness is far more difficult to prove than a parasitic infestation, but no less real for that.

In a 2010 book on culturally-specific mental illness, Ethan Watters interviewed medical anthropologist Janet Hunter Jenkins, who explained to him that “a culture provides its members with an available repertoire of affective and behavioural responses to the human condition, including illness.” For example, Victorian women suffering from “female hysteria” exhibited symptoms like fainting, increased sexual desire, and anxiety because those symptoms indicated distress in a way that made their pain legible to culturally-legitimated medical institutions. This does not mean mental illness is a conscious performance that we can stop at any time; it’s more of a cipher-like language that the unconscious mind uses to outwardly manifest distress.

What suffering does Morgellons make manifest? We might say that the condition indicates a fear of losing bodily autonomy, or a perceived porous boundary between self and other. Those who experience substance abuse often feel like their body is not their own, which further solidifies the link between Morgellons and addiction. Of course, one can interpret these fibers and crystals to death, and this kind of analysis can only take us so far; it may not be helpful to those actually suffering. Regardless of what they mean, the emergence of strange foreign objects from the skin is often experienced as a relief. In her deeply empathetic essay on Morgellons, writer Leslie Jamison explains that in Sir Thomas Browne’s account, outward signs of Morgellons were a boon to the afflicted. “Physical symptoms,” Jamison says, “can offer their own form of relief—they make suffering visible.” Morgellons provides physical proof that something is wrong without forcing the afflicted to view themselves as mentally ill, which is perhaps why some cling so tenaciously to the label.

Medical literature has attempted to grapple with this deeply-rooted sense of identification. The 2015 essay from the British Medical Journal recommends recruiting the patient’s friends and family to create a treatment plan. It also advises doctors not to validate or completely dispel their patient’s delusion, and provides brief scripts that accomplish that end. In short, they must “acknowledge that the patient has the right to have a different opinion to you, but also that he or she shall acknowledge that you have the same right.” This essay makes evident the difficulties doctors face when they encounter Morgellons, but its emphasis on empathy is important to highlight.

In many ways, the story of Morgellons runs parallel to the rise of the anti-vaccination movement. Both groups were spearheaded by mothers with a deep distrust of medical professionals, both have fostered a sense of community and shared identity amongst the afflicted, and both legitimate themselves through faux-scientific conferences. The issue of bodily autonomy is at the heart of each movement, as is an epistemic challenge to medical science. And of course, both movements have attracted charlatans and snake-oil salesmen looking to make a quick buck off expensive magnetic bracelets and other high-tech panaceas. While the anti-vaxx movement is by far the more visible and dangerous of the two, both test the limits of our empathy. We can acknowledge that people (especially from minority communities, who have historically been mistreated by the medical establishment) have good reason to mistrust doctors, and try to validate their pain while also embracing medical science. Ultimately, the story of Morgellons may provide a valuable roadmap for doctors attempting to combat vaccine misinformation.

As Jamison says, Morgellons disease forces us to ask “what kinds of reality are considered prerequisites for compassion. It’s about this strange sympathetic limbo: Is it wrong to speak of empathy when you trust the fact of suffering but not the source?” These are worthwhile questions for those within and without the medical profession, as we all inevitably bump up against other realities that differ from our own.

Facebook Groups and Responsibility


After the Capitol riot in January, many looked to the role that social media played in the organization of the event. A good amount of blame has been directed at Facebook groups: such groups have often been the target of those looking to spread misinformation, as there is little oversight within them. Furthermore, if set to “private,” these groups run an especially high risk of becoming echo chambers, as there is much less opportunity for information to flow freely within them. The algorithms that Facebook uses to populate your feed were also part of the problem: more popular groups are more likely to be recommended to others, which led to some of the more pernicious groups gaining a much broader reach than they would have otherwise. As noted recently in the Wall Street Journal, while it was not long ago that Facebook saw groups as the heart of the platform, abuses of the feature have forced the company to make some significant changes to how they are run.

The spread of misinformation in Facebook groups is a complex and serious problem. Some proposals have been made to try to ameliorate it: Facebook itself implemented a new policy in which groups that were the biggest troublemakers – civics groups and health groups – would not be promoted during the first three weeks of their existence. Others have called for more aggressive measures. For instance, a recent article in Wired suggested that:

“To mitigate these problems, Facebook should radically increase transparency around the ownership, management, and membership of groups. Yes, privacy was the point, but users need the tools to understand the provenance of the information they consume.”

A worry with Facebook groups, as with a lot of online communication generally, is that it can be difficult to tell what the source of information is, as one might post information anonymously or under the guise of a username. Perhaps with more information about who was in charge of a group, then, one would be able to make a better decision as to whether to accept the information that one finds within it.

Are you part of the problem? If you’re actively infiltrating groups with the intent of spreading misinformation, or building bot armies to game Facebook’s recommendation system, then the answer is clearly yes. I’m guessing that you, gentle reader, don’t fall into that category. But perhaps you are a member of a group in which you’ve seen misinformation swirling about, even though you yourself didn’t post it. What is the extent of your responsibility if you’re part of a group that spreads misinformation?

Here’s one answer: you are not responsible at all. After all, if you didn’t post it, then you’re not responsible for what it says, or for whether anyone else believes it. For example, let’s say you’re interested in local healthy food options, and join the Healthy Food News Facebook group (this is not a real group, as far as I know). You might then come across some helpful tips and recipes, but also may come across people sharing their views that new COVID-19 vaccines contain dangerous chemicals that mutate your DNA (they don’t). This might not be interesting to you, and you might think that it’s bunk, but you didn’t post it, so it’s not your problem.

This is a tempting answer, but I think it’s not quite right. The reason has to do with how Facebook groups work, and how people are inclined to find information plausible online. As noted above, sites like Facebook employ various algorithms to determine which information to recommend to their users. A big factor in such suggestions is how popular a topic or group is: the more engagement a post gets, the more likely it is to show up in your news feed, and the more popular a group is, the more likely it is to be recommended to others. What this means is that mere membership in such a group will contribute to that group’s popularity, and thus potentially to the spread of the misinformation it contains.

Small actions within such a group can also have potentially much bigger effects. For instance, in many cases we put little thought into “liking” or reacting positively to a post: perhaps we read it quickly and it coheres with our worldview, so we click a thumbs-up, and don’t really give it much thought afterwards. From our point of view, liking a post does not mean that we wholeheartedly believe it, and it seems that there is a big difference between liking something and posting it yourself. However, these kinds of engagements influence the extent to which that post will be seen by others, and so if you’re not liking in a conscientious way, you may end up contributing to the spread of bad information.
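To make this mechanism concrete, here is a minimal sketch of an engagement-weighted scoring function. The function and its weights are invented purely for illustration; Facebook’s actual ranking system is far more complex and not public. The point is only that membership and casual likes both nudge a group’s visibility upward.

```python
# Hypothetical illustration only: a toy score for deciding which groups
# to recommend. The weights are made up for this sketch; real ranking
# systems use many more signals than these.

def group_score(members: int, likes: int, comments: int, shares: int) -> float:
    """Score a group for recommendation from its size and engagement."""
    engagement = 1.0 * likes + 2.0 * comments + 3.0 * shares
    return 0.1 * members + engagement

# One extra member and one extra casual "like" raise the score, however
# slightly, making the group a bit more likely to be shown to others.
before = group_score(members=10_000, likes=5_000, comments=1_200, shares=800)
after = group_score(members=10_001, likes=5_001, comments=1_200, shares=800)
print(before, after)  # 10800.0 10801.1
```

On any scheme of this general shape, the aggregate effect of many small, unreflective actions is precisely the kind of amplification described above.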

What does this say about your responsibilities as a member of a Facebook group? There are no doubt many such groups that are completely innocuous, where people do, in fact, only share helpful recipes or perhaps even discuss political issues in a calm and reasoned way. So it’s not as though you necessarily have an obligation to quit all of your Facebook groups, or to get off the platform altogether. However, otherwise innocent actions like clicking “like” on a post can have much worse effects in groups in which misinformation is shared, and being a member of such a group at all can contribute to its popularity and thus to the extent to which it is suggested to others. Given this, if you find yourself a member of such a group, you should leave it.

The Cost of Free Speech


As 2021 got underway, and the United States was dealing with the fallout from the January 6 insurrection, a much smaller-scale political controversy was blowing through Australia’s sweltering summer. The prime minister was on holiday, his deputy Michael McCormack was in charge, and Craig Kelly, an outspoken member of the governing party who is a notorious climate skeptic, alternative COVID-19 treatment theorist, and vaccine doubter, had hold of the mic and was getting plenty of attention proffering conspiracy-style views on his social media accounts.

Australia has done exceptionally well in keeping the global coronavirus pandemic at bay, with strict lockdowns in response to outbreaks, effective contact tracing, and strict quarantine rules for all international arrivals. The country of 25 million has recorded fewer than 1,000 deaths since the pandemic hit last March. Though the community is generally willing to comply with expert public health advice, there has been some dissent from conspiracy theorists and anti-vaxxers.

As Australia began preparing to roll out its COVID-19 vaccination program, Craig Kelly, that zealous critic of scientific evidence, was hard at work on his personal Facebook page posting in favor of unproven treatments and against vaccines and other public health measures, such as the wearing of masks.

Kelly has a large social media following, and public health officials in Australia, including the Australian Medical Association and the chief medical officer, pushed back hard, expressing concern that his views pose a danger to public health, and calling on senior government figures – the acting Prime Minister Michael McCormack and the Health Minister Greg Hunt – to condemn those views and rebuke Kelly. But no rebuke came. Instead, McCormack had this to say:

“Facts are sometimes contentious and what you might think is right – somebody else might think is completely untrue – that is part of living in a democratic country… I don’t think we should have that sort of censorship in our society.”

Notice how familiar this type of response is becoming: when politicians or pundits are called out for expressing views that are misleading, offensive or wrong, there is a tendency to claim a free speech defense. Notice too that McCormack makes specific reference here to what living in a democratic country involves. It is of course true that democratic legitimacy is one of the functions of free speech, but does free speech include freedom to lie, confabulate, or spread misinformation? And how do these things affect democracy? Can we untangle freedom of speech, as a fundamentally necessary democratic principle, from demagoguery?

Let’s look in a bit more detail at McCormack’s statement, which is problematic for a number of reasons, chief among them its invocation of freedom of speech in defense of views which ought to be rejected because they are wrong, harmful, and generally indefensible. This is a sly move, given the high importance citizens of free, democratic countries place on the right to free speech. It is also a tactic which often has little to do with defending this important right and more to do with evading a subject or shutting down an argument – contra free speech.

As a point of logic, rebuking Kelly for proffering dangerous falsehoods is not censorship. If McCormack’s assertion is that Kelly is free to make these claims, then, on that argument, McCormack is equally free to condemn them.

Furthermore, McCormack’s assertion that facts are contentious appears to imply an ‘everyone is entitled to their own opinion’ kind of defense, which bears a strong resemblance to the free speech defense. But it simply isn’t right. In matters of fact, for example matters of science, as opposed to matters of taste, you are not entitled to your opinion; you are entitled to what you can make a case for, and what you can support through reasoned argument, true premises, and solid inferences. You are not entitled to an opinion that is demonstrably false. Both logic and good faith hold you to a standard which requires you to recognize when a belief is indefensible. Democratic legitimacy depends as much on that as it does on freedom of speech.

Following McCormack’s comments, as public and medical professional pushback grew, no senior member of Kelly’s government – not the Federal Health Minister, nor the Prime Minister himself (now back from his holiday) – would bring Kelly into line. Finally, it was Facebook’s moderators who intervened, and Kelly was required to remove one post proffering COVID-19 misinformation and conspiracy-style rhetoric. Kelly did so, saying: “I have since removed the post… under protest.” He then gave this ominous pronouncement: “We have entered a very dark time in human history when scientific debate and freedom of speech is being suppressed.”

Perhaps Kelly is right that we have ‘entered a dark time in human history’ (if the present can be said to be history) – but not for the reasons he thinks. When we see the right of free speech being used again and again to evade responsibility and excuse lies and falsehoods, it is time to take stock, and look closely at what is at stake in our fundamental beliefs about freedom, democracy, and truth.

One reason this use of the free speech defense is so pernicious is that most people living in open, democratic societies will agree on the importance of free speech and hold it in high regard. This invocation of freedom of speech seems to trade on the hearer not noticing that something they value highly is being used to degrade other things of value.

International law recognizes and protects the right to freedom of speech, enshrined in Article 19 of the UN’s Universal Declaration of Human Rights. The antithesis of freedom of speech is censorship. Censorship is the intolerance of opposing views. This happens, politically, where the establishment fears or dislikes opposition, or where governments want to suppress information about their activities.

Democratic legitimacy is one of the most important functions of free speech. And free speech is one of the most important mechanisms of democratic legitimacy. Real democratic engagement requires the free exchange of ideas, where forms of dissent are not censored, and where differing or opposing views can be aired, discussed, and considered. In this way the citizenry can be engaged, well-informed, and part of the political process.

Even though the argument from democratic legitimacy holds free speech in high regard, very few people take an absolutist position on freedom of speech. Free speech does not imply a free-for-all. Therefore, protection of free speech always involves judgments on when and why speech might justifiably be regulated or curtailed. The answer to the question of what kind of speech causes harm and is justifiably restricted hinges on the extent to which freedom of speech is valued in itself. In liberal societies its intrinsic value is usually held to be high. If freedom of speech is curtailed, its limits will be decided around the protection of other, countervailing values, like human dignity and equality. In this sense there is a (sometimes unacknowledged) weighing-up of the value of freedom of speech relative to other values. If freedom of speech is, in itself, very highly valued, then other values may be subordinated. It is upon this scale that the right to freedom of speech is, for some, synonymous with the right to give offense.

A quick internet search of “free speech quotes” is instructive here, serving up such ideas as: “free speech is meaningless unless it tolerates the speech that we hate,” from Henry Hyde; “Free speech is meant to protect unpopular speech. Popular speech, by definition, needs no protection,” from Neil Boortz; and “Freedom of Speech includes the freedom to offend,” courtesy of Brad Thor. Add to these offerings the infamous contribution of Senator George Brandis, Australia’s erstwhile attorney general, who, in 2014, while making an argument for winding back Australia’s anti-racial discrimination laws, put it to the parliament that “People do have a right to be bigots, you know.”

All this illustrates which values go down in ranking when free speech goes up. If we take freedom of speech to protect our right to be bigots, that points to something we value. That is, it suggests we value our right to be bigots more than we value equality or human dignity; that we would prefer to be allowed to vilify than to protect people from vilification.

Perhaps we will decide that we do have a right, by virtue of the right to freedom of speech, to be bigots. If that is so, it certainly sheds light on the ethical problems that can arise from constructing our basic moral bearings around defending our rights at the expense of other ways of thinking about what is important in our moral lives. Perhaps we might orient our ethical thinking more towards questions about what we owe one another morally rather than what we can lay claim to. We might, for example, ask ourselves whether, rather than uncritically digging in about our rights, it would be better to reflect on our values in this space.

It comes back to the question of why freedom of speech is so important. If free speech, according to the democratic legitimacy argument, is so important because it allows us to better hold power to account, allows citizens to make informed decisions and engage in reasoned, open debate, then it does not make sense to defend or promote speech which itself undermines these goals — speech like Craig Kelly’s COVID-19 misinformation posts, or any picking from the multiverse of conspiracy theories currently working their way into the marrow of certain sections of society. Americans have recently experienced the very hard consequences of lies and misinformation on democratic society in the twin crises of the January 6 insurrection and the runaway COVID-19 pandemic.

In conclusion, we don’t seem to be paying close enough attention to the way that freedom of speech is being used to justify lies and to push back against demands for accountability from the powerful and privileged. If we can untangle freedom of speech as a fundamentally necessary democratic principle from demagoguery, we must do so by directing more critical attention to how it is invoked and what is at stake when freedom of speech is taken to mean freedom to lie or to further a pernicious ideology. Yes, freedom of speech is fundamentally important, and we should protect it because of its central role in the democratic process. At the same time, truth matters and lies have real consequences. When we stand up for freedom of speech, we should be thinking broadly in terms of why it is valuable, what role it serves, and what our responsibilities are in respect of each other. A broader discussion about our values will serve us better than a narrow focus on rights, no matter what they cost us.

Accountability, Negligence, and Bad Faith


The wheels of justice are turning. As I write this, there are a number of movements afoot — from D.C. police continuing to arrest agitators and insurrectionists on possible sedition charges to Representative Ilhan Omar drawing up articles of impeachment — designed to separate the guilty from the guiltier and assign blame in appropriate proportions. And there is a great deal of blame to go around. It starts with the president, whose inciting words, delivered just blocks away, steered the mob that breached the Capitol intending to effect its political will. But we might consider others. Those members of Congress, like Senators Josh Hawley and Ted Cruz, willing to lend the considerable credibility of their office to unsupported (debunked and repeatedly dismissed) accusations of a stolen election surely share some portion of the blame. To hold these parties to account, Representative Cori Bush is introducing legislation to investigate and potentially remove those members of Congress responsible for “inciting this domestic terror attack.” In the meantime, the calls for Senators Cruz and Hawley to resign are only growing louder.

But what are these lawmakers really guilty of? On what grounds could these public, elected officials possibly be threatened with removal from office? To hear them tell it, they were merely responding to the concerns of their constituents who remain convinced that the election was stolen, robbing them of their God-given right to be self-governing. They are then not enemies of democracy, but its last true defenders.

Never mind that people’s belief in election malfeasance is not evidence of election malfeasance (especially when that belief is the product of misinformation disseminated by the very same “defenders”); this explanation fails to appreciate the design of representative democracy. Ours is not a direct democracy; citizens are not called upon to deliver their own preferences on each individual question of policy. Instead, we elect public servants who might better represent our collective interests than any one individual could herself. The hope is that this one representative might be better positioned than the average citizen to engage in the business of governing. Rather than pursuing any and all of their constituents’ interests come what may, these lawmakers are tasked with balancing these competing interests against fealty to the republic, the Constitution, and the rule of law. In the end, these officials are people who can, and should, know better. As Senator Mitt Romney argued Wednesday, “The best way we can show respect for the voters who are upset is by telling them the truth.” And the truth is that there is no evidence that the results of the presidential election are in error: Joe Biden won. “That is the burden, and the duty, of leadership.”

Perhaps, then, these legislators were merely negligent, inadequately discharging their duties of office and ultimately unable to anticipate the outcome of things beyond their control. (Who could have predicted that paying lip service to various conspiracy theories would be enough to give them the weight of reality?) And so when words finally became deeds, the violence displayed at the Capitol was enough to make several Congressmembers reconsider their position. It was fine to continue to throw sand in the gears as a political statement, but now, faced with such obvious and violent consequences (as well as the attendant political blowback), even Senator Lindsey Graham was willing to say “enough is enough.”

But negligence is a slippery thing to pin down; it rests on a contradiction: that one can simultaneously be instrumental yet removed, responsible but unaware. Many might agree that these lawmakers’ actions betray a failure to exercise due care. These senators and representatives underestimated risk, ignored unwanted or unintended consequences, and failed to appreciate the cultural, societal, and political moment. But establishing that these members of Congress acted negligently would require demonstrating that any other reasonable person placed in their shoes would have recognized the possible danger. (And “reasonableness” has proven notoriously difficult to define.)

For these reasons, demonstrating negligence would seem a tall order, but this charge also doesn’t quite fit the deed. The true criticism of these lawmakers’ actions has to do with intention, not merely the consequence. Many of these public officials not only failed to take due care in discharging their duties of office and serving the public’s interests, but were also acting in bad faith when doing so. Theirs was not merely a dereliction of duty, but a failure borne of dishonest dealings and duplicitous intent. The move to object to the Electoral College certification, for example, was never intended to succeed. Even Senate Majority Leader Mitch McConnell was willing to condemn the cowardice and self-serving aggrandizement involved in making a “harmless protest gesture while relying on others to do the right thing.” Similarly, the vote led Senator Mitt Romney to question whether these politicians might “weigh [their] own political fortunes more heavily than [they] weigh the strength of our republic, the strength of our democracy, and the cause of freedom.”

In the end, the use made of folks’ willingness to believe — to believe in a deep-state plot and broad-daylight power grab — all for private political gain, pushes us past a simple charge of negligence. The game these politicians were playing undermines any claim to be caught unawares. The fault lies with choice, not ignorance. A calculated gamble was made — to try to gain political points, retain voter support, and fill the re-election coffers by continuing to cast doubt on the election results and build on some constituents’ wildest hopes. The problem isn’t merely with the outcome, it’s with the willingness to trust that private gain outweighs public cost. But as Senator Romney asks, “What’s the weight of personal acclaim compared to the weight of conscience?”

As it stands, there are far too many guilty parties, and not enough blame to go around.

Duties to Vaccinate, Duties to Inform


The news these days has been dominated by information about the development of a vaccine for COVID-19, something that has felt like the first really good bit of news pertaining to the pandemic since it started. While there is reason for optimism, however, it is not as though the deployment of a vaccine will end the pandemic overnight: in addition to logistical problems of production and distribution, recent research suggests that it may still be possible that vaccinated individuals could spread the disease, even if they themselves will not contract it. As such, it’s not as though we can all just throw our masks in the garbage and start going to music festivals the day the vaccines start to roll out. This is not to say that things won’t get better, but that it might take a while.

You would think that the development of a vaccine would be universally regarded as good news, and that pretty much everyone would want to get vaccinated. However, when surveyed, large portions of the US population have responded that they would be hesitant to receive a vaccine, or else would outright refuse it. These numbers have varied over the months: according to the Pew Research Center, in May 27% said they would “probably not” or “definitely not” get the vaccine; that number increased to 49% in September before going back down to 39% in November. It’s not clear whether these numbers will change as more information becomes available; one might expect them to go down, however, once people actually start receiving the vaccine and see that it is not dangerous.

Reasons for current levels of skepticism vary: while much has been made about the wildest conspiracy theories floating around Facebook – Bill Gates is trying to mind control you, or something – it seems more likely that the majority of skeptics are driven more by concerns about making the best decisions given limited information, combined perhaps with a distrust of medical experts. The question then becomes how we can best communicate scientific information to those who are skeptical. Indeed, this is a problem that we have been facing since the pandemic started: first it was information regarding the need for social distancing, then for wearing masks, and now for getting vaccinated. While at no point have we found the magic solution, it is worth considering what our roles in this process should be.

I think we have a certain obligation in this regard: beyond getting the vaccine itself, we also ought to try to inform others as best we can.

Here’s why I think this. Part of the problem in communicating information to a lot of skeptical people is that it will be difficult to find sources of information that everyone finds trustworthy. To try to address this concern, former presidents Barack Obama, George W. Bush, and Bill Clinton have stated that they would all receive the vaccine on camera to show that it is safe, with the goal of appealing to as politically diverse a population as possible. Given that a number of issues surrounding COVID-19 have become politicized, this seems like a good strategy: if those on one side of the political spectrum are less likely to trust someone from the other side, then having representatives of both sides together to present a unified message may help convince a larger audience.

(Other campaigns seem less promising: Trump, for instance, reportedly attempted to develop videos promoting the vaccine to be played on YouTube, using only celebrities who were not critical of Trump and had not supported causes he opposes, such as having voted for Obama in the past or being in favor of gay rights. The list of celebrities who met these criteria turned out to be very short.)

While trust can be affected by one’s general political position, there are additional divisions that may affect whom one deems trustworthy. This can be seen in recent polls measuring the willingness to receive the vaccines among more specific demographics. For instance, some have expressed concern that Black Americans may be particularly prone to skepticism regarding the vaccine, prompting members of various Black communities to attempt to communicate the importance of getting vaccinated. In an even more specific study, one recent poll reported that over half of New York City firefighters would refuse a vaccine. Here union leaders seem to be going in the wrong direction, stating that they would not require first responders to be vaccinated, and that they would respect the decisions of their members.

We can see, then, that while major figures like former U.S. presidents may be seen as trustworthy sources, there is also a role for less prominent individuals to convey information to skeptical individuals. Given the importance of having as many people receive the vaccine as possible, the duty to try to inform others extends, I think, to pretty much everyone: while not everyone is a community leader, one may nevertheless be considered a trustworthy source of information by one’s friends and family, and may be able to communicate such information more effectively than former presidents or celebrities, given that one may share more values with those one is close to. When it comes to the COVID vaccine, then, one’s obligations may extend beyond just getting the vaccine oneself, and may include duties to help inform others.

Ethical Considerations of Deepfakes


In a recent interview for MIT Technology Review, art activist Barnaby Francis, creator of deepfake Instagram account @bill_posters_uk, mused that deepfake is “the perfect art form for these kinds of absurdist, almost surrealist times that we’re experiencing.” Francis’ use of deepfakes to mimic celebrities and political leaders on Instagram is aimed at raising awareness about the danger of deepfakes and the fact that “there’s a lot of people getting onto the bandwagon who are not really ethically or morally bothered about who their clients are, where this may appear, and in what form.” While deepfake technology has received alarmist media attention in the past few years, Francis is correct in his assertion that there are many researchers, businesses, and academics who are pining for the development of more realistic deepfakes.

Is deepfake technology ethical? If not, what makes it wrong? And who holds the responsibility to prevent the potential harms generated by deepfakes: developers or regulators?

Deepfakes are not new. The term was first used by a Reddit user in 2017, who began using the technology to create pornographic videos. The technology soon expanded to video games as a way to create images of people within a virtual universe. But the deepfake trend then turned toward more global agendas, with fake images and videos of public figures and political leaders being distributed en masse. One altered video of Joe Biden was so convincing that even President Trump fell for it. Last year, there was a deepfake video of Mark Zuckerberg talking about how happy he was to have thousands of people’s data. At the time, Facebook maintained that deepfake videos would stay up, as they did not violate their terms of agreement. Deepfakes have only increased since then. In fact, there exists an entire YouTube playlist of deepfake videos dedicated to President Trump.

In 2020, those contributing to deepfake technology are not only individuals in the far corners of the internet. Researchers at the University of Washington have also developed deepfake algorithms, in part to better combat their spread. Deepfake technology has been used to bring art to life, to recreate the voices of historical figures, and to use celebrities’ likenesses to communicate powerful public health messages. While the dangers of deepfakes have been described by some as dystopian, the methods behind their creation have been relatively transparent and accessible.

One problem with deepfakes is that they mimic a person’s likeness without their permission. The original deepfakes, which mixed photos or videos of a person with pornography, used a person’s likeness for sexual gratification. Such use of a person’s likeness might never personally affect them, but could still be considered wrong, since they are being used as a source of pleasure and entertainment without consent. These examples might seem far-fetched, but in 2019 a now-defunct app called DeepNude sought to do exactly that. Even worse than using someone’s likeness without their knowledge is using it with the intention that it reach them and others in order to humiliate them or damage their reputation. One could see the possibility of a type of deepfake revenge porn, where scorned partners attempt to humiliate their exes by creating deepfake pornography. This issue is incredibly pressing and might be more prevalent than the other potential harms of deepfakes. One study, for example, found that 96% of existing deepfakes take the form of pornography.

Despite this current reality, much of the moral concern over deepfakes is grounded in their potential to easily spread misinformation. Criticism of deepfakes in recent years has mainly surrounded their potential for manipulating the public to achieve political ends. It is becoming increasingly easy to spread a fake video depicting a politician as clearly incompetent or as spreading a questionable message, which might erode their base of support. On a more local level, deepfakes could be used to discredit individuals. One could imagine a world in which deepfakes are used to frame someone in order to damage their reputation, or even to suggest they have committed a crime. Video and photo evidence is commonly used in our civil and criminal justice system, and the ability to manipulate videos or images of a person, undetected, arguably poses a grave danger to a justice system which relies on our sense of sight and observation to establish objective fact. Perhaps even worse than framing the innocent would be failing to convict the guilty. In fact, a recent study in the journal Crime Science found that deepfakes pose a serious crime threat when it comes to audio and video impersonation and blackmail. What if a deepfake is used to replace a bad actor with a person who does not exist? Or gives plausible deniability to someone who claims that a video or image of them has been altered?

Deepfakes are also inherently dishonest. Two of the most popular social media networks, Instagram and TikTok, rely upon visual media which could be subject to alteration by self-imposed deepfakes. Even if a person’s likeness is being manipulated with their consent, and even if doing so could have positive consequences, it still might be considered wrong due to the dishonest nature of its content. Instagram in particular has been increasingly flooded with photoshopped images, as there is an entire app market that exists solely for editing photos of oneself, usually to appear more attractive. The morality of editing one’s photos has been hotly contested amongst users and feminists alike. Deepfakes only stand to increase the amount of media that is self-edited, and the moral debates that come along with putting altered media of oneself on the internet.

Proponents of deepfakes argue that their positive potential far outweighs the negative. Deepfake technology has been used to spark engagement with the arts and culture, and even to bring historical figures back to life, both for educational and entertainment purposes. Deepfakes also hold the potential to integrate AI into our lives in a more humanizing and personal manner. Others, who are aware of the possible negative consequences of deepfakes, argue that the development and research of this technology should not be impeded, as advancing the technology also contributes to methods of spotting it. And there is some evidence backing up this argument: as the development of deepfakes progresses, so do the methods for detecting them. On this view, it is not the moral responsibility of those researching deepfake technology to stop, but rather the role of policymakers to ensure the types of harmful consequences mentioned above do not wreak havoc on the public. At the same time, proponents such as David Greene, of the Electronic Frontier Foundation, argue that too stringent limits on deepfake research and technology will “implicate the First Amendment.”

Perhaps, then, it is neither the government nor deepfake creators who are responsible for their harmful consequences, but rather the platforms which make those consequences possible. Proponents might argue that the power of deepfakes comes not from their ability to deceive one individual, but rather from the media platforms on which they are allowed to spread. In an interview with Digital Trends, the creator of Ctrl Shift Face (a popular deepfake YouTube channel) contended that “If there ever will be a harmful deepfake, Facebook is the place where it will spread.” While this shift in responsibility might be appealing, detractors might ask how practical it truly is. Even websites that have tried to regulate deepfakes are having trouble doing so. The popular pornography website PornHub has banned deepfake videos, but still cannot fully regulate them. In 2019, a deepfake video of Ariana Grande was watched 9 million times before it was taken down.

In December, the first federal regulation pertaining to deepfakes passed the House and the Senate and was signed into law by President Trump. While increased government intervention to prevent the negative consequences of deepfakes will be celebrated by some, researchers and creators will undoubtedly push back on these efforts. Deepfakes are certainly not going anywhere for now, but it remains to be seen whether the potentially responsible actors will work to ensure their consequences remain net-positive.

Causality and the Coronavirus


“Causality” is a difficult concept, yet beliefs about causes are often consequential. A troubling illustration of this is the claim, which is being widely shared on social media, that the coronavirus is not particularly lethal, as only 6% of the 190,000+ deaths attributed to the virus are “caused” by the disease.

We tend to think of causes in too-simplistic terms

Of all of the biases and limitations of human reasoning, our tendency to simplify causes is arguably one of the most fundamental. Consider the hypothetical case of a plane crash in Somalia in 2018. We might accept as plausible causes things such as the pilot’s lack of experience (say it was her first solo flight), the (old) age of the plane, the (stormy) weather, and/or Somalia’s then-status as a failed state, with poor infrastructure and, perhaps, an inadequate air traffic control system.

For most, if not all, phenomena that unfold at a human scale, a multiplicity of “causes” can be identified. This includes, for example, social stories of love and friendship and political events such as wars and contested elections.[1]

Causation in medicine

Causal explanations in medicine are similarly complex. Indeed, the CDC explicitly notes that causes of death are medical opinions. These opinions are likely to include not only an immediate cause (“final disease or condition resulting in death”), but also an underlying cause (“disease or injury that initiated the events resulting in death”), as well as other significant conditions which may or may not be judged to have contributed to the underlying cause of death.

In any given case, the opinions expressed on the death certificate might be called into question. Even though these opinions are typically based on years of clinical experience and medical study, they are limited by medical uncertainty and, like all human judgments, human fallibility.

When should COVID count as a cause?

Although the validity of any individual diagnosis might be called into question, aggregate trends are less equivocal. Consider this graph from the CDC, which identifies the number of actual deaths not attributed to COVID-19 (green), additional deaths which have been attributed to COVID-19 (blue), and the upper bound of the expected number of deaths based on historical data (orange trend line). Above the blue lines there are pluses to indicate weeks in which the total number of deaths (including COVID) exceeds the expected number by a statistically significant margin. This has been true for every week since March 28. In addition, there are pluses above the green lines indicating where the number of deaths excluding COVID was significantly greater than expected. This is true for each of the last eight weeks (ignoring correlated error, we would expect such a finding fewer than one in a million times by chance). This indicates that the number of deaths due to COVID in America has been underreported, not overreported.
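To see where a figure like “fewer than one in a million” comes from, here is a rough back-of-the-envelope calculation. The 5% per-week threshold and the independence assumption are simplifications made for illustration, not the CDC’s exact method:

```python
# Rough illustration: suppose each week's death count exceeds the upper
# bound of the expected range purely by chance with probability at most
# 0.05, and treat the eight weeks as independent (the "ignoring correlated
# error" caveat in the text).
p_single_week = 0.05
weeks = 8
p_all_eight = p_single_week ** weeks
print(p_all_eight)          # 3.90625e-11
print(p_all_eight < 1e-6)   # True: far below one in a million
```

Even under this generous threshold, eight consecutive significant weeks would be an astronomically unlikely coincidence, which is the intuition behind the claim above.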

Among the likely causes for these ‘non-COVID’ excess deaths, we can point, particularly early in the pandemic, to a lack of familiarity with, and testing for, the virus among medical professionals. As the pandemic unfolded, it is likely that additional deaths can be attributed, in part, to indirect causal relationships such as people delaying needed visits to doctors and hospitals out of fear, and to the social, psychological, and economic consequences that have accompanied COVID in America. Regardless, the bottom line is clear: without COVID-19, over two hundred thousand other Americans would still be alive today. The pandemic has illuminated, tragically, our interconnectedness and with it our responsibilities to each other. One part of this responsibility is to deprive the virus of the opportunity to spread by wearing masks and socially distancing. But this is not enough: we need to stop the spread of misinformation as well.

 

[1] Some argue that we can think of individual putative causes as “individually unnecessary” but as “jointly sufficient.” In the 2000 US Presidential Election, for example, consider the presence of Ralph Nader on the ballot, delays in counting the vote in some jurisdictions, the Monica Lewinsky scandal, and other phenomena such as the “butterfly ballot” in Palm Beach County, Florida. Each of these might have been unnecessary to lead the election to be called for G.W. Bush, but they were jointly sufficient to do so.

Parler and the Problems of a “Free Speech” Social Network


Twitter is something of a mess. It has been criticized by individuals from both ends of the political spectrum, either for not doing enough to stem the tide of misinformation and hateful content, or for doing too much and restricting what some see as their right to free expression. Recently, some of those who have chastised the platform for restricting free speech have called for a move to a different social media platform, one where opinions – particularly conservative opinions – could be expressed without fear of censorship. A Twitter alternative that has seen substantial growth recently is Parler: calling itself the “Free Speech Social Network,” it gained almost half a million users in a single week, partially because of a backlash to Twitter’s recent fact-checking of a Tweet made by Donald Trump. Although the CEO of Parler stated that he wanted the platform to be a space in which anyone on the political spectrum could participate in discussions without fear of censorship, there is no question that it has become dominated by those on the political right.

It is perhaps easy to understand the appeal of such a platform: if one is worried about censorship, or if one wants to engage with those who have divergent political opinions, one might think that a forum with fewer restrictions on what can be expressed would be beneficial for productive debate. After all, some have expressed concern about online censorship, specifically in terms of what is seen as an overreactive “cancel culture,” in which individuals are punished (some say disproportionately) for expressing their opinions. For example, consider the following from a recent letter published in Harper’s Magazine, titled “A Letter on Justice and Open Debate”:

“The restriction of debate, whether by a repressive government or an intolerant society, invariably hurts those who lack power and makes everyone less capable of democratic participation. The way to defeat bad ideas is by exposure, argument, and persuasion, not by trying to silence or wish them away.”

So, what better way to defeat bad ideas than to provide a platform in which they can be brought out into the open, carefully considered, and argued away? Isn’t a “Free Speech Social Network” a good idea?

Not really. An assumption for the argument in favor of a platform that allows uncensored expressions of opinions is that while it may see an increase in the number of hateful or uninformed views, the benefits of having those ideas in the open to analyze and argue against will outweigh the costs. Indeed, the hope is that a lack of censorship or fact-checking will make debate more productive, and that by allowing the expression of “bad ideas” we can, in fact, “defeat” them. In reality, the platform is awash with dangerous misinformation and conspiracy theories, and while contrarian views are occasionally presented, there is little in the way of productive debate to be found.

Here’s an example. Libertarian politician Ron Paul has over 400 thousand followers on Parler, and his videos from the “Ron Paul Institute for Peace and Prosperity” receive thousands of positive votes and comments. Many of these videos have recently expressed skepticism about the dangers of coronavirus: specifically, they call into question the efficacy of tests for the virus, claim that reports of numbers of cases have been inflated or fabricated, and argue that being made to wear facemasks is a violation of personal liberties. These views fall squarely into the camp of “bad ideas.” One might hope, though, that the community would respond with good reasons and rational debate.

Instead, we get a slew of even worse misinformation. For example, here is a representative sample of some recent comments on Paul’s video titled “Should We Trust The Covid Tests?”:

“My friends husband is world renown doctor. He is getting calls from doctors all over USA and World that tell him CV-19 Numbers are being forged.”

“Nurse all over are saying they are testing the same persons over and over and just building up the numbers not counting them as the same case, but seperate cases. Am against shut down period.”

“No. Plain and simple. COVID tests are increasingly being proven to be lies. Unless you believe the worthless MSM liberal sheep lie pushers.”

These kinds of comments are prevalent and, as can be seen, they are not defeating bad ideas but rather reinforcing them.

Herein lies the problem: productive debate will not just magically happen once we unleash all the bad ideas into a forum. While some may be examined and defeated, others will receive support and become stronger for having been given the room to grow. Without any kind of restriction on the expression of misleading and false information, we risk emboldening those looking to spread politically motivated misinformation and conspiracy theories. The result is that these bad ideas become more difficult to defeat, not easier.

If one is concerned that potential censorship on social media networks like Twitter will stifle debate, what Parler has shown so far is that a “free speech” social network is good for little other than expressing views that one would be banned for expressing elsewhere. Contrary to Parler’s stated motivations and the concerns expressed in the Harper’s letter, mere exposure is not a panacea for the problem of the bad ideas being expressed on the internet.

Infodemics and Good Epistemic Hygiene


There has been a tremendous amount of news lately about the outbreak and spread of COVID-19, better known as the coronavirus. And while having access to up-to-date information about a global health crisis can certainly be a good thing, there have been worries that the amount of information out there has become something of a problem itself. So much so that the World Health Organization (WHO) has stated that it is concerned the epidemic has led to an “infodemic”: the worry is that with so much information it will be difficult for people both to process all of it and to determine what they should trust and what they should ignore.

With so much information swirling about, there is already a Wikipedia page dedicated to the various rumors and conspiracy theories surrounding the virus. For instance, some of the more popular conspiracy theories state that the virus is a human-made biological weapon (it isn’t), and that there are vaccines already available that are just being kept from the public (there aren’t). It shouldn’t be surprising that social media is the most fertile breeding ground for misinformation, with large Facebook groups spreading not only falsehoods but supposed miracle cures, some of which are extremely dangerous.

In response to these problems, sites like Facebook, Google, and Twitter have been urged to take steps to help cull the infodemic by employing fact-checking services, providing free advertising for the WHO, and trying to make sure that reputable sources dominate the results when people search for information about the coronavirus online.

While all of this is of course a good thing, what should the individual person do when faced with such an infodemic? It is always a good idea to be vigilant when acquiring information online, especially when that information is coming from social media. But perhaps just as we should engage in more conscientious physical hygiene, we should also engage in a more substantial epistemic hygiene. After all, the spreading of rumors and misinformation can itself lead to harms, so it seems that we should make extra sure that we aren’t forming and spreading beliefs in a way that could be damaging to others.

What might good epistemic hygiene look like in the face of an infodemic? Perhaps we can draw some parallels from the suggested practices for good physical hygiene from the WHO. Some of the main suggestions from the WHO include:

  • Washing hands frequently
  • Maintaining social distance
  • Practicing good respiratory hygiene (like covering your mouth when you cough or sneeze)
  • Staying informed

These are all good ways to minimize the chances of contracting or spreading a disease. What parallels could we draw when it comes to the infodemic? While the physical act of hand-washing is unlikely to stop the spread of misinformation, a parallel when it comes to forming beliefs would be to be extra careful about which sources we’re getting our information from, and to critically reflect on our beliefs if we do get information from a less than trustworthy source. Just as taking a little extra time to make sure your hands are clean can help control the spread of disease, taking some extra time to critically reflect can help control the spread of misinformation.

Maintaining a kind of social distance might be a good idea as well: as we saw above, most misinformation about the epidemic spreads through social media. If we are prone to looking up the latest gossip and rumors, it might be best to stay out of those Facebook groups altogether. Similarly, just as it’s a good idea to protect others by coughing or sneezing into your arm, it’s a good idea to keep misinformed ideas to yourself. If you feel the urge to pass along gossip or information acquired from a less-than-reputable source, the best thing to do, instead of spreading it further by posting or commenting on social media, is to stop it there.

Finally, the WHO does suggest that it is a good idea to stay informed. Again, we have seen that there are better and worse ways of doing this. Staying informed does not mean acquiring information from just anywhere, nor does it mean getting as much information as is humanly possible. In the light of an infodemic one needs to be that much more vigilant and responsible when it comes to the potential spread of misinformation.

Twitter Bots and Trust


Twitter has once again been in the news lately, which you know can’t be a good thing. The platform recently made two sets of headlines: in the first, news broke that a number of Twitter accounts were making identical tweets in support of Mike Bloomberg and his presidential campaign, and in the second, reports came out of a significant number of bots making tweets denying the reality of human-made climate change.

While these incidents differ in a number of ways, they both illustrate one of the biggest problems with Twitter: given that we might not know anything about who is behind a tweet – whether it is a real person, a paid shill, or a bot – it is difficult to know who or what to trust. This is especially problematic when it comes to the kind of disinformation tweeted out by bots about issues like climate change, where it can be difficult to tell not only whether a tweet comes from a trustworthy source, but also whether its content makes any sense.

Here’s the worry: let’s say that I see a tweet declaring that “anthropogenic climate change will result in sea levels rising 26-55 cm. in the 21st century with a 67% confidence interval.” Not being a scientist myself, I don’t have a good sense of whether or not this is true. Furthermore, if I were to look into the matter there’s a good chance that I wouldn’t be able to determine whether the relevant studies that were performed were good ones, whether the prediction models were accurate, etc. In other words, I don’t have much to go on when determining whether I should accept what is tweeted out at me.

This problem is an example of what epistemologists have referred to as the problem of expert testimony: if someone tells me something that I don’t know anything about, then it’s difficult for me, as a layperson, to be critical of what they’re telling me. After all, I’m not an expert, and I probably don’t have the time to go and do the research myself. Instead, I have to accept or reject the information on the basis of whether I think the person providing me with information is someone I should listen to. One of the problems with receiving such information over Twitter, then, is that it’s very easy to prey on that trust.

Consider, for example, a tweet from a climate-change denier bot that stated “Get real, CNN: ‘Climate Change’ dogma is religion, not science.” While this tweet does not provide any particular reason to think that climate science is “dogma” or “religion,” it can create doubt in other information from trustworthy sources. One of the co-authors of the bot study worries that these kinds of messages can also create an illusion of “a diversity of opinion,” with the result that people “will weaken their support for climate science.”

The problem with the pro-Bloomberg tweets is similar: without a way of determining whether a tweet is actually coming from a real person as opposed to a bot or a paid shill, messages that defend Bloomberg may be ones intended to create doubt in tweets that are critical of him. Of course, in Bloomberg’s case it was a relatively simple matter to determine that the messages were not, in fact, genuine expressions of support for the former mayor, as dozens of tweets were identical in content. But a competently run network of bots could potentially have a much greater impact.

What should one do in this situation? As has been written about before here, it is always a good idea to be extra vigilant when it comes to getting one’s information from Twitter. But our epistemologist friends might be able to help us out with some more specific advice. When dealing with information that we can’t evaluate on the basis of content alone – say, because it’s about something that I don’t really know much about – we can look to some other evidence about the providers of that information in order to determine whether we should accept it.

For instance, philosopher Elizabeth Anderson has argued that there are generally three categories of evidence we can appeal to when trying to decide whether to accept some information: the testifier’s expertise (including their credentials and whether they have published and are recognized in their field), their honesty (including evidence of conflicts of interest, dishonesty, academic fraud, or misleading statements), and the extent to which they display epistemic responsibility (including evidence about how they have engaged with the scientific community in general and their peers in particular). This kind of evidence isn’t a perfect indication of whether someone is trustworthy, and it might not be the easiest to find. When one is trying to get good information from an environment that is potentially infested with bots and other sources of misleading information, though, gathering as much evidence as one can about one’s source may be the most prudent thing to do.
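As a rough illustration of how one might put Anderson’s three categories to work, here is a minimal sketch in Python (the field names, scoring, and thresholds are my own invented stand-ins, not anything Anderson proposes) of a checklist a reader might tally for a given source:

```python
from dataclasses import dataclass

@dataclass
class SourceEvidence:
    """Evidence about a testifier, loosely grouped by Anderson's three
    categories. The field names are hypothetical illustrations, not
    Anderson's own terms."""
    # Expertise: credentials and recognition in the relevant field
    has_relevant_credentials: bool
    publishes_in_field: bool
    # Honesty: conflicts of interest, fraud, misleading statements
    has_conflict_of_interest: bool
    record_of_misleading_claims: bool
    # Epistemic responsibility: engagement with peers and the wider community
    engages_with_peer_review: bool
    responds_to_criticism: bool


def rough_trust_assessment(e: SourceEvidence) -> str:
    """Crude aggregation: credit evidence of expertise and responsibility,
    penalize evidence of dishonesty. The weights are arbitrary."""
    score = 0
    score += e.has_relevant_credentials + e.publishes_in_field
    score += e.engages_with_peer_review + e.responds_to_criticism
    score -= 2 * (e.has_conflict_of_interest + e.record_of_misleading_claims)
    if score >= 3:
        return "tentatively trustworthy"
    if score >= 1:
        return "seek corroboration"
    return "treat with suspicion"


# An anonymous account with no credentials and a record of misleading claims
bot_like_source = SourceEvidence(
    has_relevant_credentials=False, publishes_in_field=False,
    has_conflict_of_interest=True, record_of_misleading_claims=True,
    engages_with_peer_review=False, responds_to_criticism=False,
)
print(rough_trust_assessment(bot_like_source))  # -> "treat with suspicion"
```

The numbers are arbitrary; the point is only that, even when the content of a claim is beyond our competence, evidence about the claimant’s expertise, honesty, and epistemic responsibility can still be gathered and weighed.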

Twitter and Disinformation


At the recent CES event, Twitter’s director of product management Suzanne Xie announced some proposed changes to Twitter that are expected to begin rolling out in a beta version this year. They represent fundamental and important changes to the way conversations happen on the platform, including the ability to direct tweets to limited groups of users (as opposed to globally), and, perhaps the biggest change, tweets that cannot be replied to (what Twitter is calling “statements”). Xie stated that the changes are meant to curb what Twitter sees as unhealthy behavior by its users, including “getting ratio’d” (when one’s tweet receives a very high ratio of replies to likes, which is taken to represent general disapproval) and “getting dunked on” (a phenomenon in which the replies to one’s tweet are very critical, often going into detail about why the original poster was wrong).

If you have spent any amount of time on Twitter you have no doubt come across the kind of toxic behavior the platform has become infamous for: rudeness, insults, and aggression are commonplace. So one might think that any change that could reduce this toxicity should be welcomed.

The changes that Twitter is proposing, however, could have some seriously negative consequences, especially when it comes to the potential for spreading misinformation.

First things first: when people act aggressively and threateningly on Twitter, they are acting badly. While there are many parts of the internet that can seem like cesspools of vile opinions (various parts of YouTube, Facebook, and basically every comment section on any news website), Twitter has long had the reputation of being a place where nasty prejudices of every kind you can imagine run amok. Twitter itself has recognized that people who use the platform to express racist, sexist, homophobic, and transphobic views (among others) are a problem, and has in the past taken some measures to curb this kind of behavior. It would be a good thing, then, if Twitter could take further steps to actually deter it.

The problem with allowing users the ability to tweet in such a way that the tweet cannot receive any feedback, though, is that the community can provide valuable information about the quality and trustworthiness of a tweet’s content. Consider first the phenomenon of “getting ratio’d”. While Twitter gives users the ability to endorse tweets – in the form of “hearts” – it has no explicit mechanism that allows users to show their disapproval – there is no non-heart equivalent. In the absence of a disapproval mechanism, Twitter users generally take a high ratio of replies to hearts as an indication of disapproval (there are exceptions: when someone asks a question or seeks out advice, they may receive a lot of replies, resulting in a relatively high ratio that signals engagement rather than disapproval). Community signaling of disapproval can provide important information, especially when the original claim comes from individuals in positions of power. For example, if a politician makes a false or spurious claim, their getting ratio’d can indicate to others that the information being presented should not be accepted uncritically. In the absence of such a mechanism it is much more difficult to determine the quality of information.
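To make the signal concrete, here is a minimal sketch in Python (entirely my own illustration, with an invented threshold and an invented exemption for question-style tweets; it is not anything Twitter itself computes or publishes) of how a reply-to-like ratio might be read as a rough disapproval flag:

```python
def looks_ratioed(reply_count: int, like_count: int,
                  is_question: bool = False,
                  threshold: float = 2.0) -> bool:
    """Heuristic only: flag a tweet as likely 'ratio'd' when replies
    greatly outnumber likes. The threshold is an arbitrary assumption,
    and question-style tweets are exempted because they invite replies."""
    if is_question:
        return False
    if like_count == 0:
        return reply_count > 0
    return reply_count / like_count >= threshold

# A contentious claim with 12,000 replies and 1,500 likes gets flagged;
# the same counts on an explicit question do not.
print(looks_ratioed(12_000, 1_500))                    # True
print(looks_ratioed(12_000, 1_500, is_question=True))  # False
```

Whatever the flaws of this sort of heuristic, notice that once replies are turned off the numerator simply disappears: the signal cannot even be computed.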

In addition to the quantity of responses that contribute to a ratio, the content of those responses can also help others determine whether the content of a tweet should be accepted. Consider, for example, a world leader who does not believe that global warming is occurring, and who tweets as much to their many followers. If this tweet were merely made as a statement, without the possibility of a conversation occurring afterwards, those who believe the content of the tweet would not be exposed to arguments that correctly show it to be false.

A concern with limiting the kinds of conversations that can occur on Twitter, then, is that preventing replies can seriously limit the ability of the community to indicate that one is spreading misinformation. This is especially worrisome, given recent studies that suggest that so-called “fake news” can spread very quickly on Twitter, and in some cases much more quickly than the truth.

At this point, before the changes have been implemented, it is unclear whether the benefits will outweigh the costs. And while one should always be cautious when getting information from Twitter, in the absence of any possibility for community feedback it is perhaps worth employing an even healthier skepticism in the future.

Search Engines and Data Voids


If you’re like me, going home over the holidays means dealing with a host of computer problems from well-meaning but not very tech-savvy family members. While I’m no expert myself, it is nevertheless jarring to see the family computer desktop covered in icons for long-abandoned programs, browser tabs that read “Hotmail” and “how do I log into my Hotmail” side-by-side, and the use of default programs like Edge (or, if the computer is ancient enough, Internet Explorer) and search engines like Bing.

And while it’s perhaps a bit of a pain to have to fix the same computer problems every year, and it’s annoying to use programs that you’re not used to, there might be more substantial problems afoot. This is because, according to a recent study from Stanford’s Internet Observatory, Bing search results “contain an alarming amount of disinformation.” That default search engine your parents never bothered changing, then, could actually be doing some harm.

While no search engine is perfect, the study suggests that, at least in comparison to Google, Bing lists known disinformation sites in its top results much more frequently (including searches for important issues like vaccine safety, where a search for “vaccines autism” returns “six anti-vax sites in its top 50 results”). It also presents results from known Russian propaganda sites much more frequently than Google, places student-essay writing sites in its top 50 results for some search terms, and is much more likely to “dredge up gratuitous white-supremacist content in response to unrelated queries.” In general, then, while Bing will not necessarily present one only with disinformation – the site will still return results for trustworthy sites most of the time – it seems worthwhile to be extra vigilant when using the search engine.

But even if one commits to simply avoiding Bing (at least for the kinds of searches that are most likely to be connected to disinformation sites), problems can arise when Edge (which uses Bing as its default search engine) is made the default browser, and when those who are not terribly tech-savvy don’t know how to use a different browser, or else aren’t aware of the alternatives. After all, a typical user has no particular reason to think that results from different search engines should differ, and given that Microsoft is a household name, one might not be inclined to question the kinds of results its search engine provides.

How can we combat these problems? Certainly a good amount of the responsibility falls on Microsoft itself to make more of an effort to keep disinformation sites out of its search results. And while we might not want to say that one should never use Bing (Google knows enough about me as it is), there is perhaps some general advice we could give in order to make sure we are getting as little disinformation as possible when searching.

For example, the Internet Observatory report posits that one of the reasons there is so much more disinformation in search results from Bing than from Google has to do with how the engines deal with “data voids.” The idea is the following: for some search terms, you’re going to get tons of results because there’s tons of information out there, and it’s a lot easier to weed out possible disinformation sites from these results because there are so many well-established and trusted sites already. But there are also lots of search terms that have very few results, possibly because they are about idiosyncratic topics, or because the search terms are unusual, or just because the thing you’re looking for is brand new. It’s when there is such a relative void of data about a term that results are ripe for manipulation by sites looking to spread misinformation.
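To see the shape of the problem, here is a minimal sketch in Python (my own toy illustration, with invented thresholds; it does not reflect how Bing, Google, or any real search engine actually ranks results) of the mismatch that characterizes a data void: sudden interest in a term for which there is very little established content.

```python
def data_void_risk(indexed_pages: int,
                   searches_today: int,
                   searches_daily_average: float) -> str:
    """Toy heuristic: a query looks like an exploitable data void when
    search interest spikes well above its baseline while the number of
    established pages on the topic stays small. All thresholds are
    invented for illustration, not taken from any real search engine."""
    spike = searches_today / max(searches_daily_average, 1.0)
    if indexed_pages < 1_000 and spike > 10:
        return "high risk: little established content, sudden demand"
    if indexed_pages < 10_000 and spike > 3:
        return "moderate risk: treat new results with suspicion"
    return "low risk: established content can crowd out manipulation"

# A small town suddenly in the news (compare the Sutherland Springs
# example discussed below): few existing pages, a huge spike in searches.
print(data_void_risk(indexed_pages=400,
                     searches_today=50_000,
                     searches_daily_average=30))
```

The five types of voids described below are, roughly, different ways of ending up on the high-risk branch of a heuristic like this one.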

For example, Michael Golebiewski and danah boyd write that there are five major types of data voids that can be most easily manipulated: breaking news, strategic new terms (e.g. when the term “crisis actor” was introduced by Sandy Hook conspiracy theorists), outdated terms, fragmented concepts (e.g. when the same event is referred to by different terms, for example “undocumented” and “illegal aliens”), and problematic queries (e.g. when instead of searching for information about the “Holocaust” someone searches for “did the Holocaust happen?”). Since there tends to be comparatively little information about these topics online, those looking to spread disinformation can create sites that exploit these data voids.

Golebiewski and boyd provide an example in which the term “Sutherland Springs, Texas” became a much more popular search than it had ever been in response to news reports of an active shooting in November of 2017. However, since there was so little information online about Sutherland Springs prior to the event, it was more difficult for search engines to determine which of the new sites and posts should be sent to the top of the search results and which to the bottom of the pile. This is the kind of data void that can be exploited by those looking to spread disinformation, especially when it comes to search engines like Bing that seem to struggle with distinguishing trustworthy sites from untrustworthy ones.

We’ve seen that there is clearly some responsibility on Bing itself to help stem the flow of disinformation, but we perhaps need to be more vigilant about trusting the sites turned up by the kinds of search terms Golebiewski and boyd describe. And, of course, we could try our best to convince those who are less computer-literate in our lives to change some of their browsing habits.