
Twitter and Disinformation


At the recent CES event, Twitter’s director of product management, Suzanne Xie, announced proposed changes to the platform that are slated to begin rolling out in a beta version this year. They represent fundamental and important changes to the way conversations happen on the platform, including the ability to direct tweets to limited groups of users (as opposed to globally) and – perhaps the biggest change – tweets that cannot be replied to at all (what Twitter is calling “statements”). Xie stated that the changes are meant to curb what Twitter sees as unhealthy behaviors among its users, including “getting ratio’d” (when one’s tweet receives a very high ratio of replies to likes, which is taken to signal general disapproval) and “getting dunked on” (when the replies to one’s tweet are highly critical, often detailing exactly why the original poster was wrong).

If you have spent any amount of time on Twitter, you have no doubt come across the kind of toxic behavior the platform has become infamous for: rudeness, insults, and aggression are commonplace. One might think, then, that any change that reduces this toxicity should be welcomed.

The changes that Twitter is proposing, however, could have some seriously negative consequences, especially when it comes to the spread of misinformation.

First things first: when people act in aggressive and threatening ways on Twitter, they are acting badly. While many parts of the internet can seem like cesspools of vile opinion (various corners of YouTube, Facebook, and basically every comment section on any news website), Twitter has long had a reputation as a place where nasty prejudices of every kind imaginable run amok. Twitter itself has recognized that people who use the platform to express racist, sexist, homophobic, and transphobic views (among others) are a problem, and it has in the past taken some measures to curb such behavior. It would be a good thing, then, if Twitter could take further steps to actually deter it.

The problem with allowing users to tweet in such a way that the tweet cannot receive any feedback, though, is that the community can provide valuable information about the quality and trustworthiness of a tweet’s content. Consider first the phenomenon of “getting ratio’d”. While Twitter gives users the ability to endorse tweets – in the form of “hearts” – it has no explicit mechanism by which users can express disapproval: there is no negative equivalent of the heart. In its absence, Twitter users generally take a high ratio of replies to hearts as an indication of disapproval (there are exceptions: someone who asks a question or seeks advice may receive many replies, producing a relatively high ratio that signals engagement rather than disapproval). Community signaling of disapproval can provide important information, especially when it concerns individuals in positions of power. If a politician makes a false or spurious claim, for example, their getting ratio’d can indicate to others that the claim should not be accepted uncritically. Without such a mechanism, it is much more difficult to gauge the quality of information.
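
To make the heuristic concrete, here is a minimal sketch in Python of how a “ratio’d” tweet might be flagged. The Tweet class, the threshold, and the minimum-activity floor are all invented for illustration – Twitter publishes no formal definition of a ratio – and, as noted above, the heuristic is fallible, since questions and requests for advice inflate replies without signaling disapproval.

```python
# A minimal sketch of the replies-to-likes heuristic described above.
# The Tweet class, threshold, and activity floor are illustrative
# assumptions, not part of Twitter's API or any published definition.
from dataclasses import dataclass


@dataclass
class Tweet:
    text: str
    replies: int
    likes: int


def is_ratioed(tweet: Tweet, threshold: float = 2.0, min_replies: int = 50) -> bool:
    """Flag a tweet whose reply count far exceeds its like count."""
    if tweet.replies < min_replies:
        # Too little activity for the ratio to signal anything.
        return False
    if tweet.likes == 0:
        # Many replies with no endorsements at all: treat as a ratio.
        return True
    return tweet.replies / tweet.likes >= threshold


# Example: 900 replies against 150 likes reads as broad disapproval.
print(is_ratioed(Tweet("Dubious claim here", replies=900, likes=150)))  # True
```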

In addition to the sheer quantity of responses that makes up a ratio, the content of those responses can also help others determine whether a tweet should be accepted. Consider, for example, a world leader who does not believe that global warming is occurring and who tweets as much to their many followers. If this tweet were made merely as a statement, with no possibility of a conversation afterwards, those inclined to believe it would never be exposed to the arguments that correctly show it to be false.

A concern with limiting the kinds of conversations that can occur on Twitter, then, is that preventing replies can seriously limit the community’s ability to indicate that someone is spreading misinformation. This is especially worrisome given recent studies suggesting that so-called “fake news” can spread very quickly on Twitter – in some cases much more quickly than the truth.

At this point, before the changes have been implemented, it is unclear whether the benefits will outweigh the costs. And while one should always be cautious when getting information from Twitter, in the absence of any possibility of community feedback it is perhaps worth bringing an even healthier skepticism to the platform in the future.

Search Engines and Data Voids


If you’re like me, going home over the holidays means dealing with a host of computer problems from well-meaning but not very tech-savvy family members. While I’m no expert myself, it is nevertheless jarring to see the family computer desktop covered in icons for long-abandoned programs, browser tabs reading “Hotmail” and “how do I log into my Hotmail” side by side, and default programs like Edge (or, if the computer is ancient enough, Internet Explorer) and search engines like Bing still in everyday use.

And while it’s a bit of a pain to have to fix the same computer problems every year, and annoying to use programs you’re not accustomed to, there might be more substantial problems afoot. According to a recent study from Stanford’s Internet Observatory, Bing search results “contain an alarming amount of disinformation.” That default search engine your parents never bothered to change, then, could actually be doing some harm.

While no search engine is perfect, the study suggests that, at least in comparison to Google, Bing lists known disinformation sites in its top results far more frequently (including for searches on important issues like vaccine safety, where a search for “vaccines autism” returns “six anti-vax sites in its top 50 results”). It also surfaces results from known Russian propaganda sites much more often than Google does, places student-essay-writing sites in its top 50 results for some search terms, and is much more likely to “dredge up gratuitous white-supremacist content in response to unrelated queries.” In general, then, while Bing will not present one only with disinformation – it still returns trustworthy sites most of the time – it seems worthwhile to be extra vigilant when using it.

But even if one commits simply to avoiding Bing (at least for the kinds of searches most likely to be connected to disinformation sites), problems can arise when Edge – which uses Bing as its default search engine – is the default browser, and when those who are not terribly tech-savvy don’t know how to use a different browser or aren’t aware of the alternatives. After all, the average user has no particular reason to expect results from different search engines to differ, and given that Microsoft is a household name, one might not be inclined to question the results its search engine provides.

How can we combat these problems? Certainly a good amount of responsibility falls on Microsoft itself to make more of an effort to keep disinformation sites out of its search results. And while we might not want to say that one should never use Bing (Google knows enough about me as it is), there is some general advice we can offer to help ensure that we encounter as little disinformation as possible when searching.

For example, the Internet Observatory report posits that one reason there is so much more disinformation in Bing’s search results than in Google’s lies in how the engines deal with “data voids.” The idea is this: for some search terms you get tons of results because there is tons of information out there, and weeding out disinformation sites from those results is relatively easy because so many well-established, trusted sites already exist. But plenty of search terms return very few results – because they concern idiosyncratic topics, because the terms themselves are unusual, or simply because the thing you’re looking for is brand new. It is these relative voids of data around a term that make its results ripe for manipulation by sites looking to spread disinformation.
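
A toy sketch can make the idea concrete. The tiny corpus, the matching rule, and the cutoff below are all invented for illustration – real engines estimate topic coverage in far more sophisticated ways – but the principle is the same: a query that matches almost nothing can be owned by a handful of planted pages.

```python
# A toy illustration of a "data void": when an index holds very few
# documents for a query, whatever does match can dominate the results.
# Corpus, matching rule, and cutoff are all invented for illustration.

def matching_documents(query: str, corpus: list[str]) -> list[str]:
    """Return documents containing every term in the query."""
    terms = query.lower().split()
    return [doc for doc in corpus if all(t in doc.lower() for t in terms)]


def is_data_void(query: str, corpus: list[str], cutoff: int = 3) -> bool:
    """Treat a query as a data void when almost nothing matches it."""
    return len(matching_documents(query, corpus)) < cutoff


corpus = [
    "Vaccine safety data reviewed by public health agencies",
    "Vaccine schedules explained for new parents",
    "How vaccine trials are designed and monitored",
    "Local bake sale raises funds for the library",
]

print(is_data_void("vaccine", corpus))       # False: the topic is well covered
print(is_data_void("crisis actor", corpus))  # True: almost nothing matches
```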

For example, Michael Golebiewski and danah boyd write that there are five major types of data voids that can be most easily manipulated: breaking news, strategic new terms (e.g., “crisis actor,” a term introduced by Sandy Hook conspiracy theorists), outdated terms, fragmented concepts (when the same event is referred to by different terms, e.g., “undocumented” versus “illegal aliens”), and problematic queries (e.g., searching “did the Holocaust happen?” rather than searching for information about the Holocaust). Since there tends to be comparatively little information about these topics online, those looking to spread disinformation can create sites that exploit these voids.

Golebiewski and boyd offer the example of “Sutherland Springs, Texas,” which became a far more popular search than it had ever previously been following news reports of an active shooting there in November 2017. Because there was so little information online about Sutherland Springs before the event, it was harder for search engines to determine which of the new sites and posts should rise to the top of the search results and which should sink to the bottom of the pile. This is the kind of data void that can be exploited by those looking to spread disinformation, especially on search engines like Bing that seem to struggle to distinguish trustworthy sites from untrustworthy ones.
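
One way to picture the engine’s predicament: when every page about a term is brand new, there is no history of engagement to rank with, so some prior measure of a domain’s trustworthiness may be the main line of defense. The domains, trust scores, and ranking rule in the sketch below are invented for illustration and are not a description of how any actual engine ranks results.

```python
# A hedged sketch: ranking freshly published pages in a data void by a
# per-domain trust prior, since the pages themselves have no history.
# All domains and scores below are fictional, invented for illustration.

def rank_new_pages(pages: list[tuple[str, str]],
                   domain_trust: dict[str, float]) -> list[tuple[str, str]]:
    """Order (domain, title) pairs by the trust score of their domain.

    Unknown domains get a neutral default rather than zero - itself a
    design choice a real engine would have to defend.
    """
    return sorted(pages, key=lambda p: domain_trust.get(p[0], 0.5), reverse=True)


fresh_pages = [
    ("conspiracy-blog.example", "What THEY won't tell you about Sutherland Springs"),
    ("established-news.example", "Shooting reported in Sutherland Springs, Texas"),
    ("local-paper.example", "Sutherland Springs community responds"),
]
domain_trust = {
    "established-news.example": 0.9,
    "local-paper.example": 0.7,
    "conspiracy-blog.example": 0.1,
}

for domain, title in rank_new_pages(fresh_pages, domain_trust):
    print(domain, "-", title)
```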

We’ve seen that some responsibility clearly falls on Bing itself to help stem the flow of disinformation, but we perhaps need to be more vigilant about trusting the sites returned for the kinds of terms Golebiewski and boyd describe. And, of course, we could try our best to convince the less computer-literate people in our lives to change some of their browsing habits.

Ryke Geerd Hamer and the Dangers of Positive Thinking

Dr. Ryke Geerd Hamer died on July 2. His death was hardly noticed in English-language media, which is not surprising: he was an obscure figure. But, unfortunately, his legacy lives on, and the harm he has caused far outweighs the media attention he has received (Spanish and German newspapers have dedicated more attention to his death).

Fake News and the Future of Journalism

Oscar Martinez is an acclaimed Salvadoran journalist for El Faro, an online newspaper dedicated to investigative journalism in Central America, with a focus on issues like drug trafficking, corruption, immigration, and inequality. In a recent interview for El Pais, Martinez explains that the only reason he is a journalist is because “sé que sirve para mejorar la vida de algunas personas y para joder la vida de otras: poderosos, corruptos” (“I know it serves to improve the lives of some people and to ruin the lives of others: the powerful, the corrupt”). Reflecting further in the interview, Martinez distills journalism’s purpose as a “mechanism” for bringing about change in society; he does, however, raise a red flag: “El periodismo cambia las cosas a un ritmo completamente inmoral, completamente indecente. Pero no he descubierto otro mecanismo para incidir en la sociedad de la que soy parte que escribiendo” (“Journalism changes things at a completely immoral, completely indecent pace. But I have not discovered any mechanism other than writing to influence the society I am part of”). Martinez’s work sheds light on, and lends a voice to, the plight of millions of individuals, and it is important to acknowledge and admire the invaluable work that he and his colleagues at El Faro do.
