
Calibrating Trust Amidst Information Chaos

By Kenneth Boyd
23 Jan 2023
[Photograph of a Twitter check mark on an iPhone, with the Twitter logo in the background]

The past few months on Twitter have been tumultuous. Ever since Elon Musk’s takeover, there have been almost daily news stories about some change to the company or platform, and while there’s no doubt that Musk has his share of fans, many of the changes he’s made have not been well-received. Much of the criticism has focused on questionable business decisions and almost unfathomable amounts of lost money, but Musk’s reign has also produced a kind of informational chaos that makes it even more difficult to identify good sources of information on the platform.

For example, one early change that received a lot of attention was the introduction of the “paid blue check mark,” which let anyone pay for what had previously been a feature reserved for notable figures on Twitter. This infamously led to a slew of impersonators creating fake accounts, the most notable being the phony Eli Lilly account that had real-world consequences. In response, changes were made: the paid check system was modified, then re-modified, then color-coded, then the colors changed, and now it’s not clear how the system will work in the future. Additional changes have been proposed, such as a massive increase in the character limit for tweets, although it’s not clear whether they will be implemented. Others have recently made their debut, such as a “view count” added to each tweet, next to “replies,” “retweets,” and “likes.”

It can be difficult to keep up with all the changes. This is not a mere annoyance: since it’s not clear what will happen next, or what some of the symbols on tweets really represent anymore – such as the aforementioned check marks – it can be difficult for users to find their bearings and identify trustworthy sources.

More than a mere cause of confusion, informational chaos risks undermining the stability of the very indicators that help people evaluate online information.

When evaluating information on social media, people appeal to a range of factors to determine whether they should accept it, for better or for worse. Some of these factors are the visible metrics on posts: how many times a post has been approved of – be it in the form of a “like,” a “heart,” an “upvote,” etc. – shared, or interacted with through comments, replies, or other measures. This might seem a blunt and perhaps ineffective way of evaluating information, but it’s not just that people tend to believe what’s popular: given that on many social media platforms it’s easy to misrepresent oneself and generally just make stuff up, users tend to look to aspects of their experience that cannot easily be faked. While it’s of course not impossible to fabricate numbers of likes, retweets, and comments, it is at least more difficult to do so, and so these kinds of markers often serve as quick heuristics for deciding whether some content is worth engaging with.

There are others. People will use the endorsement of sources they trust when evaluating an unknown source, and the Eli Lilly debacle showed how people used the blue check mark at least as an indicator of authenticity – unsurprisingly, given its original function. Similar markers play the same role on other social media sites: the “verified badge” on Instagram, for example, at least tells users that a given account is authentic, although it’s not clear how much “authenticity” translates to “credibility.”

(For something so coveted among influencers and influencer-wannabes, there appears to be surprisingly little research on the actual effects of verification on users’ trust: some studies suggest that it makes little to no difference in perceived trustworthiness or engagement, while others suggest the opposite.)

In short: the online world is messy, and it can be hard to get one’s bearings when evaluating the information that comes at one constantly on social media.

This is why sudden changes to even superficial markers of authenticity and credibility can make the problem significantly worse. People might not interpret these markers in the most reliable ways, but when the markers are stable we can at least work out how we ought to respond to them.

This is hardly the first change to how people evaluate content on social media. In late 2021, YouTube removed the publicly visible count of how many dislikes a video had received, a change that arguably made it more difficult to identify spam, off-topic, or otherwise low-quality videos at a glance. While a heuristic like “don’t trust videos with a bunch of dislikes” won’t always lead you to the best results, a stable set of indicators can at least help users calibrate their levels of trust.

So it’s not that users are unable to adjust to changes to their favorite online platforms. But numerous changes of uncertain value or longevity bring disorientation. Combine that with Musk’s recent unbanning of accounts that were previously deemed problematic, and the resulting increase in misinformation spreading around the site, and conditions become even worse for those looking for trustworthy sources of information online.

Ken Boyd holds a PhD in philosophy from the University of Toronto. His philosophical work concerns the ways that we can best make sure that we learn from one another, and what goes wrong when we don’t. You can read more about his work at kennethboyd.wordpress.com.