
Ferguson and Net Neutrality

The shooting of Michael Brown and the aftermath of unrest in Ferguson raise a host of obvious ethical issues. One that isn’t so obvious is that they highlight the potential importance of knowing about net neutrality and algorithmic filtering. As Zeynep Tufekci has recently argued, what happened in Ferguson illustrates how net neutrality may be a human rights issue.

Let’s get clear about both terms first. “Net neutrality” is the thesis that internet service providers should be neutral, enabling access to all content and websites available on the internet. They should not favor, speed up, block, or slow down content or services from any website. Proponents of net neutrality hold that internet service providers should not be permitted to privilege content from certain websites by granting it access to a fast lane, nor should they be allowed to intentionally slow down access to content from other websites. If you agree with those sentiments, then you are a fan of net neutrality. If, however, you think a company like Verizon should be able to reach an agreement with ESPN and let ESPN deliver streaming content more quickly to Verizon users, then you favor what I will call a “tiered internet”. I won’t go into the pros and cons of net neutrality here, but if you’re interested, this Wikipedia entry has a nice summary of the arguments for and against.

Now let’s talk about algorithmic filtering. Algorithmic filtering is simply a process in which a computer program (an algorithm) scans a large amount of information and pulls out the bits of interest. It has a ton of very useful applications, but we’re interested in how it can be used in the internet service industry. Algorithmic filtering is one of the things that differentiates Facebook from Twitter. Facebook carefully curates what shows up in your feed based on a secret algorithm that (in theory) prioritizes content Facebook wants you to see. Twitter doesn’t do this. With Twitter you see an unfiltered stream of the tweets of everyone you follow, in chronological order.
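To make the contrast concrete, here is a minimal sketch of a chronological feed versus a filtered one. The posts, the engagement numbers, and the scoring rule are all invented for illustration; real platforms keep their ranking criteria secret.

```python
from datetime import datetime

# Toy feed data (entirely made up for this example).
posts = [
    {"text": "Ferguson protest coverage", "likes": 4,  "posted": datetime(2014, 8, 13, 22, 0)},
    {"text": "Ice bucket challenge video", "likes": 90, "posted": datetime(2014, 8, 13, 21, 0)},
    {"text": "Vacation photos",            "likes": 55, "posted": datetime(2014, 8, 13, 20, 0)},
]

def chronological_feed(posts):
    # Twitter-style: newest first, nothing hidden.
    return sorted(posts, key=lambda p: p["posted"], reverse=True)

def filtered_feed(posts, limit=2):
    # Facebook-style: rank by an opaque "engagement" score and cut the rest.
    # Here the score is just like-count, which buries low-engagement news.
    return sorted(posts, key=lambda p: p["likes"], reverse=True)[:limit]

print([p["text"] for p in chronological_feed(posts)])
print([p["text"] for p in filtered_feed(posts)])
```

In this toy version the breaking-news post tops the chronological feed but never makes the filtered one, which is exactly the failure mode described above: nothing malicious, just a scoring rule with side effects.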

Tufekci noticed something odd about this difference. Her Twitter feed was loaded with tweets about Ferguson, but when she switched over to Facebook she saw nothing. Facebook’s filtering had managed to bury anything about Ferguson. No one is accusing Facebook of intentionally suppressing Ferguson posts; for some reason Facebook’s algorithm simply prioritized things in such a way that Ferguson posts (for Tufekci) didn’t make the cut, even though they likely made the cut for many other people. So the lesson is: algorithmic filtering has the potential to unintentionally hide important things you care deeply about, as it did in Tufekci’s case.

However, there is another important lesson here, and it is Tufekci’s primary point. She wants us to take a step back and think carefully about net neutrality. Why? Because a tiered internet runs on algorithmic filtering. If Verizon wants to prioritize ESPN content, it needs to write a program that automatically filters everything streaming across its networks, flags content coming from ESPN, and routes it to the fast lane.
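The flag-and-route step described above can be sketched in a few lines. This is a hypothetical illustration, not how any real provider’s traffic shaping works; the domain names and lane labels are invented:

```python
# Hypothetical tiered-internet routing sketch. A provider inspects each
# piece of traffic, flags content from sites that paid for priority,
# and assigns it a delivery queue.

PAID_FAST_LANE = {"espn.com"}  # invented example: sites that paid for priority

def route(packet):
    # Flag content by its source and pick a queue for it.
    if packet["source"] in PAID_FAST_LANE:
        return "fast lane"
    return "normal lane"

packets = [
    {"source": "espn.com",      "payload": "live scores"},
    {"source": "smallblog.org", "payload": "local news"},
]

for p in packets:
    print(p["source"], "->", route(p))
```

The point of the sketch is how little machinery this takes: one lookup table and one conditional. Whatever goes in that table is a business decision, which is exactly what the next paragraph worries about.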

Why should this have us concerned? Because the ways in which service providers could filter and prioritize content are limited only by a programmer’s imagination. Internet service providers could, for example, get into the PR business. Are you planning on running for governor? Do you want that embarrassing story about you from your time at college to go away? Service providers could charge a premium to speed up content from sites that sing your praises and slow down content from sites that talk about your college days (assuming those sites haven’t out-bid you), and voilà – the public is blissfully unaware of your past misdeeds.

The possibility of a truly informed citizenry would be further threatened if news media conglomerates got in the service provider game. News companies (with an agenda) could suppress stories from competing news organizations or media watch-dog websites and drastically shape the political message the country is getting. This is why we need to have a serious conversation about net neutrality. It’s not just about people who want to download large files without being throttled. It’s not just about delivering people free ESPN content quickly, if ESPN is willing to foot the bill. It is, as Tufekci notes, a human rights issue about ensuring that everyone has an opportunity to be a meaningful and well-informed participant in the political process.

Events like the Michael Brown shooting occurred all the time in the pre-internet era, and the public was simply unaware of them. The internet changed that. Events we should all be talking about now come to light for everyone in the country almost the instant they happen, but we also now have the technology to change that. We are rapidly approaching a future in which the next Ferguson could happen and the vast majority of us would be clueless.

How Social Media Might Silence Debate

According to this study, social media may have a negative impact on political debate. As the opening of the study notes, there is a well-documented phenomenon, dating to the pre-internet era, called the “Spiral of Silence,” in which people tend not to voice opinions that differ from those of their friends and family. The intro also notes that:

Some social media creators and supporters have hoped that social media platforms like Facebook and Twitter might produce different enough discussion venues that those with minority views might feel freer to express their opinions, thus broadening public discourse and adding new perspectives to everyday discussion of political issues.

However, it turns out that this may not be the case. It seems that increased activity on social network sites like Facebook and Twitter also has a negative impact on people’s willingness to voice dissenting opinions that they think might be unpopular. It appears that this behavior extends to the offline world as well.

The study surveyed 1,800 adults and focused on their willingness to discuss Edward Snowden’s disclosures of government surveillance programs. Here is a summary of the findings, taken directly from the study.

People were less willing to discuss the Snowden-NSA story in social media than they were in person. 86% of Americans were willing to have an in-person conversation about the surveillance program, but just 42% of Facebook and Twitter users were willing to post about it on those platforms.

Social media did not provide an alternative discussion platform for those who were not willing to discuss the Snowden-NSA story. Of the 14% of Americans unwilling to discuss the Snowden-NSA story in person with others, only 0.3% were willing to post about it on social media.

In both personal settings and online settings, people were more willing to share their views if they thought their audience agreed with them. For instance, at work, those who felt their coworkers agreed with their opinion were about three times more likely to say they would join a workplace conversation about the Snowden-NSA situation.

Previous ‘spiral of silence’ findings as to people’s willingness to speak up in various settings also apply to social media users. Those who use Facebook were more willing to share their views if they thought their followers agreed with them. If a person felt that people in their Facebook network agreed with their opinion about the Snowden-NSA issue, they were about twice as likely to join a discussion on Facebook about this issue.

Facebook and Twitter users were also less likely to share their opinions in many face-to-face settings. This was especially true if they did not feel that their Facebook friends or Twitter followers agreed with their point of view. For instance, the average Facebook user (someone who uses the site a few times per day) was half as likely as other people to say they would be willing to voice their opinion with friends at a restaurant. If they felt that their online Facebook network agreed with their views on this issue, their willingness to speak out in a face-to-face discussion with friends was higher, although they were still only 0.74 times as likely to voice their opinion as other people.

What do you all think? Does this give us reason to think that social media participation silences debate, as this New York Times discussion of the study suggests? What should we do in light of this?

Epistemology and Affirmative Action


Whatever your views about affirmative action are, this is well worth watching. Jonathan Jenkins Ichikawa, a philosophy professor at the University of British Columbia, argues that recent psychological evidence about implicit bias may yield a purely merit-based argument in favor of affirmative action policies. What is interesting and novel about his argument is that affirmative action policies are typically opposed on the grounds that they ignore merit-based considerations. Ichikawa argues that even if you think we should consider only merit in hiring, we still have reason to favor certain affirmative action policies given what we know about implicit bias.

More Thoughts on Designing Addictive Video Games

A few weeks ago, Brian Crecente asked me to comment on whether or not I thought video game designers had a moral obligation to think about how they design games in light of recent evidence that some video games seem to be addictive. You can read the full article here, but here’s what I said:

I do think game developers have a moral obligation to think about their game design in light of recent evidence we have concerning their games’ addictiveness. It’s easy to dismiss abuse of a product as a personal choice of the consumer, but as evidence of addiction for any product grows — it becomes less clear how much choice is involved.

What’s more troubling about this phenomenon is that the business model for games has changed in two important ways that make it very tempting for developers to try to create a game that is addictive. In-app purchases and subscription-based models are more lucrative if the consumer can’t stop playing, as are free games that rely on cost-per-click advertisements. You only make money off your users if they keep coming back to play, and the more addicted they are to the game, the more likely you are to make money off their clicks…Making an addictive game is the obvious choice for maximizing revenue under these new models…so I’m worried that we won’t see developers shy away from actively trying to create addictive games.

That article generated some interesting discussion, and there were three objections that seemed to come up more than once. I thought I’d say something about those three objections. Here they are, followed by my replies.

Objection one: If you think there is something wrong with designing addictive video games, then you must think there is something wrong with developing any kind of product that people become compulsive about. People play golf more than they should. People drink soda more than they should. Are golf ball manufacturers and soft drink companies doing something wrong?

Reply: My short answer to the last question is, “No, I don’t think merely manufacturing something that you know has the potential to be addictive is wrong.” However, there is an important moral difference between designing a product you know might be addictive and designing it so that it is addictive, with the intent to exploit some feature of a person’s compulsive psychology. Imagine a baker who intentionally includes an ingredient that makes his cakes addictive, and who includes it solely for the purpose of increasing sales. That’s importantly morally different from someone who makes cakes simply because they are delicious and people like to eat delicious things. Video games of any kind provide a leisure activity that people might have a hard time walking away from. But the kind of activity I’m talking about isn’t merely making video games. It’s intentionally designing games to include addictive elements, specifically for the purpose of hooking people.

Objection two: Calling video games addictive is medically naive and displays a lack of awareness about the true nature of addiction. Real addiction is characterized by adverse physiological effects and withdrawal symptoms that exert pressure on a person’s will.

Reply: I am well aware of this distinction, and I readily admit that there is a big (physical) difference between someone addicted to caffeine and someone addicted (in the broader sense) to gambling (or, in this case, video games). However, there is still likely a big difference between the psychological makeup of someone addicted to gambling (or video gaming) and someone who is not. Something exerts pressure on the gambling addict’s will that the vast majority of other people don’t experience. That difference is the morally relevant feature. So even if we shouldn’t call this “addiction”, there is something present here that deserves serious moral attention.

Objection Three: Calling video games “addictive” just serves to shield bad behavior. People can hide under the label of addiction and ask society to take it easy when judging them. These persons should still be held accountable for the negative consequences of their behavior.

Reply: I agree. But I can grant that point and still consistently maintain that we ought to think twice about our design plans, especially if we feel tempted to include an element simply because it’s addictive.

Ultimately, it still seems clear to me that designers should examine their own intentions when developing game elements. The temptation to include elements because they are addictive is very real, and something we should all be concerned about.

Social Media Experiments: Where Should We Draw the Line?

Consent may be important in the dating world, but not so much to the online dating site OkCupid. Christian Rudder, CEO of OkCupid, admitted in a blog post that the site had run several experiments on its users over the previous months, without their knowledge. Perhaps the most controversial of these experiments was one in which the site told users that they were a much better match with someone (a percentage of alleged compatibility based on an algorithm) than they actually were. More messages were sent when people were told they were a 90% match, even when they were actually a 30% match.
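To see how such a match percentage might be computed, here is a toy sketch. This is not OkCupid’s actual formula; the function and the satisfaction inputs are invented for illustration, using a geometric mean of how satisfied each user would be with the other’s answers:

```python
from math import sqrt

# Illustrative toy, not OkCupid's real algorithm: the match score is the
# geometric mean of two (invented) satisfaction rates, each between 0 and 1.
def match_percent(a_satisfied_with_b, b_satisfied_with_a):
    return round(100 * sqrt(a_satisfied_with_b * b_satisfied_with_a))

print(match_percent(0.9, 0.9))  # a strong match
print(match_percent(0.3, 0.3))  # a weak match
```

On this toy model, the experiment amounts to showing users the first number when the second one is true, which is why the manipulation is invisible to the people messaging each other.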

Rudder’s defense is that the experiments were necessary to improve the operation of the site and maximize users’ matches. In the Terms and Conditions that each user agrees to, the company does state that people’s data may be used, but informed consent for the experiments was neither requested nor granted. Rudder also claims that these experiments are commonplace on the internet and should just be accepted without question, saying that “if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site.”

Academic research done on dating or social media sites typically involves observing user profiles and finding statistical correlations, though never outright manipulation of users’ profiles. In this regard, the OkCupid experiments, as well as the previous Facebook experiments of a similar nature, cross into questionable territory.

Should corporations be held to the same standards as academic research? Or were the OkCupid experiments justified in order to provide a better service for its users?