
The Right to Block on Social Media

photograph of couple looking at smartphone behind window

On August 18, Elon Musk suggested that the blocking feature on Twitter, which allows users to prevent another user from interacting with them in any way on the site, would soon be removed. (Musk rebranded Twitter “X,” though it still uses the domain twitter.com. For clarity, I’ll refer to this social media site as Twitter, as many of its users still do.) Musk claimed that the feature would be limited to blocking direct messages, adding in a subsequent post that the feature “makes no sense.” This declaration was met with a good deal of criticism and disbelief among Twitter users.

Twitter’s CEO Linda Yaccarino later walked back the statement, claiming that the company is “building something better than the current state of block and mute.” Musk’s proposal may be unlikely to materialize anyway, since the guidelines for both the App Store and the Google Play Store appear to require that any app offering user-generated content give users the ability to block it.

But Musk’s suggestion raises the question of whether blocking on social media is something users have a right to. I won’t attempt to comment on any relevant legal right, but let’s consider the users’ moral rights.

First, a blocking ban violates our right to privacy. We have a right not to expose ourselves to content — and to people — on social media sites. The privacy achieved with blocking on social media goes in two directions: blocking keeps one’s own posts from being viewed by another user, and it also prevents the other user from contacting or interacting with the person who blocked them. In preventing another person from viewing one’s posts, a person can limit who accesses their personal information, thoughts, photos, and social interactions with other users. Even when posts are public, for users who aren’t public figures acting in a public capacity, the ability to retain some privacy when desired is valuable. Blocking is essential for achieving this privacy.

Privacy is also a matter of the ability to prevent someone else from entering your space. Twitter is a place where people meet others, interact, and learn about the world. It facilitates a unique kind of community-building, and thus is a unique kind of place — one that can at once be both public and intimate, involving networks of friends and parasocial relationships. Just as the ability to prevent an arbitrary person from broadcasting their thoughts into your home is essential for privacy, so also the ability to block interactions from someone on social media is an important means of privacy in that space.

Second, the ability to block an account on social media is necessary for safety. Blocking allows users to prevent future harassment, private messages, or hate speech from another user, thus protecting their mental health. By reasoning similar to that behind a restraining order, the ability to block also protects the user from another user’s attempt to maintain unwanted contact or to stalk them through the site. Blocking alone doesn’t accomplish these goals perfectly, but it is necessary for achieving them for anyone who uses social media.

Important to both the above points is the lack of a feasible alternative to Twitter. It’s not always possible for someone to simply use another form of social media to prevent unwanted interactions. Not all platforms have the same functions or norms. The default public settings of Twitter (and its permitting anonymous accounts) make it a much different place from Facebook, which defaults to private posts and requires full names from its users. Twitter has been a successful home for activism and real-time crisis information. Despite recent attempts to launch competing sites, no other social media site compares to Twitter in terms of reach and, for better and worse, ethos. One can’t simply leave the party to avoid interactions as one does in real life; there’s no viable alternative place to go.

Third, blocking gives users more agency than reporting users for suspension or banning. Blocking is immediate, user-achieved, and not dependent on another entity’s approval. It is more efficient than reporting users for suspension or banning, because it does not require either the time or effort that goes into deciding the results of these reports. Neither does blocking depend on the blocked user having violated one of the terms of use on the site, such as rules against hate speech. If I can block another user for any personal reason whatsoever, I have much greater control over my social life online.

With these considerations in mind, it’s worth pointing out that one personal account blocking another is not a case of government censorship or online moderation. People are free to block for any reason whatsoever, without being beholden to principles about what a government or business may rightly censor. There are moral considerations when people act towards each other in any situation, so this is not to say that no moral considerations could make blocking wrong in a particular case. But individuals do not have a blanket moral obligation to allow others to say whatever they want to them, even though a government or the site itself might have no standing to prevent the person from saying it.

One worry you might have is that blocking could intensify echo chambers. An echo chamber is a social community bound by shared beliefs and insulated from opposing evidence. If a person always blocks people who challenge their political ideas, they will likely find themselves in an environment that’s not particularly conducive to forming justified beliefs. This effect is intensified if a user’s blocking actions are fed into the algorithm that determines what posts show up on their feed. If the algorithm favors displaying accounts the user is likely to have much in common with, then each block provides highly useful information, likely resulting in further insulation from differing viewpoints beyond the specific account blocked.

Outrage breeds engagement, so the algorithm may instead use information about blocks to show posts that might get the user riled up. But seeing chiefly the most extreme of opposing views does not necessarily diminish the strength of an echo chamber. In fact, if one’s political opponents are shown mostly in an extreme form most likely to generate engagement, their presence might actually serve as evidence that one’s own side of the issue is the rational one. So, even an algorithm that uses blocks as an indication of what outrages the user — and therefore feeds the user more of that — could contribute to a situation where one is insulated from opposing viewpoints. This issue stems from the broader structure and profit incentives of social media, but it is worth considering alongside the issues of privacy, safety, and agency discussed above — in part because the ability to foster an environment that is conducive to forming justified beliefs is itself an important part of safety and agency.

Although it is imperfect, blocking on social media protects users’ privacy, safety, and agency. It is not a matter of government or corporate censorship, and it is necessary for protecting the moral rights of users. Contrary to Musk’s claim, a blocking feature makes moral sense.

Privacy, Discrimination, and Facial Recognition at Airports

photograph of line of people with luggage at airport

If you find yourself traveling, you may notice that your identity is being verified in a new way. Instead of showing your ID to an employee in the security line, you may find that you’re asked to insert it into a machine while a camera captures your image. The machine’s software will then determine whether that image matches the person on your ID. Some airports use databases for identification so that the ID does not even need to be scanned.

The technology has been developed by the Transportation Security Administration, which has been quietly rolling it out at airports across the country. The primary advantages are that this system is potentially faster, easier, and more accurate. Airline travel in the middle of the 20th century was advertised as glamorous and comfortable; there now seems to be no end to the inconveniences travelers have to endure. To some, anything that makes the process less like an interrogation would count as an improvement.

On the other hand, many are alarmed to see this technology emerge without much warning. Some are concerned about the government having access to this kind of data. It is now allegedly being used to make airline travel easier, but there are lingering suspicions about what it could be used for in the future. It has become commonplace for people to discover that a corporation has used their data for purposes to which they did not knowingly consent; data is sold to third parties and used for targeted advertising. For many, these concerns are even more troubling when the entity gathering the information is the government. The government could potentially build a database of everyone’s faces and use it in settings in which citizens would not be comfortable. For instance, while smart buildings offer significant potential for more environmentally friendly institutions, some are also designed with facial recognition technology. Some argue that this would be an improvement — the technology could recognize potential threats or disgruntled former employees before acts of violence can take place. Others respond that this benefit would not be worth the violation of privacy that would result — the government could potentially know where people are all the time, at least when they are in or near government buildings. If the moral right to privacy involves maintaining control over one’s own body, that right seems to be substantially violated when corporations and the government are cyberstalking people all of the time.

There are also serious concerns about how these systems will determine which individuals count as threats. People are concerned about what have become familiar forms of algorithmic bias. There is data to support the idea that facial recognition programs perform worse at identifying the faces of people of color. A recent study concluded that Native American, Black, and Asian people were 100% more likely to be misidentified than their white counterparts, and that women were much more likely to be misidentified than men. (Middle-aged men had the highest accuracy rate of identification overall.) People of color already encounter racial profiling at airports, and this policy has the potential to make these problems worse. Our current political circumstances make discrimination even more likely. Heated political rhetoric has made life more challenging for Muslims and Chinese people, especially at airports. Further, concerns about being misidentified by AI airport security may create a chilling effect on travel for members of these groups, constituting a form of systemic racial oppression.

Those who defend the system point out that travelers can opt out of facial recognition by simply saying, “Please don’t take my photo.” If so, the argument goes, the government isn’t really violating people’s autonomy — they have the right to say “no.” There are, however, a number of responses to this argument. First, travelers may be concerned about what might happen to them if they refuse to comply. Travel is a critical human need, especially as our experience is increasingly globalized and our loved ones and livelihoods are more likely to be scattered across states, countries, and even continents. If a person is detained by security, they might miss the birth of a child or the chance to say goodbye to a dying relative. The circumstances at airports are inherently coercive, and people might be deeply concerned that they won’t get to their destination unless they go along. Second, a person may have a right to say “no” as a matter of policy, but it is very unlikely that any particular passenger will know that they have it. Finally, a person is unlikely to want to make waves, delay other travelers, and potentially embarrass themselves. If a “right of refusal” policy is coercive and lacks transparency, citizens cannot give fully free and informed consent.

Like so many recent developments in technology, facial recognition motivates questions about authority and political legitimacy. Who gets to make these decisions and why? The answers to these questions are far from obvious. Allowing those who stand to gain the most power or earn the greatest profit to dictate protocols seems like a bad idea. Instead, we may have to trust our elected representatives to craft policy. The problem with this approach is that, for many legislators, winning re-election takes precedence over any policy issue, no matter how dire. We need look no further than the lack of progress on climate policy to see that this is the case. Alternatively, we could bring questions of the greatest existential import to public referendum and decide them by a direct democratic process. The problem with this is the standard problem for democracy posed by philosophers for decades — the population as a whole can be woefully underinformed and act tyrannically.

One lesson that we’re left with is that we shouldn’t let these major changes blow by without comment or criticism. It’s easy to adopt a kind of cynicism that causes us to believe in technological determinism — the view that any development that can happen will happen. But policies are made by people. And one of the most important roles that sound public philosophy can play is to demand justification and ensure that policy is supported by deliberate and defensible moral principles.

Nowhere to Hide: Extracting DNA from Air, Water, and Sand

photograph of gloved hand taking water sample

David Duffy and his team from the University of Florida recently discovered a groundbreaking method for tracking the health and whereabouts of sea turtles. Because sea turtles are an endangered species, the scientists’ goal was to study their migration patterns and to identify the environmental factors that might be influencing their health and well-being. The researchers found that they were able to extract meaningful DNA samples from air, water, and sand at the beach. Those samples allowed them to draw conclusions about sub-populations and to test for the presence of pathogens that lead to a particularly deadly form of cancer in sea turtles.

The discovery that significant DNA information could be extracted from these sources is great news for conservation scientists as well as for people who care about the preservation and well-being of animals more broadly. Scientists can use genetic information about animals without disturbing them in their natural habitats; they can wait until an animal has vacated a space before using the genetic material left behind to learn more about the creature and that creature’s community.

Researchers also learned something with more controversial consequences. Meaningful amounts of human DNA were extracted from air, water, and sand as well — amounts of DNA that can pick out the genetic code of specific individuals. This means that human beings, like other animals, leave behind genetic information essentially everywhere we go. This discovery gives rise to many important moral questions.

One such question is: who owns discarded pieces of a person’s body? Does the person still have some rights of ownership over physical matter that comes from their own body? If so, do these ownership rights entail a corresponding right to decide what can be done with the matter? Or, instead, are discarded cells like trash — once we’ve shed them, we no longer have any reasonable claim to ownership over them? Should we adopt a “finders keepers” attitude when it comes to discarded genetic material?

One response may be that treating small bits of discarded material as part of a person’s body is impractical and unrealistic. If shedding cells is something we do everywhere we go, there can be no returning discarded cells. At that point, the living source has lost any control. It might be tempting to think that there isn’t much at stake here.

That said, humans don’t have the best track record when it comes to using genetic material in morally responsible ways. For example, in one famous case, a woman named Henrietta Lacks consented to a biopsy as part of her cancer treatment. Scientists used her genetic material for research and found that her cells — now called HeLa cells — had remarkable properties that led to major advances in medical treatment. For decades, Lacks’ family was not compensated in any way for their matriarch’s contribution. One reason to be concerned about Duffy’s discovery is that a person’s cells could easily be used to profit others without any compensation accruing to the source. If this is the case, a person’s discarded genetic material may just be a new capitalist frontier to commodify and exploit.

But there are other reasons to be concerned that genetic information will be misused. For instance, in the late 1980s, members of the Havasupai Tribe provided their genetic material for the purposes of studying Type II Diabetes, a condition from which many members of the tribe suffered. Unbeknownst to the donors, the genetic information was used to research migration patterns, inbreeding, and schizophrenia within the tribe. Migration studies of tribal members, in particular, could potentially disrupt the already tenuous relationship that Native Persons have with the land and provide another avenue for governmental exploitation. When genetic material is collected or used without consent, it can lead to further discrimination and racism.

In addition to these concerns, we also tend to think that a person is entitled to privacy when it comes to details about their own body. When we shed our DNA, we don’t do so intentionally; we don’t give consent. But if an institution or individual was able to extract DNA from a location where we unwittingly shed it, they could come to know all kinds of details about any of us. The right to privacy begins within the borders of one’s own body even if those borders might shift or extend.

Then, of course, there are the implications for forensic science. Since its discovery, DNA has changed the landscape in criminal justice. There is no doubt this has had some tremendous positive consequences. Killers who had gone free for decades to commit all sorts of atrocities were eventually captured using DNA, sometimes through the use of unconventional methods. That said, the presence of DNA is not always evidence that a specific individual committed a crime. Sometimes context gets lost when DNA evidence is found. Finding a person’s DNA at a scene, even when there is a harmless explanation for that fact, can blind investigators to other explanations and prevent them from looking into other viable suspects whose DNA was, for whatever reason, not extracted.

Duffy’s discovery encourages speculation about a future in which it is impossible to get away with committing a crime — one in which there will always be genetic evidence to connect a person to a scene at the time a crime was committed. In such a world, we might wonder, what happens to Fourth Amendment rights? We might be looking at a future in which the genetic tapestry of any space is, in a sense, open access. In such a world, what would it mean for search and seizure to be “unreasonable”?

Finally, we can ask the question about this technology that we find ourselves asking over and over in this age: is this knowledge worth pursuing, or are we opening a Pandora’s box that can never be closed? We tend to treat all technological knowledge as intrinsically valuable, as if we are always justified in pursuing new frontiers. It may be the case, however, that some knowledge is not worth having, such as the number of blades of grass on a lawn or the number of grains of sand on a beach. Other knowledge is worse than neutral or useless; it is, all things considered, harmful. Consider, for example, knowledge of how to construct biological weapons or weapons of mass destruction. We treat the pursuit of this kind of information as if it is inevitable, but it really isn’t. Should we view ourselves as subject to some kind of irresistible technological determinism, such that if it is possible to create new tech, we are powerless to stop it? Instead, we might do well to consider carefully the implications of our discoveries and regulate the technology in ways that respect fundamental values.

State of Surveillance: Should Your Car Be Able to Call the Cops?

close-up photograph of car headlight

In June 2022, Alan McShane from Newcastle, England was heading home after a night of drinking and watching his favorite football club at the local pub when he clipped a curb and his airbags were activated. The Mercedes EQ company car that he was driving immediately called emergency services, a feature that has come standard on the vehicle since 2014. A sobriety test administered by the police revealed that his blood alcohol content was well above the legal limit. He was fined over 1,500 pounds and lost his driving privileges for 25 months.

No one observed Mr. McShane driving erratically. He did not injure anyone or attract any attention to himself. Were it not for the actions of his vehicle, Mr. McShane may very well have arrived home safely and without significant incident.

Modern technology has rapidly and dramatically changed the landscape when it comes to privacy. This is just one case among many which demonstrates that technology may also pose threats to our rights against self-incrimination.

There are compelling reasons to have technology of this type in one’s vehicle. It is just one more step in a growing trend toward making getting behind the wheel safer. In the recent past, people didn’t have cell phones to use in case of an emergency; if a person got in a car accident and became stranded, they would have to simply hope that another motorist would find them and be willing to help. Cell phones changed that, but even this significant improvement to safety isn’t always accessible during a crash. One’s phone may not be within arm’s reach, and during serious car accidents a person may be pinned down and unable to move. Driving a car that immediately contacts emergency services when it detects an accident may often be the difference between life and death.

Advocates of this technology argue that a person simply doesn’t have the right to drive drunk. It may be the case that under many circumstances a person is free to gauge the amount of risk that is associated with their choices and then choose for themselves the amount that they are willing to take on. This simply isn’t true when it comes to risk that affects others in serious ways.

A person doesn’t have the right to just cross their fingers and hope for the best — in this case to simply trust that they don’t happen to encounter another living being while driving impaired.

When people callously rely on luck when it comes to driving under the influence, living beings can die or be injured in such a way that their lives are involuntarily altered forever. Nevertheless, many people simply do not think about the well-being of others when they make their choices. Since this is the case, some argue that if technology can protect others from the selfish and reckless actions of those who can’t be bothered to consider interests other than their own, it should.

Others argue that we can’t let technology turn any country into a police state. Though such people agree that there are clear safety advantages to technology that can help a person in the event of an accident, this particular technology does more than that — it serves as a non-sentient witness against the driver. This radically changes the role of the car. A vehicle may once have been viewed as a tool operated by a person — a temporary extension of that person’s body. Often cars used as tools in this way are the property of their operators. Until now, a person’s own property hasn’t been in a position to turn them in. Instead, if a police officer wanted information about some piece of a person’s body, they’d need a search warrant. This technology removes the element of choice on behalf of the individual when it comes to the question of whether they want to get the police involved or to implicate themselves in a crime.

This is far from the only technology we have to be worried about when it comes to police encroachment into our lives and privacy. Our very movement through our communities can be tracked by Google and potentially shared with police if we agree to turn location services on when using our phones.

Do we really have a meaningful expectation of privacy when all of the devices we use as extensions of our bodies are accessible to the police?

Nor is it only the police that have access to this information. In ways that are often unknown to customers, information about them is frequently collected by corporations and then used to motivate those customers to spend more and more money on additional products and services. Our technology isn’t working only for us; it’s also working for corporations and the government, sometimes in ways that pretty clearly run counter to our best interests. Some argue that a product on which a person spends their own hard-earned money simply shouldn’t be able to do any of this.

What’s more, critics argue that the only conditions under which technology should be able to share important information with any third party is if the owner has provided fully free and informed consent. Such critics argue that what passes for consent in these cases is nowhere near what would be required to meet this standard.

Accepting a long list of terms and conditions written in legalese while searching for crockpot recipes at the grocery store isn’t consenting to allowing police access to knowledge about your location.

Turning a key in the ignition (or, more and more often, simply pressing an ignition button) does not constitute consent to abandon one’s rights against self-incrimination or to make law enforcement aware of one’s blood alcohol content.

Advocates of such technology argue in response that technology has always been used as important evidence in criminal cases. For instance, people must be careful what they do in public, lest it be captured on surveillance cameras. People’s telephone usage has been used against them since telephones were invented. If one does not want technology used against them in court, one shouldn’t use technology as part of the commission of a crime.

In response, critics argue that, as technology develops, it has the potential to erode our Fourth Amendment rights against unlawful search and seizure and our Fifth Amendment rights against self-incrimination to the point of meaninglessness. Given our track record in this country, this erosion of rights is likely to disproportionately affect marginalized and oppressed populations. It is time now to discuss principled places to draw defensible lines that protect important democratic values.

Constitutional Deadlock Over Privacy: A Third Way?

photograph of protest sign in front of Supreme Court

Following the overturning of Roe v. Wade, a great deal of media attention has been focused on what comes next. The right to an abortion, granted by the original landmark case, was founded on a constitutional right to privacy. But it has already been made clear that similar rulings grounded in a constitutional right to privacy, such as Griswold v. Connecticut, could be at risk of being overturned as well. The Supreme Court has also attracted criticism for several other controversial decisions, prompting proposals for how to reform the Court or how to reverse these decisions. But with confidence in the courts falling to historic lows, many such proposals would likely only make the situation worse.

Perhaps it is time to stop worrying about what policies we want courts to protect and to start thinking about finding broad support for changes in process in the form of constitutional amendments.

The Supreme Court’s recent abortion decision, combined with rulings on school prayer, concealed guns, and voting rights, as well as worries about future rulings, has reignited debates about whether and how the Supreme Court should be reformed. The impeachment of justices who some feel misled Congress has been floated, and the topic of court-packing has resurfaced. The Constitution does not specify the number of judges on the Court, so Congress could simply pass legislation creating more positions and then have those positions filled by left-leaning justices to re-balance the Court. Term limits for Supreme Court justices would mean more turnover, preventing the Court from becoming too ideologically lopsided.

In addition to proposing reforms to the Court’s makeup, some have proposed reforms to its powers. Some now propose that Congress strip the Supreme Court of its jurisdiction over certain kinds of cases, or that legislation be passed requiring a supermajority of justices to strike down federal laws. It has even been suggested that, if a particularly controversial ruling comes from the Court, Congress or the President simply ignore it, under the constitutional theory known as departmentalism, which holds that each branch of government may decide on its own how to interpret the Constitution. In addition, there are several proposals to create mechanisms for Congress to override the Court, not unlike Canada’s notwithstanding clause.

While many of these proposals might appeal to some, they all have problems when it comes to putting them into practice.

After all, abortion rights proponents now find themselves in the same position as anti-abortion advocates did in the 1970s, and it took almost 50 years for them to get what they wanted. Proposals like court-packing simply do not have enough support.

It is important to note that much of the Supreme Court’s power rests on the confidence the public has in it. The Constitution grants few powers to the Supreme Court, and even its power of judicial review rests on the precedent of Marbury v. Madison; as has become all too clear, precedents are not set in legal stone. If people do not feel that the Court is impartial, they will be less inclined to heed its pronouncements. While some would like to see justices impeached or the Court packed, this would only serve to undermine confidence in the Court among those on the right, likely prompting retaliatory measures. This would weaken perceptions of the Court’s impartiality even more, effectively transforming the Supreme Court into a very exclusive legislature.

Meanwhile, having Congress override the Court’s decisions risks undermining the commitment to minority rights.

Fundamental protections would become a flimsy thing, being reversed whenever the opposing party comes to power. Limiting the High Court’s jurisdiction risks similar problems, simply offloading the same basic problem to an alternative body that the parties will shape so as to achieve their preferred policy objectives. All these efforts to manipulate the judicial system in order to secure specific political outcomes will only undermine overall public confidence in the Court.

Perhaps an alternative to such a standoff is to stop thinking about the desired results we wish courts to deliver and start thinking about broader legal principles to embed in the Constitution that could appeal to people on all sides of the spectrum. The legal question underlying so many contentious cases like Roe v. Wade is privacy. Abortion opponents charge that because privacy isn’t explicitly established in the Constitution, it isn’t protected. Rather than dealing with legal debates about implied rights, why not amend the Constitution to explicitly include privacy rights? Polls show that a vast majority of Americans are concerned about privacy issues. And with the rise of surveillance capitalism, and of AI accessing vast datasets, there may be room for broad support for proposals to embed some kind of privacy protections in the Constitution.

While getting the support needed for constitutional amendments is difficult (the last amendment was ratified in 1992), the increasing importance of privacy to broad segments of American society may create room for bargaining and compromise on these issues by both the left and the right. Recently, the constitutional-law commentator David French opined that the Court’s overturning of Roe v. Wade may actually help de-polarize America. Because the pro-life vs. pro-choice debate largely centered on Roe v. Wade, the two sides had to contest a precedent, not a specific policy. But as French observes,

Is there a hope that you would have something along the lines of a democratic settlement to the issue that makes abortion so much less polarizing in other countries around the world? Europe, for example, has long had more restrictive abortion laws than the United States, but the United States couldn’t move to a European settlement because Roe and Casey prohibited that.

Indeed, polls show that Americans have fairly nuanced views when it comes to abortion. Few people would favor an outright ban on the procedure, so it may not be so difficult to imagine a compromise proposal for adding privacy to the Constitution that would protect not only abortion rights but also other rights like access to contraception, gay marriage, and protection from online surveillance. Such a move would not only allow Americans to address newly emerging privacy issues but also settle old disputes. Abortion rights secured through constitutional amendment would also have a legitimacy among abortion opponents that Roe never did, preventing back-and-forth sniping at the Court for not upholding preferred policies.

While a constitutional amendment would take time and a lot of negotiation, it may yield a far more stable and broadly satisfying solution to the abortion debate than the alternatives above, while not undermining confidence in the court system itself. So instead of looking to courts to reach specific policy outcomes, perhaps the attention should be focused on building coalitions of support for broad legal principles that people can agree on.

Florida’s “Don’t Say Gay” Bill and Parental Rights

photograph of school girl sent out of class

On Tuesday, March 8th, the Florida Senate passed H.B. 1557, following its approval by the Florida House. It’s now just a signature from Governor Ron DeSantis away from becoming law. Opponents have labeled it the “Don’t Say Gay” bill due to a proposed, but withdrawn, amendment that would potentially require teachers to “out” LGBTQ+ students to their parents. Defenders of the bill argue that this is a misrepresentation; Gov. DeSantis has framed the bill as defending the rights of parents to not have young children indoctrinated, and some defenders, including Gov. DeSantis’ spokesperson Christina Pushaw, have said the bill is about preventing “grooming” of children, insinuating that critics are pedophiles or enablers.

To get a better understanding of this measure, we should ignore the noise and go directly to its heart. What does the bill actually say? Troublingly, not very much. The bill is seven pages long, two and a half of which are preamble. It requires schools to develop policies on notifying parents of changes in their child’s “mental, emotional or physical health or well-being.” In addition, it forbids school officials from encouraging students to withhold information about these matters from their parents.

However, the lightning rod for controversy is this sentence:

Classroom instruction by school personnel or third parties on sexual orientation or gender identity may not occur in kindergarten through grade 3 or in a manner that is not age-appropriate or developmentally appropriate for students in accordance with state standards.

Let’s break it down. There are two clauses separated by an “or.” So, each of these clauses is introducing a unique requirement. The first clause outright forbids “classroom instruction” for K-3 grade students on “sexual orientation or gender identity.” The second clause requires that all discussions from 4th grade onward are “age-appropriate.” Clearly, the bill does more than prohibit discussing sexuality with kindergarteners.

The trouble is that none of these terms are defined. There is no explanation of what “instruction” consists of and how it differs from, say, a discussion. Further, lines 21-23 of the bill’s preamble state that it is intended to prohibit discussion, creating internal incoherence about the bill’s goals. It contains no description or suggestion of what age-appropriate instruction would look like. There’s no statement about the kind of “change” in students’ “mental, emotional, or physical health or well-being” that might require teachers to inform parents.

Critics argue that the bill is designed to chill all discussion of gender identity and sexuality in schools through this vagueness. The bill does not set up criminal or misdemeanor punishments for violators. Instead, like the recent Texas abortion law, it gives parents the right to file suit against any school district or official that they believe violates the bill’s demands. Lawsuits are expensive and time consuming. Thus, many school officials would, justifiably, avoid engaging in behavior that could trigger a lawsuit.

So, critics offer scenarios like the following: Imagine a 1st grade classroom. One student, the child of two gay men, makes a comment about her dads. A confused student asks the teacher why her classmate has two dads when she only has one. Even though this isn’t instruction, the teacher may want to immediately squelch this conversation – a student could go home, say that she learned some families have two dads but no mom, and an upset parent may file suit. For similar reasons, any school officials who are members of the LGBTQ+ community may believe that they must hide this part of their identity from students.

This criticism is important – it gives us serious reason to question the bill, especially when considering the larger cultural context. However, even if this bill made no references to sexuality and gender identity, it would still contain something very problematic. This was revealed through an exchange on the floor of the Florida Senate. Senator Lori Berman asked if a school would be required to inform parents that their child requested vegetarian lunches. Senator Dennis Baxley, the bill’s sponsor, gave a non-answer in response – he merely repeated that parents should not be kept in the dark. This is, to me, quite telling of the bill’s intent.

Parental rights regarding education have become a hot topic in recent months. However, most of these discussions have dealt with rights that parents have against institutions, namely, the right to know about, and reject, contents of the curriculum. By comparison, very little has been said about what rights parents have against their children. H.B. 1557 gives a strong picture of parental rights – parents have a broad right to be told even what their children do not want to tell them. And the way the bill is framed seems to give parents the right to know whenever their child is engaged in questioning values.

Consider this case. A student in a 10th grade U.S. history class learns about the three-fifths compromise. She raises her hand and expresses some distress. She is deeply upset to learn that people were used as pawns for political purposes – representatives from Northern states literally did not want slaves counted among people, while Southern representatives wanted slaves counted as persons for the purposes of political power, but not in any way that would benefit the slaves. The student has a hard time reconciling this with the values of freedom and equality that purportedly motivated the Founding Fathers and feels that her image of the nation is shaken.

H.B. 1557 seems to require that the teacher report this distress to the student’s parents. Distress could be a change in her “psychological well-being,” especially when this concept is left undefined. But I think this overstates the rights that parents have over their children. Even children, especially adolescents, should have some rights to privacy.

Although not yet full adults in a biological or psychological sense, adolescents are in the process of discovering who they are, and they express agency while doing so. Part of this process involves questioning, in particular the questioning of values. This is often a painful and upsetting process. Like the experience of physical growing pains, the process of figuring out who you are by sloughing away what you are not can produce serious discomfort. If a young adult does not invite their parent(s) into this process, there is a reason for this – they do not view their parent(s) as able to constructively contribute to the process of self-discovery. This right to control who they invite into their process of self-building should be respected.

The point of H.B. 1557 seems to go well beyond its restrictions on instruction about sexuality and gender issues. The proposal stands to further stifle the space that adolescents have available to them to question the world and their place in it. It threatens to turn schools into a surveillance apparatus; school officials are now tasked with closely monitoring students and reporting any behaviors relevant to “critical decisions” to their parents. If defenders of the bill are correct and it is indeed just a way of respecting parental rights, then it does so at the expense of children’s rights.

Ultimately, as Rachel Robinson-Greene argued in an earlier post, this may reveal a disagreement about the purpose of education. For those who view education as the transmission of information with a goal of job training, school is obviously not the place for questioning. But if we view education as training adolescents to be citizens in a pluralistic democracy, to think critically, to understand themselves and justify themselves to others, or even as a form of liberation, then schools should allow young people the space to critically reflect on the world, even if this clashes with the values of their parents.

Defenders of parental rights often view themselves as protecting their children from indoctrination. But thinking that your child was indoctrinated because they do not share your values ignores a basic tenet of democratic society – that reasonable people may value different things and come to different conclusions when presented with the same information.

Do Politicians Have a Right to Privacy?

photograph of Matt Hancock delivering press briefing

On Friday, June 25th, 2021, British tabloid The Sun dropped a bombshell: leaked CCTV images of the then UK Health Secretary, Matt Hancock MP, kissing a political aide in his office. Video footage of the pair intimately embracing rapidly circulated on social media. Notably, the ensuing outrage centered not on the fact that Hancock was cheating on his wife (lest we forget, Prime Minister Boris Johnson is himself a serial offender), but on the hypocrisy of Hancock breaching his own social distancing guidelines. By the next day, with his position looking increasingly untenable, Hancock resigned. Thus, the man who had headed up the UK’s response to the COVID-19 pandemic over the previous 18 months was toppled by a single smooch.

In the wake of this political scandal, it is useful to take a step back and consider the ethical issues which this episode brings to light. Following the release of the video, Hancock pleaded for “privacy for my family on this personal matter.” What is privacy, and why is it valuable? Does a distinct right to privacy exist? Do politicians plausibly waive certain rights to privacy in running for public office? When (if ever) can an individual’s right to privacy be justifiably infringed, and was this the case in the Hancock affair?

It is widely accepted that human beings have a very strong interest in maintaining a hidden interior which they can choose not to share with others. The general contents of this interior will differ widely between cultures; after all, what facts count as ‘private’ is a contingent matter which will vary depending on the social context. Nevertheless, according to the philosophy professor Tom Sorell, this hidden interior can roughly be divided into three constituents (at least, in most Western contexts): the home, the body, and the mind.

There are a plethora of reasons as to why privacy is important to us. For instance, let us briefly consider why we might value a hidden psychological interior. Without the ability to shield one’s inner thoughts from others, individuals would not be able to engage in autonomous self-reflection, and consequently would be a different self altogether. Moreover, according to the philosopher James Rachels, the ability to keep certain aspects of ourselves hidden is essential to our capacity to form a diverse range of interpersonal social relationships. If we were always compelled to reveal our most intimate secrets, then this would not only devalue our most meaningful relationships, but would also make it impossible to form less-intimate relationships such as mere acquaintances (which I take to be valuable in their own right).

There is considerable debate over whether a distinct right to privacy exists. As the philosopher Judith Jarvis Thomson famously noted, “perhaps the most striking thing about the right to privacy is that nobody seems to have any very clear idea what it is.” According to Thomson, this can be explained by the fact that our seeming ‘right’ to privacy is in fact wholly derivative of a cluster of other rights which we hold, such as rights over our property or our body; put another way, our interest in privacy can be wholly attributed to our interest in other goods which are best served by recognizing a discrete, private realm, such that we have no separate interest in something called ‘privacy’.

Suppose that a right to privacy does in fact exist. Can this right to privacy be (i) waived, (ii) forfeited, or (iii) trumped? Let us go through each in turn. A right is waived if the rights-holder voluntarily forgoes that right. Many people believe that certain rights (for instance, the right not to be enslaved) cannot be voluntarily waived. However, intuitively it would seem that privacy is not such an inalienable right: there are plenty of goods which we may legitimately want to trade privacy off against, such as our ability to communicate with others online. It could be argued that, in choosing to run for public office, politicians waive certain rights to privacy which other members of the public retain, since they do so in the knowledge that a certain degree of media scrutiny is a necessary part of being a public servant. Perhaps, then, Hancock had waived his right to keeping his sexual life private, in virtue of having run for public office.

A right is arguably forfeited if the rights-holder commits certain acts of wrongdoing. For instance, according to the so-called rights forfeiture theory of punishment, “punishment is justified when and because the criminal has forfeited her right not to be subjected to this hard treatment.” For those who endorse this (albeit controversial) view, it could perhaps be thought that Hancock forfeited his right not to have this sexual life publicized, in virtue of having culpably committed the wrongdoing of breaching social distancing guidelines and/or hypocrisy.

Finally, can a right to privacy be trumped? Philosophers disagree about whether it is coherent to talk about rights ‘trumping’ one another. According to the philosopher Hillel Steiner, rights comprise a logically compossible set, meaning that they never conflict with one another. By contrast, philosophers such as Thomson maintain that rights can and do conflict with each other.

Suppose that we think that the latter is true. In an instance where an agent’s right to privacy conflicts with the right of another agent, we must determine whose interests are weightier and give them priority. In the case of the Hancock saga, it could be said that there was a strong public interest in knowing that the Health Secretary had breached his own social distancing guidelines. However, the mere existence of a public interest in knowing this information is not sufficient to generate a right on behalf of the public to find out this information; moreover, even if it did, this would not necessarily trump the right of the individual politician to privacy.

So, did the leaking of the CCTV footage breach Hancock’s right to privacy? And if so, were the newspaper reports nevertheless justified on balance? My own view is that Hancock had neither waived nor forfeited his right to privacy, and that his right to privacy was not trumped by other considerations – that is to say, I think that the leaking of the footage wronged Hancock in some way. Nevertheless, I have complete sympathy with the subsequent public reaction to the newspaper reports. Throughout the pandemic, many facts which had previously been regarded as paradigmatically ‘private’ (such as whether one was sexually active, and with whom) were suddenly subject to a very high degree of public intrusion. Set against this backdrop, the Hancock affair served as yet another instance of “one rule for the establishment, another for everyone else.”

In the Limelight: Ethics for Journalists as Public Figures

photograph of news camera recording press conference

Journalistic ethics are the evolving standards that dictate the responsibilities reporters have to the public. As members of the press, news writers play an important role in the accessibility of information, and unethical journalistic practices can have a detrimental impact on the knowledgeability of the population. Developing technology is a major factor in changes to journalism and the way journalists navigate ethical dilemmas. Both the field of journalism and its ethics have been revolutionized by the internet.

Increased access to social media and other public platforms of self-expression has expanded the role of journalists as public figures. The majority of journalistic ethical concerns focus on journalists’ actions within the scope of their work. But as the idea of privacy changes and more people feel comfortable sharing their lives online, journalists’ actions outside of their work come further under scrutiny. Increasingly, questions of ethics in journalism include journalists’ non-professional lives. What responsibilities do journalists have as public-facing individuals?

As a student of journalism, I am all too aware that there is no common consensus on the issue. At the publication I write for, staff members are prohibited from participating in protests for the duration of their employment. In a seminar class, a professional journalist discussed workplace moratoriums they’d encountered on publicly stating political leanings, and one memorable debate about whether or not it was ethical for journalists to vote — especially in primaries, on the off-chance that their vote or party affiliation could become public. Each of these scenarios stems from a common fear that a journalist will become untrustworthy to their readership due to their actions outside of their work. With less than half the American public professing trust in the media, according to Gallup polls, journalists are facing intense pressure to prove themselves worthy of trust.

Journalists have a duty to be as unbiased as possible in their reporting — this is a well-established standard of journalism, promoted by groups like the Society of Professional Journalists (SPJ). How exactly they accomplish that is changing in the face of new technologies like social media. Should journalists avoid publicizing their personal actions and opinions and opt out of any personal social media? Or should they restrict these things entirely to avoid any risk of them becoming public? Where do we draw the lines?

The underlying assumption here is that combating biased reporting comes down to the personal responsibility of journalists to either minimize their own biases or conceal them. At least part of this assumption is flawed. People are inherently biased; a person cannot be completely impartial. Anyone who attempts to pretend otherwise actually runs a greater risk of being swayed by these biases, because they become blind to them. The ethics code of the SPJ advises journalists to “avoid conflicts of interest, real or perceived. Disclose unavoidable conflicts.” Although this was initially written to apply to journalists’ professional lives, I believe that short second sentence is a piece of the solution. “Disclose unavoidable conflicts.” More effective than hiding biases is being clear about them. Journalists should be open about any connections or political leanings that intersect with their field. Doing so provides the public with all the information and the opportunity to judge the issues for themselves.

I don’t mean to say that journalists should be required to make parts of their private lives public if they don’t intersect with their work. However, they should not be asked to hide them either. Although most arguments don’t explicitly suggest journalists hide their biases, they suggest journalists either avoid public action that could reveal a bias or avoid any connection that could result in a bias — an entirely unrealistic and harmful expectation. Expecting journalists to either pretend to be bias-free or to isolate themselves from the issues they cover as much as possible results in either dishonesty or “parachute journalism” — journalism in which reporters are thrust into situations they do not understand and don’t have the background to report on accurately. Fostering trust with readers and deserving that trust should not be accomplished by trying to turn people into something they simply cannot be, but by being honest about any potential biases and working to ensure the information is as accurate as possible regardless.

The divide between a so-called “public” or “professional” life and a “private” life is not always as clear as we might like, however. Whether they like it or not, journalists are at least semi-public figures, and many use social media to raise awareness for their work and the topics they cover, while also using social media in more traditional, personal ways. In these situations, it can become more difficult to draw a line between sharing personal thoughts and speaking as a professional.

In early 2020, New York Times columnist Ben Smith wrote a piece criticizing New Yorker writer Ronan Farrow for his journalism, including, in some cases, the accuracy or editorializing of tweets Farrow had posted. Despite my impression that Smith’s column was itself inaccurate, poorly researched, and hypocritical, it raised important questions about the role of Twitter and other social media in reporting. A phrase I saw numerous times afterwards was “tweets are not journalism” — a criticism of the choice to place the same importance on, and apply the same journalistic standards to, Farrow’s Twitter account as his published work.

Social media makes it incredibly easy to share information, opinions, and ideas. It is far faster than many traditional methods of publishing. It can be, and has been, a powerful tool for journalists to make corrections and updates in a timely manner and to make those corrections more likely to be viewed by people who have already read a story and might not check it again. If a journalist intends them to be, tweets can, in fact, be journalism.

Which brings us back to the issue of separating public from private. Labeling advocacy, commentary, and advertisement (and keeping them separated) is an essential part of ethical journalism. But which parts of these standards should be extrapolated to social media, and how? Many individuals use separate accounts to make this distinction. Having a work account and a personal account, typically with stricter privacy settings, is not uncommon. It does, however, rule out many of the algorithmic tricks people may use to make their work accessible, and accessibility is an important part of journalism. Separating personal and public accounts divides an individual’s audience and prevents journalists from forming more personal connections with their audience in order to publicize their work. It also forgoes the engagement benefits of the more frequent posting that comes with using a single account. By being asked to abstain from a large part of what is now ordinary communication with the public, journalists are being asked to hinder their own effectiveness.

Tagging systems within social media currently provide the best method for journalists to mark and categorize these differences, but there’s no “standard practice” amongst journalists on social media to help readers navigate these issues, and so long as debates about journalistic ethics outside of work focus on trying to keep journalists from developing biases at all, none will emerge. Adapting to social media means shifting away from the idea that personal bias can be prevented by isolating individuals from controversial issues, and toward helping readers and journalists understand, acknowledge, and deconstruct biases in media for themselves by promoting transparency and conversation.

What Good Is Ignorance?

photograph of single person with flashlight standing in pitch darkness

Most of us think knowledge good, and ignorance bad. We justify this by pointing to all the practical goods that knowledge affords us: we want the knowledgeable surgeon and legislator, and not the ignorant ones. The consequences of having the latter are potentially dire. And so, from there, many people blithely assume ignorance is bad: if knowing is good, not knowing should be avoided.

What’s striking, though, is that people’s actions often don’t match their words: they will pay lip service to the value of knowledge, yet choose to remain ignorant despite having relatively easy means to know more or know better. The actions of these folks suggest that there is something they must value about ignorance — or, perhaps, they think gathering knowledge is more trouble than it’s worth. Part of the explanation here is no doubt that people are lazy — they are, to put the point more precisely, cognitive misers. However, we should be suspicious of one-factor explanations of complicated behavior. And knowledge looks like it is subject to the Goldilocks principle: we don’t want too little knowledge, but we don’t want too much knowledge either. Do you really want to know everything there is to know about the house you bought? Of course you don’t. While you want to know, say, whether the roof is in good condition and the foundation is sound, you don’t care exactly how many specks of dust are in the attic. And just as we can often overstate the value of knowledge, we can understate the value of ignorance: it turns out, there are some benefits to knowing less. We should canvass several of them.

First, consider the value of flow states: flow states are states of intense focus and concentration on the task at hand in the present moment; the merging of action and awareness, and the loss of self-reflection — what people often describe as ‘being in the zone.’ Flow states allow us to achieve amazing things, whether in the corporate boardroom, the courthouse, or the basketball court, and in many tasks in between. We may wonder how flow states are related to ignorance. Here we must understand what is required to be in a flow state: intensive and focused concentration on what one is doing in the present moment, and the loss of awareness that one is engaging in a specific activity, among other things. When we’re in a flow state while writing, say, we focus to the point of immersion in the writing process, inhibiting knowledge of what we’re doing. We do not focus on the keystrokes necessary to produce the words on the page or think too much about the next sentence to come. Athletes often describe how it feels to be in a flow state in similar terms.

Next, consider the value of privacy, where we value the ignorance of others. We often value privacy — others’ ignorance of our words and actions — in practice, even if we say things dismissive of privacy. When the issue of state surveillance is broached, some retort that they don’t fear the state knowing their business since they’ve done nothing wrong. The implication here is that only criminals, or folks up to no good, would value their privacy, whereas law-abiding citizens have nothing to fear from the state. Yet their actions belie their words: they password-protect their accounts, use blinds and curtains to prevent snooping into their homes, and so on. They, in other words, intuitively understand that privacy is valuable for leading a normal life, having nothing to do with criminality. The fact that they would be reticent to forgo their privacy says volumes about what they really value, despite their expressed convictions to the contrary. We can think about the value of privacy by thinking about a society where privacy is absent. As George Orwell masterfully put the point:

“There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to. You had to live—did live, from habit that became instinct—on the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized.”

And finally, sometimes we (rightly) value our ignorance of other people, even those closest to us. Would you really want to know everything about people in your life — every thought, word, and deed? I’m guessing for most folks the answer is no. As the philosopher Daniel Dennett nicely explains:

“Speaking for myself, I am sure that I would go to some lengths to prevent myself from learning all the secrets of those around me—whom they found disgusting, whom they secretly adored, what crimes and follies they had committed, or thought I had committed! Learning all these facts would destroy my composure, cripple my attitude towards those around me.”

We thus have a few examples where ignorance — in different forms — is actually quite valuable, and where we wouldn’t want knowledge. This is some confirmation for the Goldilocks principle applied, not just to knowledge, but to ignorance too (stated in reverse): we don’t want too much ignorance, but we don’t want too little ignorance either.

Medical Privacy and the Public’s Right to Know

photograph of President Trump with face mask giving thumbs up from within SUV

When I first heard that President Donald Trump tested positive for COVID, I began following the regular updates about his condition. I read through the updates from his doctor, I checked the dates on his previous COVID tests to determine when he was probably infected, I cross-referenced the results of his medical tests with what actuarial data exists for COVID cases, and I regularly checked in to see if President Trump’s condition was improving or worsening.

Or, more precisely, those were all things I wanted to do. Why didn’t I do them? Because the White House did not release detailed medical information. We got some minimal updates about the president’s condition, but those updates were devoid of specifics and often inconsistent. This bothered me, and I immediately added it to the tally of ways the Trump administration has been insufficiently transparent.

I was upset with the White House; I felt they were doing something wrong by not being more transparent. I felt I had a right to know about my president’s health condition! Not only that, I felt I had a right to know about the health condition of a current presidential candidate less than a month from the election. I felt that President Trump, as both the sitting president and a presidential candidate, did something wrong by not releasing details of his condition to the public.

But is my feeling that the president had such an obligation of transparency correct? There is an extensive academic discussion of this very question. To what extent do candidates retain rights to medical privacy, and to what extent does the public gain a sort of moral right to medical transparency? That is the question I want to consider here.

Normally people do not have an obligation to disclose private medical information. That is true even if that information is materially significant to others. Certain illnesses could perhaps compromise my ability to teach at FSU — FSU thus has a real interest in knowing the results of my medical tests. That does not mean I have an obligation to send FSU that information. If I decide I can no longer do my job, then I should let FSU know and possibly I should resign. But if I think I can continue my work, FSU does not have some right to the same medical information so that they can make their own determination. My interest in medical privacy supersedes their interest in my medical history. FSU can fire me if the quality of my work suffers. But they don’t have a right to my medical information so that they can preemptively decide if they think my work will suffer.

Now, there are limits to our medical privacy. If diagnosed with HIV, one ought to disclose that diagnosis to prior sexual partners. Similarly, in cases of medical emergency, the state might need to violate medical privacy to prevent the spread of a highly infectious disease. Even in those cases, however, it is still reasonable to maintain as much privacy as possible. If I were diagnosed with COVID, you might be justified in telling people I’ve been around that they may have been exposed to COVID-19. But that would not justify your telling others whether or not I was given supplemental oxygen.

So what explains my intuition that the president should release the details of his medical tests and treatment? It is something about the difference between me and the president. The first difference that comes to mind is how much more important the president’s work is. President Trump’s decisions are more influential. As such, perhaps there is a large enough public interest to override claims to privacy.

That explanation does not quite capture my moral intuitions though. I don’t just intuit that President Trump has an obligation of transparency, I also intuit that the President of Malta has an obligation of transparency to the people of Malta. I don’t, however, intuit that Jeff Bezos has an obligation of transparency, even though Jeff Bezos almost certainly has far more power and influence than the President of Malta.

So if President Trump has a special obligation of medical transparency, that must be because of his governmental role. It is not that President Trump is more powerful than I am, but that President Trump can act with the coercive force of law. Put another way, President Trump is actually two persons. He is the private person of Donald Trump and also the public person of the President of the United States. President Trump can act as a private person, such as when he gives his children Christmas presents or writes personal letters to friends. President Trump can also act as a public person, such as when he signs laws or writes letters to foreign dignitaries on behalf of the United States. Indeed, it is precisely by distinguishing these two persons that we can make sense of concepts like political corruption. An act is corrupt when a politician uses their public function for a private purpose.

And indeed, we do think that President Trump retains privacy interests over his personal letters in a way he does not over letters he writes as the head of state. So, perhaps in deciding to run for President of the United States, the private person of Donald Trump forfeits certain privacy interests due to the demands of public transparency that a democratic electorate has over the head of the executive. Perhaps because the government acts with the consent of the governed, informed consent implies the public has special rights to know.

This seems like the strongest case for why President Trump, unlike myself, might have special obligations of medical transparency. There is, however, a powerful argument against norms of medical transparency. One of the interests we have in medical confidentiality is that it encourages people to seek out healthcare. If I know my doctor will keep a potentially damaging diagnosis secret, then I am more likely to go to the doctor. Presidents are political creatures; they need to factor in public reactions to what they do. Thus, a politically savvy politician might well be unwilling to undergo certain medical tests if they know there is a norm of disclosure. And this might be especially concerning for someone in a position as important as the President of the United States. The medical ethicist George Annas has argued that in general “we should encourage our leaders to seek such help whenever they feel they need it, both for their own sakes and for ours, and protecting their medical privacy is essential if this is to happen.” If presidents were morally required to disclose consultations with a psychiatrist, then presidents would be much less likely to consult them. Having the ‘leader of the free world’ unwilling to consult with medical professionals, however, is a scary place to be. We don’t want a president refusing needed supplemental oxygen just because they fear the consequent political blowback. The best way we know to prevent that, however, is to maintain strong norms of medical privacy.

It seems reasonable to think that one might forfeit a deontological right to medical privacy by running for president. But it also seems reasonable to think that there are independent reasons to maintain norms of medical privacy that go beyond merely personal rights.

Oh, and a quick postscript for those readers who have never watched The West Wing. It is a great show, and the third season largely deals with questions of medical privacy and the public’s right to know.

The Quandary of Contact-Tracing Tech

image of iphone indicating nearby infections

All over the country, states are re-opening their economies. This is happening in defiance of recommendations from experts in infectious disease, which suggest that states only re-open after they have seen a fourteen-day decline in cases, have capacities to contact trace, have sufficient personal protective equipment for healthcare workers, and have sufficient testing capabilities to identify hotspots and deal with problems when they arise.

Experts do not insist that things need to be shut down until the virus disappears. Instead, we need to change our practices; we need to open only when it is safe to do so and we need to employ common sense practices like social distancing, mask-wearing, and hand-washing and sanitizing when we take that step. The ability to identify people who either have or might have coronavirus and to contact those with whom they might have come into contact could play a significant role in this process. Instead of isolating everyone, we could isolate those we have good reason to believe may have become infected.

Different countries have approached this challenge differently. Many have made use of technology to track outbreaks of the virus. Without a doubt, these approaches involve balancing the value of public safety against concerns about personal privacy and undue governmental intrusion into the lives of private citizens.

Many in the West were surprised to hear that Shanghai Disney was scheduled to re-open, which it did on May 11th. Visitors to the park won’t have the Disney experience that they would have had last summer. First, unsurprisingly, Disney is restricting the number of people it will allow into the park to 24,000 a day. This is down from its typical 80,000 daily guests. When guests arrive, they must have their temperatures taken, must use hand sanitizer, and must wear masks. Crucially, they must open an app on their phone at the gate that demonstrates to the attendant that their risk level is green.

Since the COVID-19 outbreak, people in China have been required to participate in a system that they call the “Alipay Health Code.” To participate, people download an app on their phones which makes use of geolocation to track the whereabouts of everyone who has it. People are not required to have a COVID-19 test in order to comply with the demands of the app. Instead, the app tracks how close people have come to others who have confirmed cases of the virus. The app assigns a person a QR code depending on their risk level. People with a green designation are low risk and can travel through the country and can go to places like restaurants, shopping malls, and amusement parks with no restrictions. Those with a yellow designation must self-quarantine for nine days. If a person has a red designation, they must enter mandatory government quarantine.

At first glance, this app appears to be a reasonable way of finding balance between preventing the spread of disease on one hand, and opening up the economy and freeing people from isolation on the other. China isn’t simply accepting the inevitable—opening up the economy and disregarding its obligation to vulnerable populations. Instead, it is trying to maximize the well-being of society at large.

Things are more complicated than they might originally appear. First, the process is not transparent to citizens. The standards for reassignment from one color designation to another are not made public. Some people are stuck in mandatory government quarantine without knowing why they are there or how long they might expect to be detained.

There are also concerns about regional discrimination. It appears that a person can be designated a particular threat level simply because they are from or have recently visited a particular region. Citizens have no control over how this process is implemented, and the concern is that decision-making metrics might be discriminatory and might serve to reinforce oppressive social conditions that existed before COVID-19 was an issue. We know that COVID-19 disproportionately affects people living in poverty who are forced to work in unsafe conditions. This kind of tracking may make life for these populations even worse.

There are also significant concerns about the introduction of a heightened degree of governmental surveillance. Before COVID-19 hit, the Chinese government had already slowly begun to implement a social credit system that assigns points to people based on their social behaviors. These points then dictate the quality of services for which the people might be eligible. The Alipay Health Code increases governmental surveillance and encroachment. When people download the Alipay app, the program that is launched includes a command labeled “reportInfoAndLocationToPolice” that sends information about that person to a secure server. It is unclear for what purpose that information will be used in the future. It is also unclear how long it will be mandatory for people in China to have this app on their phones.

But China is not the only country that is using tracking technology to manage the spread of COVID-19. Other countries doing this include South Korea, Singapore, Taiwan, Austria, Poland, the U.K., and the United States. There are advantages and disadvantages to each system. Each system reflects a different balance of important societal values.

South Korea’s system keeps its residents informed of the movement of people who have tested positive for COVID-19. The government sends out texts informing people of places these individuals have been so that others who have also been to those places know whether they might be at risk. This information also lets people know which places might be hotspots so they know to avoid those places. All of this information is useful to prevent the spread of the virus. That said, there are serious challenges here too. Information about the location of individuals at particular times leads to speculation about their behaviors that might lead to discrimination and harassment. The information is anonymous in principle; COVID-19 patients are assigned numbers that are used in reports. In practice, however, it is often fairly easy to deduce who the people are.

Some countries, like the U.K., Singapore, and the United States, have “opt-in” tracking programs. Participation in these programs is voluntary, and there tend to be regional differences in what they do and how they operate. Singapore uses a system called “TraceTogether.” Users of the app turn on Bluetooth capabilities for their devices. Each device is associated with an anonymous code. Devices communicate with one another and store each other’s anonymous codes. Then, if a person has interacted with someone who later tests positive, they are informed that they are at risk. They can then take action; they may be tested or may self-quarantine. This system appears to have established a comfortable balance between competing interests.
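
To make the mechanism concrete, here is a minimal sketch of the exchange-and-match idea behind Bluetooth contact tracing. It is an illustration only: TraceTogether’s actual protocol is considerably more sophisticated, and every name below is hypothetical.

```python
# Toy model of opt-in Bluetooth contact tracing: devices exchange
# anonymous codes, and a device is "at risk" if any code it has logged
# later appears on a list published by confirmed cases.
import secrets

class Device:
    def __init__(self):
        self.my_code = secrets.token_hex(8)  # anonymous identifier
        self.encounter_log = set()           # codes heard over Bluetooth

    def encounter(self, other):
        """Simulate two phones in Bluetooth range exchanging codes."""
        self.encounter_log.add(other.my_code)
        other.encounter_log.add(self.my_code)

    def at_risk(self, infected_codes):
        """Check the local log against codes from confirmed cases."""
        return bool(self.encounter_log & infected_codes)

alice, bob, carol = Device(), Device(), Device()
alice.encounter(bob)              # Alice and Bob met
infected = {bob.my_code}          # Bob later tests positive
print(alice.at_risk(infected))    # True:  Alice is notified
print(carol.at_risk(infected))    # False: Carol never met Bob
```

The voluntary nature of the scheme is visible even in this toy version: a user is only protected to the extent that the people they meet are also running the app.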

One problem, however, is that its voluntary nature results in low participation numbers—only 1.5 million of Singapore’s 5.7 million people are using the app. A person thus knows that if they have been in contact with another app user who contracts COVID-19, they’ll hear about it. But this kind of system doesn’t achieve that much-desired balance between concerns for public safety and concerns for a healthy functioning economy. If a person knows only about some, but not all, of the people they’ve encountered who have tested positive for COVID-19, they’re no safer out in the world as a consumer in a newly-opened economy. This app also does nothing to prevent the spread of the virus by asymptomatic people who may never feel the need to get tested because they feel fine.

There are other, less straightforward ways of collecting and using data about the spread of the virus. Government agencies are obtaining geo-tracking information from corporations like Google and Facebook. Most users don’t pay much attention when an app asks if it can track the user’s location. People tend to provide a morally meaningless level of consent—they click “okay” without even glancing at terms and conditions. Corporations use this information for all sorts of purposes. For example, police agencies have accessed this information to help them solve crimes through a process of “digital dragnet.” Because these apps track people’s movements, they can help the government see who was present at sites later identified as hotspots and identify where people at those sites at the time in question went next. This can help governments direct their attention to where it might do the most good.

Again, in many ways, this seems like a good thing. We don’t want to waste valuable time searching for information where there isn’t any to be found. It’s best instead to find the clues and follow them. On the other hand, this method of obtaining information highlights something troubling about trust and privacy in the United States. A Pew poll from November 2019 suggests that citizens view themselves as having very little control over who is collecting data about them and very little knowledge about what data is being collected or the purposes for which it is being used. Even so, people tend to pay very little attention to the fact that they are being tracked. They simply accept the notion that, if they want to use an app, they have to accept the terms and conditions.

People concerned about personal liberties are front and center on the public stage right now as their protests make for attention-catching headlines. People are unlikely to want to be forced by the government to use a tracking app. Their fears are not entirely unfounded—China’s program seems to open the door for human rights violations and a troubling amount of governmental surveillance of private citizens. Ironically, though, these people give that same information without any fuss to corporations through the use of apps. This may be even worse. At least in principle, governments exist for the good of the people, while the raison d’être of corporations is to make a profit.

The case of tracking poses a genuine moral dilemma. There are very good public health reasons to use technology to track and control the spread of the virus. There are also very good reasons to be concerned about privacy and human rights violations. Around 3,000 people died in the tragic terrorist attacks that took place on September 11th, 2001. As a result, Congress passed The Patriot Act, which significantly limited the privacy rights of the people. Its effect on the way respect for individual privacy changed at airports is also noteworthy. How much privacy should we be willing to give up in exchange for safety? If we were willing to give up privacy for safety in response to 9/11, how much more willing should we be to do so when the death count is so much higher?

When It Comes to Privacy, We Shouldn’t Have to “EARN-IT”

photograph of laptop with a lock with keys on it

At the moment, the subject on everyone’s minds is COVID-19, and for good reason: the number of infected and dying in the United States and around the world is growing every day. But as Kenneth Boyd has pointed out, there are a number of subjects that are getting ignored. There is a massive locust swarm devastating crops in East Africa. There is an ongoing oil war driving gas prices down and decimating financial markets. And, in the United States, Congress is considering passing a bill that would have significant negative impacts on privacy and free speech on the internet. The bill in question is the EARN-IT Act, and the reason it is not capturing popular attention is obvious: viruses are scary, fast-moving, and make their way into people’s homes. This bill is complex, and understanding its ramifications for people’s rights to privacy and free speech requires a good deal of legal context.

But first, what is the EARN-IT Act? Legislators are clearly not marketing it as an attack on privacy and free speech, since such a bill would be widely unpopular. The EARN-IT Act is instead presented as a necessary measure to combat the widespread issue of child pornography, or, as the act would admirably rename it, “child sexual abuse materials” on the internet. This is a big problem. Right now, a ring of child abusers using the encrypted messaging app Telegram is being uncovered in South Korea. Proponents of the bill view the encryption that Telegram and other apps like WhatsApp, and soon Facebook, use as a tool for child abusers to evade government detection and prosecution. They see the owners of these apps as neglecting their responsibility to monitor the content going through their servers by encrypting said content so even they cannot see it. Essentially, these companies seem to know that child abuse is a problem on their platforms, and instead of putting in the effort to find and report it, they simply blindfold themselves with encryption for their users.

So how will the EARN-IT Act resolve this seemingly willful ignorance and bring child abusers to justice? Well, here is where the issue gets complex and requires legal context. The act establishes a government committee that would draw up a recommendation of “best practices” for companies to follow to minimize the spread of child sexual abuse materials on their websites. This recommendation would also be binding: if companies failed to follow the committee’s recommendations, they could lose something called “Section 230 immunity,” which ordinarily keeps them from being prosecuted when child sexual abuse materials are found on their websites. Right now, if the government finds these materials on a hard drive belonging to you or me, we would go to jail for at least 5 years. But if those same materials are found on Facebook or Telegram’s hard drives, the site owners will not go to jail, all due to that Section 230 immunity. Understanding why such a difference makes any sense requires understanding the history behind it and the distinction between speakers (the ones who create and share child sexual abuse materials) and distributors (sites like Facebook or Telegram that child abusers may use to share the evidence of their abuse).

In legislation prior to the internet, the legal burden for illegal speech (which, though it sounds weird when you say it, includes images) fell only on publishers and speakers, not distributors. If a book containing illegal content was sold in a bookstore (a “distributor”), the bookstore would not be responsible; only the author (“speaker”) of the content and the publisher would be. Obviously the author would know he broke the law and presumably his publisher should have had the sense to check what they were publishing. But the store that sold the book could not have this responsibility since they might sell thousands upon thousands of titles and could not spend the time checking each one. If the government put that responsibility on the bookstore, bookstore owners might be afraid to sell more titles than they could reasonably read through and check themselves. Fewer ordinary writers would be able to get their works to their audiences. So, many authors would not bother writing, knowing their books would never be sold. While the government would never directly force them to stop speaking, authors would be indirectly silenced. As Supreme Court Justice William Brennan put it in the Court’s unanimous opinion in Smith v. California, the law cannot “have the collateral effect of inhibiting the freedom of expression, by making the individual the more reluctant to exercise it.”

The question, then, is whether Facebook or Telegram should count as distributors or publishers. In 1996, Congress decided the issue with Section 230 of the Communications Decency Act. In this section was the following provision: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Let’s break this down. An “interactive computer service” would be any website or app people share content on (like Facebook or Telegram). The “provider” would be the owner, and “user[s]” would be, of course, anyone who used the site. Any other “information content provider” would be another person sharing information on the “interactive computer service.” If someone (the “information content provider”) posts illegal content on Facebook (the “interactive computer service”), that means that neither the owners of Facebook (the “provider”) nor someone who shares that content (a “user”) is legally responsible for it. They are not legally responsible because they are not “treated as the publisher or speaker” of that content. So, when child sexual abuse materials are shared using Facebook or Telegram’s servers, those companies have immunity.

The problem is that this immunity does not incentivize sites to find and remove these materials. There is no penalty for allowing them to spread, so long as the site owners never see them. And, with encryption, they can’t see them. One binding “recommendation” the committee created by the EARN-IT Act might make is to require sites to create a “backdoor” to their encryption that would allow the government to bypass it. One of the bill’s main sponsors, Senator Lindsey Graham, has said that he intends for it to end encryption like this for all websites. He has said that “We’re not going to go blind and let this abuse go forward in the name of any other freedom,” in reference to Facebook’s plans to institute end-to-end encryption of their messaging app.

Essentially, if two people on Telegram are texting, it is as though they are both going into a locked house to talk where only they have the keys. These keys would be unique to this particular house. A “backdoor” would be an unlocked entrance to every house that anyone who knew about it could get through and listen to the conversations people are having. There are two serious problems with this proposal: first, the government will be able to see, without any warrant and without checking with the site owners, any information from users; second, since there is no way to guarantee that only the government ever finds out about such a backdoor, it would be possible for anyone who finds the backdoor to access all of your personal information online. Privacy on the internet would quickly disappear.
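
For readers who want the locked-house analogy in code, here is a toy sketch of the escrow idea. It uses a shared symmetric key as a stand-in for the end-to-end protocols real messengers use, and the escrow database below is entirely hypothetical.

```python
# Toy illustration of why a key-escrow "backdoor" weakens encryption.
# A shared symmetric key stands in for real end-to-end protocols here.
from cryptography.fernet import Fernet

# Two people agree on a key only they hold: the "locked house."
shared_key = Fernet.generate_key()
ciphertext = Fernet(shared_key).encrypt(b"see you at noon")

# Only holders of shared_key can read the message.
assert Fernet(shared_key).decrypt(ciphertext) == b"see you at noon"

# A mandated backdoor amounts to escrowing a copy of every key with a
# third party. Whoever obtains the escrow store (the government, or
# anyone who breaches it) can read every conversation.
escrow_store = {"conversation_42": shared_key}  # hypothetical escrow
print(Fernet(escrow_store["conversation_42"]).decrypt(ciphertext))
```

The trouble the article describes is visible in the last two lines: whoever holds the escrow store can decrypt without ever involving the participants in the conversation.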

Now it is important to remember that this recommendation to build backdoors into all websites is not a necessary consequence of the EARN-IT Act. The very possibility of the government taking away our privacy on the internet may be objectionable in principle, but nothing in the act makes that loss inevitable. Such an abridgment of our right to privacy would occur only if the committee ultimately decided to include backdoors in its recommendations. One of the bill’s other sponsors, Senator Richard Blumenthal, has said this would not be the case. But there is nothing stopping the committee from making such a recommendation either, which is where the trouble lies. If the committee acts in good faith, doing what is right and respecting our right to privacy, there will be no problem.

But, of course, politicians and governmental committees do not always act in good faith. The PATRIOT Act was enacted in the wake of 9/11 ostensibly to fight terrorism. As we all know, there was a darker side to this act, including the creation of a number of programs that allowed widespread wiretapping of ordinary citizens, among other violations of people’s rights. None of these harms were actual until they were. “Power corrupts, and absolute power corrupts absolutely” is such a common quotation as to become proverbial. All the committee of the EARN-IT Act has to do to end privacy on the internet is to make a simple recommendation and to threaten companies with the loss of Section 230 immunity. And, since without Section 230 immunity site owners could face serious jail time, sites would either have to manually check every post, every text, every image going through their servers (a virtual impossibility with the scale of internet content sharing) or would have to end encryption as instructed.

The internet is a wonderful and horrible thing, much like the human beings who compose it. The ability to communicate with anyone around the world is an amazing thing. And to be able to do so privately is even more amazing. But this amazing technology can, like every technology, be used both for good and for evil. Are we willing to sacrifice our ability to communicate privately online to wholly eliminate child sexual abuse on the internet? What value does privacy really have? The proposal and passage of bills like the EARN-IT Act threaten some of our most fundamental rights, both to speech and to privacy. Like the PATRIOT Act before it, it coats this dangerous abridgement of our rights in a veneer of justice, telling us that the cost to our freedom is worth it to right some wrong. As Graham would have it, we cannot “let this abuse go forward in the name of any other freedom.” But we can, and we must. If privacy is to be a true right, then it cannot be “earned.” The EARN-IT Act would have our right to privacy reduced in this way and so cannot be supported unless the powers of the proposed committee are harshly limited. Our rights are unalienable. The government’s right to limit our rights is not. If one of us, the citizens or the government, needs to “earn” their rights, it is them, not us.

Sensorvault and Ring: Private-Sector Data Collection Meets Law Enforcement

closeup photograph of camera lens

Concerns over personal privacy and security are amplifying as more information surfaces about the operations of Google’s Sensorvault, Amazon’s Ring, and FamilyTreeDNA.

Sensorvault, Google’s enormous database, stands out from the group as a major player in the digital profiling arena. Since at least 2009, it has been amassing data and constructing individual profiles for all of us based on vast information about our location history, hobbies, race, gender, income, religion, net worth, purchase history, and more. Google and other private-sector companies argue that the amassment of digital dossiers facilitates immense improvements in their efficiency and profits. However, the collection of such data also raises thorny ethical concerns about consent and privacy.

With regard to consent, the operation of Sensorvault is morally problematic for three main reasons. First, the minimum age required for managing your own Google account in North America is 13, meaning that Google can begin constructing the digital profiles of children, despite the likelihood that they are unable to comprehend the Terms of Service agreement or its implications. Their digital files are thus created prior to the (legal) possibility of providing meaningful consent.

Second, the dominance of Google’s Search Engine, Maps, and other services is making it increasingly less feasible to live a Google-free life. In the absence of a meaningful exit option, the value of supposed consent is significantly diminished. Third, as law professor Daniel Solove puts it, “Life today is fueled by information, and it is virtually impossible to live as an Information Age ghost, leaving no trail or residue.” Even if you avoid using all Google services, your digital profile can and will still be constructed from other data points about your life, such as income level or spending habits.

The operation of Sensorvault and similar databases also raises moral concerns about individual privacy. Materially speaking, the content in Sensorvault puts individuals at extreme risk of fraud, identity theft, public embarrassment, and reputation damage, given the detailed psychological profiles and life-patterns contained in the database. Google’s insistence that protective safeguards are in place is not particularly persuasive in light of recent security breaches, such as the Social Security numbers and health information of military personnel and their families being stolen from a United States Army base.

More abstractly, these data collection agencies represent an existential threat to our private selves. Solove argues in his book “The Digital Person” that the digital dossiers amassed by private corporations are eerily reflective of the files that Big Brother has on its citizens in 1984. He also makes a comparison between the secrecy surrounding these profiles and The Trial, in which Kafka warns of the dangers of losing control over personal information and enabling bureaucracies to make decisions about our lives without us being aware.

The stakes are growing increasingly high as Google, Amazon, and FamilyTreeDNA move beyond using data collection for their own purposes and are now collaborating with law enforcement agencies. These private companies attempt to justify their practices on the grounds that they are a boon to policing practices and are effectively helping to solve and deter crime. However, even if you are sympathetic to their justification, there are still significant ethical and legal reasons to be concerned by the growing relationship between data collecting private-sector companies and law enforcement agencies.

In Google’s case, the data in Sensorvault is being shared with the government as part of a new policing mechanism. American law enforcement agencies have recently started issuing “geofence warrants,” which grant them access to the digital trails and location patterns left by individuals’ devices in a specific time and area, or “geofence.” Geofence warrants differ significantly from traditional warrants because they permit law enforcement to obtain access to Google users’ data without probable cause. According to one Google employee, “the company responds to a single warrant with location information on dozens or hundreds of devices,” thus ensnaring innocent people in a digital dragnet. As such, geofence warrants raise significant moral and legal concerns in that they circumvent the 4th Amendment’s protection of privacy and probable cause search requirement.
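
To see why a single warrant can sweep in so many people, it helps to picture the query itself. The sketch below is schematic: the record layout and names are hypothetical, and Google’s internal systems are of course not public.

```python
# Schematic of a geofence query: from a store of location pings, return
# every device seen inside a bounding box during a time window.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationPing:
    device_id: str
    timestamp: datetime
    lat: float
    lon: float

def devices_in_geofence(pings, lat_min, lat_max, lon_min, lon_max, start, end):
    """Return every device observed inside the fence during the window."""
    return {
        p.device_id
        for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and start <= p.timestamp <= end
    }
```

Nothing in such a query distinguishes a suspect from a bystander; every device that pinged inside the box during the window is returned.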

Amazon’s Ring (a home surveillance system) is also engaged in morally problematic relations with law enforcement. Ring has partnered with hundreds of police departments in the US to provide them with data from customers’ home security systems. Reports suggest that Ring has shared the locations of its customers’ homes with law enforcement, is working on enabling police to automatically activate Ring cameras in an area where a crime has been committed, and that Amazon is even coaching police on how to gain access to users’ cameras without a warrant.

FamilyTreeDNA, one of the country’s largest genetic testing companies, is also putting consumers’ privacy and security at risk by providing its data to the FBI. FamilyTree has offered DNA testing for nearly two decades, but in 2018, it willingly granted law enforcement access to millions of consumer profiles, many of which were collected before users were aware of the company’s collaboration with law enforcement. While police have long been using public genealogy databases to solve crime, FamilyTree’s partnership with the FBI marks one of the first times a private-sector database has willingly shared the sensitive information of its consumers with governmental agencies.

Several strategies might be pursued to mitigate the concerns raised by these companies regarding consent, privacy, and law enforcement collaboration. First, the US ought to consider adopting safeguards similar to the EU’s General Data Protection Regulation which, for example, sets the minimum age of consent for Google users at 16 and stipulates that Terms of Service “should be provided in an intelligible and easily accessible form, using clear and plain language and it should not contain unfair terms.” Second, all digital and DNA data collecting companies should undergo strict security testing to protect against theft, fraud, and the exposure of personal information. Third, given the extremely private and sensitive nature of such data, regulations ought to be enacted to prevent private companies like FamilyTreeDNA from sharing profiles they amassed before publicly disclosing their partnership with law enforcement. Fourth, the US Congress Committee on Energy and Commerce should continue to monitor and inquire into companies as it did in its 2019 letter to Google. There needs to be greater transparency regarding what data is being stored and for what purposes. Finally, the 4th Amendment must become a part of the mainstream conversation regarding the amassing of digital dossiers, DNA profiles, and the access to such data by law enforcement agencies without probable cause.

Forget PINs, Forget Passwords

photograph of two army personnel using biometric scanner

By 2022, passwords and PINs will be a thing of the past. Replacing these prevailing safety measures is behavioral biometrics – a new and promising generation of digital security. By monitoring and recording patterns of human activity such as finger pressure, the angle at which you hold your device, hand-eye coordination, and other hand movements, this technology creates your digital profile to prevent imposters from accessing your secure information. Behavioral biometrics does not focus on the outcome of your digital activity but rather the manner in which you enter data or conduct a specific activity, which is then compared to your profile on record to verify your identity. The technology is largely used by banks at present, and research sites predict that by 2023, there will be 2.6 billion biometric payment users.
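
At its core, the verification step compares a fresh behavioral sample against a stored profile. The sketch below illustrates one simple way such a comparison could work; the features, values, and threshold are all hypothetical, and production systems rely on far richer statistical models.

```python
# Toy sketch of the matching step in behavioral biometrics: a fresh
# sample of behavioral features is compared to an enrolled profile and
# accepted only if it is close enough.
import math

ENROLLED_PROFILE = {"key_pressure": 0.62, "device_angle": 38.0, "swipe_speed": 1.4}
THRESHOLD = 0.15  # maximum allowed normalized distance (hypothetical)

def normalized_distance(sample, profile):
    """Root-mean-square of per-feature relative differences."""
    return math.sqrt(sum(
        ((sample[k] - v) / v) ** 2 for k, v in profile.items()
    ) / len(profile))

def verify(sample):
    """Accept the session only if behavior matches the enrolled profile."""
    return normalized_distance(sample, ENROLLED_PROFILE) <= THRESHOLD

# A session close to the profile passes; an imposter's likely will not.
print(verify({"key_pressure": 0.60, "device_angle": 37.1, "swipe_speed": 1.45}))  # True
print(verify({"key_pressure": 0.95, "device_angle": 12.0, "swipe_speed": 2.8}))   # False
```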

Biometric systems depend on a direct and thorough relationship between a user and technology. Consequently, privacy is one of the main concerns raised by critics of biometric systems. Because they function as a digitized reserve of detailed personal information, the possibility of unauthorized parties using biometric systems to access stored data is a legitimate fear for many. Depending on how extensive the use of biometric technology becomes, an individual’s biometric profile could be stolen and used against them to gain access to all aspects of their life. Adding to this worry is the potential misuse of an individual’s personal information by biometric facilities. Any inessential use of private information without the individual’s knowledge is intuitively unethical and considered an invasion of privacy, yet the US currently has no law in place requiring apps that record and use biometric data to disclose this form of data collection. If behavioral biometrics is already being used to covertly record and compile user activity, who’s to say how extensive and intrusive unregulated biometric technology will become over time?

Another issue with biometric applications is the possibility of bias against minorities, given the growing body of research suggesting that face recognition software identifies people of some races more accurately than others. A series of extensive independent assessments of face recognition systems conducted by the National Institute of Standards and Technology in 2000, 2002, and 2006 showed that males and older people are more accurately identified than females and younger people. Algorithms designed without accounting for such unintended biases risk producing systems that treat some groups unfairly, and are to that extent unethical.

By the same token, people with disabilities may face obstacles when enrolling in biometric databases if they lack physical characteristics used to register oneself in the system. An ethical biometric system must cater to the needs of all people and allow differently abled and marginalized people fair opportunities to enroll in biometric databases. Similarly, a lack of standardization of biometric systems that can cater to geographic differences could lead to compromised efficiency of biometric applications. Because of this, users could face discrimination and unnecessary obstacles in the authentication process.

Behavioral biometrics is gaining traction as the optimum form of cybersecurity, designed to prevent fraud via identity theft and automated threats, yet the social cost of incorporating technology as invasive and meticulous as this has not been fully explored. The social and ethical consequences the use of behavioral biometrics may have on individuals and society at large deserve significant consideration. It is therefore imperative that developers and utilizers of biometric systems keep in mind the socio-cultural and legal contexts of this type of technology and compare the benefits of depending on behavioral biometrics for securing personal information against its costs. Failure to do so can not only hinder the success of behavioral biometrics, but can also leave us unequipped to tackle its possible repercussions.

La Liga, EULAs, and Privacy in Public Spaces

photograph of televised soccer game

It was recently reported that Spanish soccer league La Liga took advantage of technology in users’ phones to detect bars that were streaming its games without a license. La Liga has now been fined $280,000 for disrespecting its clients’ privacy; it used the microphone as well as the phones’ GPS trackers to eavesdrop on the sound around users’ phones. Then, using sound detection tech similar to Shazam, it could identify locations where the game was being watched and check whether that location was a commercial establishment that had not paid to televise it.

Ten million downloads of La Liga’s app made for a vast amount of data. A Spanish court ruled that La Liga’s terms and conditions didn’t clearly articulate the possible use it would make of users’ phones; it therefore fined the league and ruled that the app must be taken down by June 30th.

Depending on the jurisdiction, there is some question regarding how binding EULAs can be. Some lawyers cite British common law for precedent in the UK to suggest that contracts must in principle be negotiable: “End user license agreements – the rules that govern the use of software and even hardware which, overwhelmingly, has already been bought and paid for – violate that legal principle.” Contracts that lack this quality extend beyond EULAs, however. For example, parking validation tickets and signs in businesses attempting to limit the liability of management leave no room for negotiation. Quick and non-negotiable contracts, such as those limiting the liability of a business for damage done to your car in a parking garage, are called “adhesion contracts.” Standards of reasonableness are often applied in circumstances where customers engage with businesses that attach these conditions to service. As Dan Ralls of “Ask a Lawyer” explains:

“Courts will refuse to uphold adhesion contracts that include unconscionable or unreasonable terms—you can’t have anything too crazy forced on you. They also have to be conspicuous when entered into—some courts have invalidated tiny adhesion contracts on the backs of parking tickets, though others have enforced them.”

Though legal action based on end-user license agreements, or EULAs, is rare, some precedent was set in 2015 when testers for the Xbox Live game Gears of War leaked information about the game. Because this behavior violated Microsoft’s EULA, the leakers were banned from using Xbox both on- and offline.

La Liga claims that its motive in coordinating audio and GPS data from its users was to “protect clubs and their fans from fraud.” It purports to attend only to the relevant “sonic fingerprint” of the game audio and to ignore more sensitive or private information its users’ microphones pick up.
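
That claim is easier to evaluate with a rough sense of how audio fingerprinting works. The sketch below illustrates the core idea that only compact hashes of audio features, not raw audio, need to be compared; the feature extraction here is a crude stand-in for the spectral-peak hashing that systems like Shazam use, and none of this reflects La Liga’s actual code.

```python
# Simplified sketch of fingerprint matching: hash coarse features of
# short audio windows and compare only the hashes, never raw audio.
import hashlib

def fingerprint(samples, window=1024):
    """Return a set of hashes summarizing successive audio windows."""
    prints = set()
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        # Quantized average magnitude as a toy per-window feature.
        feature = int(sum(abs(s) for s in chunk) / window * 1000)
        prints.add(hashlib.sha256(str(feature).encode()).hexdigest())
    return prints

def matches_broadcast(mic_samples, broadcast_prints, min_overlap=0.6):
    """Flag a match if enough microphone hashes appear in the broadcast's."""
    mic_prints = fingerprint(mic_samples)
    if not mic_prints:
        return False
    return len(mic_prints & broadcast_prints) / len(mic_prints) >= min_overlap
```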

A central doctrine in how we imagine privacy from a legal perspective is the distinction between zones where there is, and where there isn’t, a presumption of privacy. When people expect that their activities or possessions will remain private, a greater burden of justification must be met before violating that privacy by monitoring them, collecting information on their activities, or interfering with their property. In the US, this value is articulated in the Fourth Amendment.

La Liga’s app was deemed unreasonable in its use of users’ data, and its EULA did not make it clear that this was a function the app would perform. As smartphones and similar technology become more prevalent, it will be interesting to see whether the use of data from microphones in public spaces remains out of bounds.

Privacy and a Year in the Life of Facebook

Photograph of Mark Zuckerberg standing with a microphone

Mark Zuckerberg, the CEO of Facebook, declared on January 4 that he would “fix Facebook” in 2018. Since then, the year has contained scandal after scandal. Throughout the year, Facebook has provided a case study in questions regarding how to protect or value information privacy. On March 17, the New York Times and The Guardian revealed that Cambridge Analytica used information gleaned from Facebook users to attempt to influence voters’ behavior. Zuckerberg had to testify before Congress and rolled out new data privacy practices. In April, the Cambridge Analytica scandal was revealed to be more far-reaching than previously thought, and in June it was revealed that Facebook shared data with other companies such as Apple, Microsoft, and Samsung. The UK fined Facebook the legal maximum for illegal handling of user data related to Cambridge Analytica. In September, a hack accessed 30 million users’ data. In November, another New York Times investigation revealed that Facebook had failed to be sufficiently forthcoming about Russia’s political manipulation on the site, and on December 18 more documents came out showing that Facebook offered user data, even from private messages, to companies including Microsoft, Netflix, Spotify, and Amazon.

The repeated use of data about Facebook users without their knowledge or consent, often to manipulate their future behavior as consumers or voters, has led to Facebook’s financial decline and loss of public trust. The right to make your own decisions regarding access to information about your life is called informational privacy. We can articulate the tension in discussions over the value of privacy as one between the purported right to be left alone, on the one hand, and the supposed right of society to know about its members on the other. The rapid increase in technology that can collect and disseminate information about individuals raises the question of whether the value of privacy should shift along with this shift in actual privacy practices, or whether greater efforts need to be devoted to protecting the informational privacy of members of society.

The increase in access to personal information is just one impact of the rise of information technology. Technological advances have also affected the meaning of personal information. Commonly used apps and social media have made it easier to track your physical whereabouts, for instance; but the reason that Facebook’s data is so useful is that so much can be extrapolated about a person from seemingly unrelated behaviors, changing what sorts of information may be considered sensitive. Cambridge Analytica was able to use Facebook data to attempt to sway voting behavior because of the correlations between activity on the social media site and political behavior. Advertising companies can take advantage of the data to better target consumers.

When ethicists and policy makers began discussing the right to privacy, considerations centered on large and personal life choices and protecting public figures from journalists. The aspects of our lives that we would typically consider most central to the value of privacy would be aspects of our health, say, our religious and political beliefs, and other aspects of life deemed personal such as romantic and sexual practices and financial situations. The rise of data analysis that comes with social media renders a great deal of our behaviors potentially revelatory: what pictures we post, what posts we like, how frequently we use particular language, etc. can be suggestive of a variety of further aspects of our life and behaviors.

If information regarding our behavior on platforms such as Facebook is revealing of the more traditionally conceived private domain of our lives, should this information be protected? Or should we reconceive what counts as private? One suggestion has been to acknowledge the brute economic fact of the rise of these technologies: this data is worth money. It could thus be possible to abstract away from the moral value or right to privacy and focus instead on the reality that data is worth something; if individuals own the data about themselves, then perhaps they are owed the profits from the use of their data.

There are also moral reasons to protect personal data. If others have unrestricted access to a person’s whereabouts, health information, or the passwords protecting their financial accounts, that information could be used to harm them. Security and a right to privacy thus could be justified as harm prevention. They also could be justified via a right to autonomy, as data about one’s life can be used to unduly influence one’s choices. This is exacerbated by the ways that data changes relevance and import depending on the sphere in which it is used. For instance, health data used in your healthcare dealings has a different significance than the same data in the hands of potential employers. If individuals have less control over their personal data, this can lead to discrimination and disadvantage.

Thus there are both economic or property considerations and moral considerations for protecting personal data. Zuckerberg failed to “fix” Facebook in 2018, but greater transparency about protections, and regulation of how platforms can use data, would be positive steps toward respecting the value of privacy in 2019.

Getting Personal About Personal Genetic Information

Photograph of two boxes by the brand 23AndMe

Learning about the ins and outs of what makes you, you has become a trend in recent years, due primarily to the popularization of genetic testing companies such as 23andMe, AncestryDNA, and GEDmatch. All three companies may have stickier corporate policies than you might expect from a harmless saliva collection kit. In fact, in recent months, story after story has surfaced regarding the largely nonexistent privacy protections on personal genetic information. At the end of April 2018, authorities were able to identify and eventually prosecute the ‘Golden State Killer’ suspect using genetic information acquired through a genealogy site called GEDmatch.

GEDmatch, as The Atlantic explains, is a website where individuals can upload their genetic information in the hope of finding unknown relatives through DNA commonalities. Authorities, however, used the site to create a fake profile and upload DNA found at a crime scene, which was soon matched to a distant relative of the man eventually identified as the killer. As you can imagine, this raised widespread privacy concerns not just for GEDmatch users but for consumers of other genetic testing databases, and it provoked questions about whether pursuing the greater common good morally justifies breaching individual privacy. Records obtained under the Freedom of Information Act revealed that the Federal Trade Commission is investigating DNA testing companies like 23andMe and Ancestry.com over their policies for handling personal information and genetic data and over how they share that information with third parties.

Private genetic information has not only been used to solve multiple murder cases; back in 2017, NBC warned consumers of the potential risks of giving companies access to their complete genetic codes. As Peter Pitts, a member of a medical advocacy group, put it, your genetic code “is the most valuable thing you own.” Although the majority of legitimate companies assure customers that they do not share this information with researchers or third parties, media outlets including NBC encourage people to read the fine print of the broad contracts that must be signed before personal samples are submitted for analysis. Even though many of these companies market themselves as purely genealogical services, your genetic code still contains critical information about your health, which in the wrong hands could be devastating to personal privacy.

Even more troubling is the concealed nature of genetic information. Unlike with your credit card statement, where you can eventually spot purchases that cannot be attributed to your own spending, you may never find out that a third party has your personal genetic information. Beyond gaining something interesting to discuss over the Thanksgiving table, many individuals use DNA testing in order to contribute to future medical advances. However, as Marcy Darnovsky suggests in The New York Times, “there are more efficient ways of contributing to medical advances than paying to hand over your genetic health information to companies like 23andMe.” In late 2015, 23andMe announced two deals with some of the largest pharmaceutical and biotech corporations in the industry, aimed at finding treatments for diseases hidden in our DNA. Concerns arise on reading 23andMe’s consent document, which acknowledges that once you send off your genetic information there is no guarantee of anonymity. Breaches of confidentiality could affect more than just you; they could impact your family members as well, since they share much of your genetic code. Darnovsky notes that “a 2008 law prohibits health insurance companies and employers from discrimination based on genetic information, but the law does not cover disability, life, or long-term care insurance.”

Another noteworthy concern is that members of the general public may not be able to decipher wordy scientific information. How are they to deal with potentially devastating news about their own or their children’s future health, in terms of genetic risk for certain diseases or carrier status? A quick look at the 23andMe website shows that anyone can obtain health information regarding their genetic probability of certain illnesses; 23andMe advertises “Genetic Health Risk reports – learn how your genetics can influence your risk for certain diseases.” Even though the company notes that testing positive for a certain gene does not necessarily mean one will develop the disease, a naïve or uninformed individual could take the result to mean that the illness is certain. In this new era of simplifying genetic information so the general public can “learn more about themselves,” it is imperative not only to publicize the companies that make this possible, but also to make clear the risks associated with such lenient confidentiality contracts. A breach of your genetic information means that someone in a pharmaceutical company laboratory not only knows what color your eyes are, but knows exactly which diseases you have a probability of developing. Careful evaluation is therefore critical in determining whether learning more about oneself through genetic testing is worth the risk created not only by many companies’ negligence about personal privacy, but also by their nonspecific privacy guarantees, which third parties could easily exploit.

The takeaway for any layperson unfamiliar with the ins and outs of genetic information, and specifically with how to interpret it, is to be especially cautious about these genealogy tests. Consumers should take care to read the fine print describing each company’s privacy policies, and should recognize genetic testing companies as businesses that will protect their own interests, whether or not those interests are favorable to their consumers.

Insider Talk: Challenging Food Choices

Photograph of a table set for six people

When a company wants to go green, are there limits on what it can ask of its employees? This question came to the fore with WeWork’s recent announcement: the company will no longer serve or reimburse for meat, citing the environmental costs of animal protein and, to a lesser extent, worries about animal welfare. The reaction was swift and negative: it’s just virtue signaling, it’s an ideological crusade, it’s tribalism, it’s bull. The North American Meat Institute, a lobbying group for the industry, has even launched IChooseMeat.com, a response to the threat of “your office dictating your food choices” that aims to “fight meat denial.”

Here, though, I don’t want to get lost in the criticisms of WeWork’s policy, both because they seem like overreactions and because they seem misguided in an era that expects moral leadership in business. They are overreactions because such policies don’t force anyone to do anything. You want to eat meat? Go for it. Just don’t expect your company to subsidize it. Was it any worse for companies to remove cigarette machines from their offices in the ’80s, when smoking was still commonplace? And they are misguided because this sort of disagreement is the price of something that’s genuinely good: namely, having companies care about more than profits. We have long wanted businesses to be more socially conscious, but of course we disagree about what being “socially conscious” involves. These conflicts aren’t bugs in the new order: they’re features, and ones to which we should acclimate ourselves.

So let’s set those issues aside. Instead, let’s focus on the general puzzle here. Why do we bristle when people challenge our meat consumption? And is our bristling justified?

There are, of course, those who don’t like the challenge because they’re climate change skeptics, or they don’t think it matters at all whether animals suffer, or what have you. But if one of those factors explains the negative reaction, then the disagreement is probably too deep to resolve, and we should simply move on.

There is a slew of other uninteresting possibilities. For instance, we don’t like being made to feel guilty about food. (But who likes it in other contexts?) And you sometimes hear people say that change is hard. (Not much of a defense: we can always play that card.) Ultimately, though, I think we need a more interpersonal story. We don’t seem to think that people have the right to criticize what we eat. And why would that be? What norm are they violating?

A few possibilities come to mind. The first is that this is somehow a violation of privacy. But if that’s what’s bothering us, it won’t go far as a justification. It’s one thing to claim that a matter is private when it has no public consequences. But our diets do, and so they seem subject to public scrutiny.

A second option is a “local knowledge” objection. Maybe no one knows a person’s situation well enough to decide what he or she ought to eat. Only you know whether you need some chicken to flourish, or if you can make it just fine on garbanzo beans. But again, this seems implausible as a defense. I don’t know much at all about what my body needs; I just know what makes me feel good. And feeling good is as much about habit and history as it is about biology: I feel a certain way in response to whether I’m getting what I want (cake), not whether I’m fueling in the optimal way (spinach and lentils).

A third story is that we’re not open to moralizing about food, as we care too much about it. This is a bit like the way that having children is awful for the environment, but we don’t stop having them for that reason. The environment matters to us, but not that much. However, the parallel isn’t great. The impulse to have children runs deep, and for many people, their kids make their lives meaningful. Of course, food is also tied to living meaningfully: table fellowship is among life’s basic pleasures, and can forge deep bonds. However, you can savor time with family without eating turkey. This requires flexibility, but not the rejection of one of our deepest longings.

A final possibility — and the one I find most plausible — is that food talk is insider talk. Debates about what we eat, like debates about sex and child rearing, are ones we have with those who aren’t in our tribe — with non-Christians or non-liberals or non-crunchy moms — but we generally don’t change our minds as a result. By contrast, if a fellow liberal expresses worries about prostitution, or if your pastor gives you an argument against spanking your kids, you might well see things differently. You trust insiders to see the world in roughly the way you do, and as a result, their reasoning gets extra weight in your own deliberations.

If this is what’s going on, it’s both understandable and unfortunate. The former, because ethics is hard, disagreement is everywhere, and we need some strategy for deciding how to allocate our limited time and attention. After all, moral conversation isn’t the whole of life; at some point, you have to do the dishes and the laundry.

It’s unfortunate, though, because of what it implies about the way we insulate ourselves from moral criticism. There are things for which it’s worth circling the wagons. But food? In his Meditations, Marcus Aurelius observed that “all through our lives when things lay claim to our trust,” we should strive to see them clearly, “stripping away the legend that encrusts them.” Food is full of legends, but it’s ultimately just sustenance. It’s a means to many ends (nutritional, social, political), though ones that can usually be achieved in other ways. It isn’t sacrosanct, and change, though difficult, is possible.

So should everyone become a strict vegetarian? Maybe, maybe not. But the conversation is worth having.

Spilled Blood in the Bloodline: The Ethics of Using Genealogy to Catch Criminals

On April 24, 2018, authorities arrested 72-year-old Joseph James DeAngelo. Investigators had compelling evidence to suggest that DeAngelo committed at least 12 murders, 50 rapes, and over 100 burglaries throughout California in the ’70s and ’80s, earning him the monikers “The East Area Rapist” and “The Golden State Killer.” DeAngelo might have lived out his life without being caught were it not for the existence of a genealogy website.

Continue reading “Spilled Blood in the Bloodline: The Ethics of Using Genealogy to Catch Criminals”

Bathrooms and the Board of Trustees: The Ethics of DePauw’s Restroom Protests

An image of three bathroom stalls, with one stall door open.

In a recent article for DePauw University’s student newspaper, Madison Dudley interviews five DePauw seniors about their decision to begin a petition. The petition implores certain members of DePauw’s Board of Trustees to end their support of politicians who “support laws that can be interpreted as regulating women’s bodies, fail to protect DACA students, and support the recent Republican tax plan.” The petition campaign was accompanied by posters hung in every stall of the women’s bathrooms in every academic building on campus. Each poster pictures a conservative politician’s face, along with information about the petition and the expression “He might as well be watching you pee.”

Continue reading “Bathrooms and the Board of Trustees: The Ethics of DePauw’s Restroom Protests”

Diagnosis from a Distance: The Ethics of the Goldwater Rule

The September/October 1964 issue of Fact magazine was dedicated to the then-Republican nominee for president, Barry Goldwater, and his fitness for office. One of the founders of Fact, Ralph Ginzburg, had sent out a survey to over 12,000 psychiatrists asking a single question: “Do you believe Barry Goldwater is psychologically fit to serve as President of the United States?” Only about 2,400 responses were received, and about half of them indicated that Goldwater was not psychologically fit to be president. The headline of that issue of Fact read: “1,189 Psychiatrists Say Goldwater is Psychologically Unfit to be President!”

Continue reading “Diagnosis from a Distance: The Ethics of the Goldwater Rule”

Doxxing for Social Justice

In 2015, after Lindsey Graham said that Donald Trump should “stop being a jackass,” Trump read Graham’s personal cell phone number aloud to a crowd at one of his campaign rallies and urged people to call the number. Journalists who dialed the number were directed to an automated voicemail account reporting “Lindsey Graham is not available.” His voicemail inbox was, unsurprisingly, full.

Continue reading “Doxxing for Social Justice”