
Smart Mouthguards and the Problem of Choice

photograph of football player with mouthguard out

Anyone who has played a contact sport like rugby or American football will tell you that it is tough. The physicality of such games — from the speeds at which players must move to the colossal collisions they endure when tackled and tackling — is extreme, and with any sport involving such physical demands comes the risk of injury. Indeed, it is not unheard of for rugby players to experience dislocations, fractures, and, in some of the worst cases, paralysis or even death as a result of game or training activities. It is no surprise that the governing bodies of such sports (like the NFL or World Rugby) are constantly considering methods to reduce the risk to players.

Their motivations can be viewed from several angles. For the compassionate and optimistic amongst you, such bodies are taking an active interest in the well-being of their players. They recognize that these athletes give their all, and the governing bodies want to ensure that players stay as healthy and play as safely as possible because the players are people, and the organizing bodies genuinely care about them. The colder, more pessimistic amongst you might think that these bodies make changes not out of care for the players but out of self-interest. The safer the sport is, the less likely governing bodies are to be subject to financial claims by injured players. It also costs a lot to train players to the point where they can play professionally, so safeguarding players’ health also safeguards a significant financial investment.

For my two cents, I suspect there’s a mixture of both. These organizing bodies do care about their players and don’t want them to get hurt, but they also recognize that it is in their financial interest to do what they can (to a degree) to make play as safe as possible.

The methods by which play can be made safer take different forms. One of the most common is changes to the rules. The idea is that by banning dangerous strategies and behaviors, players will be less likely to get injured by reckless collisions. For example, in 2020, World Rugby revised the rules around high tackles (tackles that directly impact the tackled player’s head) to reduce trauma to the neck and brain. While the creation, modification, and enforcement of game rules can help prevent harm, they don’t necessarily help detect or respond to injuries when they occur, especially if match officials don’t see the injury happen. After all, pitches can be chaotic, and referees only have one pair of eyes. It can sometimes be incredibly difficult to identify, in the rush of play, when someone might have been injured, especially if that player hasn’t noticed themselves.

To that end, this year, World Rugby mandated the use of smart mouthguards in all professional training and games. Unlike traditional mouthguards that only protect players’ teeth, smart mouthguards come with embedded sensors that track the forces and events players’ heads experience during play. This provides a new avenue for understanding and potentially preventing injuries like concussions, which can have devastating long-term health impacts.
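For readers curious about the mechanics, here is a minimal sketch of the kind of screening logic such a system might feed into. To be clear, the field names and alert thresholds below are illustrative assumptions on my part, not World Rugby’s or any manufacturer’s published specification:

```python
from dataclasses import dataclass

@dataclass
class ImpactEvent:
    player_id: str
    peak_linear_accel_g: float        # peak linear acceleration of the head, in g
    peak_angular_accel_rad_s2: float  # peak angular acceleration, in rad/s^2

# Hypothetical alert thresholds; real systems use validated, sport-specific values.
ALERT_LINEAR_G = 70.0
ALERT_ANGULAR_RAD_S2 = 4000.0

def needs_assessment(event: ImpactEvent) -> bool:
    """Flag an impact for an off-field head injury assessment."""
    return (event.peak_linear_accel_g >= ALERT_LINEAR_G
            or event.peak_angular_accel_rad_s2 >= ALERT_ANGULAR_RAD_S2)

# Example: a heavy collision gets flagged for review by the match-day doctor.
hit = ImpactEvent("player-07", peak_linear_accel_g=85.2, peak_angular_accel_rad_s2=5200.0)
print(needs_assessment(hit))  # True
```

The point is simply that every flagged impact becomes a recorded, stored data point about a player’s head, and that stored record is exactly what the rest of this piece is concerned with.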

Before getting into the weeds of the potential ethical issues that such a form of tracking brings, however, I want to be clear that I am overwhelmingly in favor of this technology. The damage that can be caused during high-contact sports can be terrible, and effective methods of reducing and redressing injury should be welcomed. That being said, smart mouthguards come with some ethical concerns, specifically around the nature of the data they capture, that cannot be overlooked.

Now, we can’t jump into all the issues here; there are far too many. So, I’m going to focus on what I think is the key one: player choice. (If you want to read about some of the others, you can go to this blog post that a co-author and I wrote or, if you really want to get into it, this article.)

As mentioned, this year, World Rugby mandated that all elite players use smart mouthguards when playing in games or when training. If they refuse and do not have a medically justifiable reason for doing so, they are subject to the “recognize and remove” policy. In short, that policy states that if a player gets hit in the head, and there is any suspicion that such a hit might have caused a concussion, that player will have to sit the rest of that game out. This means that they cannot go to team medics to be checked out and assessed and, potentially, come back onto the field if they are given an all-clear, which is what normally happens. For those who refuse to wear a smart mouthguard, that’s just not an option.

So, on the face of it, players do, technically speaking, have a choice when it comes to smart mouthguards and, thus, having their data tracked. They can either wear the mouthguards and have details about their personal health collected and stored (potentially indefinitely), or, if they choose, they can go without.

But this is an oversimplification. Players must sacrifice an incredible amount to reach the level of a professional athlete. Time, money, energy, relocating to chase contracts, not to mention all the health risks (and inevitable injuries and pain) that come with playing sports – all before you even get to a professional level. And while most would say that they couldn’t see themselves doing anything else, this just adds to their work pressure. Investing so much into your dream job means that anything that might jeopardize your ability to play is likely to be seen as a danger, one that should, if possible, be avoided. Here the risk of coercion emerges, and with it a compromise of autonomy and player choice.

If you are at a greater risk of being removed from a game, be that rugby in this instance, because you refuse to wear a smart mouthguard and are thus subject to the recognize and remove policy, then you are a less attractive prospect for your manager. After all, why would they pick you for the team when they could instead go with a player who is compliant and doesn’t run the risk of being removed under the mere suspicion of an injury? Given that it’s your dream job and everything you would have sacrificed to have a shot at playing professionally, you are likely to simply go with the flow. So, there is a huge degree of pressure on players to wear these mouthguards simply to show that they are a team player who won’t put their team’s chances on the pitch at risk.

Additionally, this pressure will come at players from different angles depending on their level of security within the team. Star players may feel the pressure to conform from above as management wants to minimize the chance of such valuable players being removed from the field of play. Less experienced players, whose place on a team has only just been secured and is, thus, tenuous, might feel the pressure to conform simply so that management doesn’t replace them with someone else.

Both result in pressure to use a technology that records intimate health data which wasn’t being collected from those players before. And, as is so often the case when it comes to health decisions and biometric data collection, it is paramount that we protect and promote individuals’ freedom of choice, especially for those in vulnerable situations.

Now, as I said earlier, I’m not against the usage of this technology to help prevent harm to players. But what I think is essential is that we openly recognize that asking players to wear these devices places pressure on them to have their health data collected and monitored in a way that hasn’t been done before. This is something they should be able to decide free from pressures that might coerce or unduly influence their decision-making. If we don’t do that — if we assume that players will invariably be fine with such data collection, or worse, force them into it regardless of how they might feel — we risk not only making the sport less desirable for those who have a real passion for it but also infringing upon some pretty fundamental freedoms that, in other situations, we might feel very uncomfortable about.

Ultimately, how would you feel if your boss or teacher said that, if you wanted to keep coming to work or class, they would need access to your biological data? That is the scenario we risk if we fail to consider the smart mouthguard question.

Personalized Pricing: All the Rage

Who could have predicted that AI would return us to a land before price tags?

Last month, the Federal Trade Commission ordered eight companies (like Mastercard and JPMorgan Chase) to provide information regarding their “surveillance pricing” practices – that is, charging variable rates that shift according to the information gleaned from a client’s digital footprint. The FTC means to expose the “hidden ecosystem” of data brokers and middlemen who monitor user data, compile consumer profiles, and sell that information up the food chain. As more and more of those sneaky details and shady deals come to light, public animosity grows. Three-quarters of Americans object to online retailers charging different prices for the same product, and two-thirds mistakenly believe that the practice is illegal.

But we’ve been part of the personalized pricing experiment for a while now. (When’s the last time you paid sticker price on the car lot?) In 1996, Victoria’s Secret was already sending catalogs with cheaper prices to men. Amazon was first caught using the tactic in 2000, showing different people different prices for DVDs. Staples and Home Depot sell items at different prices according to customers’ geographical location. Travel fare aggregator Orbitz directs Mac users to pricier lodgings. We also know that airline fares rise as you repeatedly search the same dates in your browser.

Further, mere dynamic pricing – rates that reflect the ebb and flow of supply and demand – has long been a staple of the workweek, from early-bird specials to happy-hour drinks. Hotel rooms cost more as the night progresses; airline fares rise as the date approaches. We seem to accept, daily, that some will pay more and some will pay less for the exact same good or service, so why the outrage now? Where does our sense of injustice come from?

So far, much of the chatter has focused on the digital privacy piece. We dislike the thought of someone sifting through our internet trash in order to paint our consumer profile. Worse yet, we feel violated by having our browser history used against us as a tool of coercion to buy that thing or book that trip. Despite regular warnings, we refuse to accept that we’ve consented to being spied on – an online presence has become a necessity of life, and it often feels that there is no reasonable exit option. The cost of remaining in the digital space can’t be resigning ourselves to constant monitoring and manipulation. The answer is not abstention, it’s legislation.

These are legitimate gripes. But lost in this discussion is any articulation of the precise problem we have with price discrimination – that is, the selling of identical products to different people for different amounts. If businesses catered to customers’ unique price points without this kind of data scraping, would we still have reason to object? Public opinion suggests we would.

Consider Wendy’s, the most recent recipient of consumers’ wrath. Customers revolted when it was suggested that a Baconator would cost more during a demand surge than in off-hours. Lydia DePillis, writing in The New York Times, identified Wendy’s failure as a mistake of marketing: they weren’t upcharging rush-hour customers, they were incentivizing the off-hour passerby. This isn’t a new and nefarious way to overcharge customers, it’s just a means of attracting reluctant holdouts. Everyone has their price. It all comes down to whether people see the lower price as a discount or the higher price as a tax. Everyone loves a bargain.

Ultimately, the trick in rolling out these new automated pricing models lies in accentuating the positive: Catch that break. Grab that deal. Let your self-denial give way to a willingness to pay. We’ll unite buyers and sellers, desire with satisfaction. Let the bidding commence. Welcome to the flea market of the future.

So perhaps public opinion is mistaken. Perhaps our envy and fear of missing out have simply invented something new to be mad at. Jean-Pierre Dubé, an economist at the University of Chicago, offers a telling comparison:

If I literally tell you, the price of a six-pack is $1.99, and then I tell someone else the price of a six-pack for them is $3.99, this would be deemed very unfair if there was too much transparency on it. But if instead I say, the price of a six-pack is $3.99 for everyone, and that’s fair. But then I give you a coupon for $2 off but I don’t give the coupon to the other person, somehow that’s not as unfair as if I just targeted a different price.

Grocery stores and airlines regularly avoid the Wendy’s reaction by masking price differences in memberships and coupons. Our sense of injustice fails to flare in these cases. Doesn’t this mean we’re being inconsistent? Credit scores are used daily to determine different financing rates (i.e., the higher or lower cost consumers will end up paying for an item). What makes personalized pricing so different?

Transparency. Consumers are routinely made aware of the perks membership offers. They are also aware of the (undemanding) steps required for them to transition from the out-group to the in-group. That kind of simplicity and stability is priceless. You can’t make a plan if you don’t know what dinner will cost you when you leave the house. But personalized pricing relies on obfuscation – businesses remove all reference points so that consumers can never find their footing. There is no “market” price.

Our post-pandemic world makes for a compelling test case. Today it’s impossible to anticipate what you’ll be expected to pay for everyday goods. Does anyone know what paper towels should cost? Whatever the buyer will bear.

Asymmetric Gain. Despite the promise of daily deals, the benefits of personalized pricing flow in one direction. As Lee Hepner, legal counsel for the American Economic Liberties Project, explains, “personalized pricing is a transfer of wealth from consumer to the seller. Writ large, the goal and endgame is to maximize revenue.” By deducing each buyer’s specific pain point, sellers can extract the utmost value. They stand to reap all the gains that come with eliminating the gap between what consumers are willing to pay and what they actually pay. And there isn’t any social good created by charging different customers different prices (unlike the case of senior citizen discounts or letting kids eat free). This is pure profit maximization.

In practice, then, personalized pricing looks awfully similar to price-gouging – consideration of your unique circumstances (when you get paid, the date of your friend’s wedding, your expensive taste) generates inflated price tags. As customers’ need and ability to pay increase, so does the cost.

Consumer Impotence. Perhaps most damning, then, is what personalized pricing does to the already skewed power balance between customers and businesses. Rather than being price takers, sellers now become price setters. Armed with their marks’ financial details and search history, they can ensure every sale returns top dollar.

Buyers, meanwhile, find themselves in the dark, siloed from the experience of other customers, the price history of particular goods, and the unique deals of alternative vendors. But you can’t vote with your dollar if you don’t know your options. And without a shared experience of the marketplace, concerted action is all but impossible. Our ire may be warranted, but we may soon lack any ability to collectively express it. In the bazaar of tomorrow there is no signal, only noise.

The Ethical Tradeoffs of Medical Surveillance: Tracking, Compassion, and Moral Formation

photograph of medical staff holding patient’s hand

Our ability to track doctors – their movements, their location, and everything they accomplish while on the job – is increasing at a rapid pace. Using RFID tags, hospitals are able to not only track patients and medical equipment, but hospital staff as well, allowing administrators to monitor the exact amount of time that physicians spend in exam rooms or at lunch. On top of that, electronic health record systems (EHRs) require doctors to meticulously record the time they spend with patients, demanding that doctors spend multiple hours a day charting. And more could be on the way. Researchers are now working on technology that would track physician eye movement, allowing surveillance of how long a doctor looks at a patient’s chart or test results before making a diagnosis.

There are undeniable benefits to all of this tracking. Along with providing patients and their families with detailed examination notes, such detailed surveillance ensures that doctors are held to a meaningful standard of care even when they are tired or stressed. And workplace accountability is nothing new. Employers have used everything from punch clocks to supervisors to drug tests to make sure that their staff is performing while on the job.

Yet as the surveillance of physicians becomes ever more ubiquitous, the number of moral concerns increases as well. While tracking typically does improve behavior, it can also stunt our moral growth. Take, for example, plagiarism detectors. If they are 100% accurate at detecting academic dishonesty, then they drastically reduce the incentive to cheat, making cheating clearly counterproductive for those who want to pass their classes. Most students will then avoid plagiarism simply out of sheer self-interest. At the same time, though, this robs students of an opportunity to develop their moral characters, relieving them of the need to practice doing the right thing even when they might not get caught.

On the other hand, while school might be an important place to build the virtues, hospitals clearly are not. We want our doctors to be consistently attentive and careful in how they diagnose and treat their patients, and if increased surveillance can ensure that, then that seems like a worthwhile trade-off. Sure, physicians might miss out on a few opportunities for moral growth and formation, but this loss can be outweighed by not leaving it up to chance whether any patients fall through the cracks. If more surveillance means that more patients get what they need, then so be it.

The problem, however, is that surveillance may not mean that hospitals are always getting more quality care, but simply getting more of what they measure. As doctors become more focused on efficient visit times and necessary record-keeping, there is evidence piling up that suggests that technological innovations like EHRs actually decrease the amount of time that physicians spend with their patients. Physicians now spend over 4 hours a day updating EHRs, including over 15 minutes each time they are in an exam room with a patient. Many doctors must also continue charting until late into the night, laboring after hours to stay on top of their work and burning out at ever increasing rates. So, while patient records might be more complete than ever before, time with and for patients has dwindled.

All of this becomes particularly concerning in light of the connection between physician compassion and patient health. Research has shown that when healthcare providers have the time to show their patients compassion, medical outcomes not only improve, but unnecessary costs are reduced as well. At the same time, compassion also helps curtail physician burnout, as connecting with patients makes doctors happier and more fulfilled.

So maybe the moral formation of doctors is not irrelevant after all. If there is a strong link between positive clinical outcomes and doctors who have cultivated a character of compassion (doctors who are also less likely to burn out), then how hospitals and clinics form their physicians is of the utmost importance.

This, of course, raises the question of what this means for how we track doctors. The most straightforward conclusion is that we shouldn’t give physicians so much to do that they don’t have any time for empathy. Driven by an emphasis on efficiency, 56% of doctors already say that they do not have enough time for compassion in their clinical routines. If compassion plays a significant role in providing quality healthcare, then that obviously needs to change.

But an emphasis on compassion and the moral characters of doctors raises even deeper questions about whether medical surveillance is in need of serious reform. It is extremely difficult to measure how compassionate doctors are being with their patients. Simply tracking time spent, particular eye movements, or even a doctor’s tone of voice might not truly reflect whether doctors are being empathetic and compassionate towards their patients, making it unclear whether more in-depth surveillance could ever ensure the kinds of personal interactions that are best for both doctors and their patients. And as we have seen, whatever metrics hospitals attempt to track, those measures are the ones that doctors will prioritize when organizing their time.

For this reason, it might be that extensive tracking will always subtly undermine the outcomes that we want, and that creating more compassionate healthcare requires a more nuanced approach to tracking physician performance. It may still be possible to have metrics that ensure all patients get a certain baseline of care, but doctors might also need more time and freedom to connect with patients in ways that can never be fully quantified in an EHR.

State of Surveillance: Should Your Car Be Able to Call the Cops?

close-up photograph of car headlight

In June 2022, Alan McShane of Newcastle, England, was heading home after a night of drinking and watching his favorite football club at the local pub when he clipped a curb and his airbags were activated. The Mercedes EQ company car that he was driving immediately called emergency services, a feature that has come standard on the vehicle since 2014. A sobriety test administered by the police revealed that the man’s blood alcohol content was well above the legal limit. He was fined over 1,500 pounds and lost his driving privileges for 25 months.

No one observed Mr. McShane driving erratically. He did not injure anyone or attract any attention to himself. Were it not for the actions of his vehicle, Mr. McShane may very well have arrived home safely and without significant incident.

Modern technology has rapidly and dramatically changed the landscape when it comes to privacy. This is just one case among many which demonstrates that technology may also pose threats to our rights against self-incrimination.

There are compelling reasons to have technology of this type in one’s vehicle. It is just one more step in a growing trend toward making getting behind the wheel safer. In the recent past, people didn’t have cell phones to use in case of an emergency; if a person got in a car accident and became stranded, they would have to simply hope that another motorist would find them and be willing to help them. However, this significant improvement to safety isn’t always accessible during a crash. One’s phone may not be within arm’s reach and during serious car accidents a person may be pinned down and unable to move. Driving a car that immediately contacts emergency services when it detects the occurrence of an accident may often be the difference between life and death.

Advocates of this technology argue that a person simply doesn’t have the right to drive drunk. It may be the case that under many circumstances a person is free to gauge the amount of risk that is associated with their choices and then choose for themselves the amount that they are willing to take on. This simply isn’t true when it comes to risk that affects others in serious ways.

A person doesn’t have the right to just cross their fingers and hope for the best — in this case to simply trust that they don’t happen to encounter another living being while driving impaired.

When people callously rely on luck when it comes to driving under the influence, living beings can die or be injured in such a way that their lives are involuntarily altered forever. Nevertheless, many people simply do not think about the well-being of others when they make their choices. Since this is the case, some argue that if technology can protect others from the selfish and reckless actions of those who can’t be bothered to consider interests other than their own, it should.

Others argue that we can’t let technology turn any country into a police state. Though such people agree that there are clear safety advantages to technology that can help a person in the event of an accident, this particular technology does more than that — it serves as a non-sentient witness against the driver. This radically changes the role of the car. A vehicle may once have been viewed as a tool operated by a person — a temporary extension of that person’s body. Often cars used as tools in this way are the property of their operators. Until now, a person’s own property hasn’t been in the position to turn them in. Instead, if a police officer wanted information about some piece of a person’s body, they’d need a search warrant. This technology removes the element of choice on behalf of the individual when it comes to the question of whether they want to get the police involved or to implicate themselves in a crime.

This is far from the only technology we have to be worried about when it comes to police encroachment into our lives and privacy. Our very movement through our communities can be tracked by Google and potentially shared with police if we agree to turn location services on when using our phones.

Do we really have a meaningful expectation of privacy when all of the devices we use as extensions of our bodies are accessible to the police?

Nor is it only the police that have access to this information. In ways that are often unknown to the customer, information about them is frequently collected and used by corporations and then manipulated to motivate that customer to spend more and more money on additional products and services. Our technology isn’t working only for us, it’s also working for corporations and the government, sometimes in ways that pretty clearly run counter to our best interests. Some argue that a product on which a person spends their own hard-earned money simply shouldn’t be able to do any of this.

What’s more, critics argue that the only conditions under which technology should be able to share important information with any third party is if the owner has provided fully free and informed consent. Such critics argue that what passes for consent in these cases is nowhere near what would be required to meet this standard.

Accepting a long list of terms and conditions written in legalese while searching for crockpot recipes at the grocery store isn’t consenting to allowing police access to knowledge about your location.

Turning a key in the ignition (or, more and more often, simply pressing an ignition button) does not constitute consent to abandon one’s rights against self-incrimination or to make law enforcement aware of one’s blood alcohol content.

Advocates of such technology argue in response that technology has always been used as important evidence in criminal cases. For instance, people must be careful what they do in public, lest it be captured on surveillance cameras. People’s telephone usage has been used against them since telephones were invented. If one does not want technology used against them in court, one shouldn’t use technology as part of the commission of a crime.

In response, critics argue that, as technology develops, it has the potential to erode our Fourth Amendment rights against unlawful search and seizure and our Fifth Amendment rights against self-incrimination to the point of meaninglessness. Given our track record in this country, this erosion of rights is likely to disproportionately affect marginalized and oppressed populations. It is time now to discuss principled places to draw defensible lines that protect important democratic values.

The Ethics of Digidog

photograph of german shepherd next to surveillance robot dog

On February 24, the New York City Police Department employed a robot “dog” to aid with an investigation of a home break-in. This is not the first time that police departments have used robots to aid in responding to criminal activity. However, this robot, produced by Boston Dynamics and affectionately named “Spot,” drew the attention of the general public after New York Congresswoman Alexandria Ocasio-Cortez tweeted a critique of the decision to invest money in this type of policing technology. While Digidog’s main purpose is to increase safety for both officers and suspects during criminal investigations, some are concerned that the implementation of this type of technology sets a bad precedent, contributes to police surveillance, and diverts resources away from more deserving causes.

Is employing surveillance technologies like Digidog inherently bad? Is it ever okay to monitor citizens without their consent? And which methods should we prioritize when seeking to prevent or respond to crime?

In the United States, an estimated 50 million surveillance cameras watch us, many escaping our notice. The use of these surveillance cameras, formally called closed-circuit TVs (CCTVs), has dramatically expanded due to the interest in limiting crime in or around private property. Law enforcement often relies on video surveillance in order to identify potential suspects, and prosecutors may use this footage as evidence during conviction. However, there has been some debate about whether or not the proliferation of CCTVs has really led to less crime, either through deterrence or through successful identification and capture. This lack of demonstrable proof of positive effect is especially concerning given the pushback against this type of surveillance as potentially violating individuals’ privacy. In a 2015 study by Pew Research Center, 90% of Americans ranked “not having someone watch you or listen to you without your permission” as very important or somewhat important. In a follow-up question, two-thirds of respondents ranked “being able to go around in public without always being identified” as important. Considering the clear importance of privacy to many Americans, increased surveillance might be considered a fundamental infringement on what many see as their right to privacy.

Digidog is a police surveillance tool. Equipped with a camera and GPS, the robot dog is capable of allowing officers to monitor dangerous situations without risking their lives. However, many are skeptical that Digidog’s use will be limited to responding to crime, worrying that it could soon become a tool to patrol the streets. In one particularly alarming example, MSCHF Product Studio armed its robot dog with a paintball gun and demonstrated how easy it was to fire the gun from a remote location. If police departments began arming these robot dogs, the potential for violence and brutality stands to increase. As yet, these uses have not been explicitly suggested by any law enforcement agency, and defenders of Digidog point to its limitations, such as its maximum travel speed, its usage in many fields other than policing, and its lack of covert design. These features suggest that Digidog is not yet the powerful tool for patrolling or surveilling the general public that critics fear.

In terms of Digidog as an investment to combat crime, is it an unethical diversion of money, as Congresswoman Ocasio-Cortez has suggested? In the past year, calls to decrease or reallocate police funding have entered the mainstream. The decision to invest in Digidog could be considered unethical because its benefits to the public can’t justify its significant cost. Digidogs themselves cost around $74,000 each. Considering that they are only intended for use in extreme and dangerous situations, their usage is rare, and they do not appear to improve the life of the average individual. However, by serving as a weaponless first responder, Digidogs could save the lives of both officers and those suspected of engaging in criminal activity. Human error and reactivity can be removed from the equation by having robots surveil a situation in place of an armed officer.

Whether or not the Digidog represents an ethical use of public funds may turn on the legitimacy of investing in crime response rather than crime prevention. As previously noted, “Spot” is primarily used to respond to existing crime. Because of this, critics have suggested that these funds would be better aimed at programs that seek to minimize the occurrence of crime in the first place. Congresswoman Ocasio-Cortez’s tweet, for example, makes reference to those who suggested resources should go to school counseling instead. In fact, some criminology experts argue that investing in local communities and schools can drastically decrease the incidence of crime. Defenders of Digidog are quick to point out that the two goals are not mutually exclusive; it is possible to invest in both crime response and crime prevention, and we need not pit these two policy aims against one another. It is unclear, however, whether funds comparable to those spent purchasing Digidogs were also directed at preventing crime.

This investment in Digidog could also be seen as unethical not just in terms of its lack of efficiency in addressing crime, but also in terms of the lack of similar treatment in other areas of social concern. In a reply to her original tweet, Ocasio-Cortez retorted, “when was the last time you saw next-generation, world class technology for education, healthcare, housing, etc. consistently prioritized for underserved communities like this?” In a time when so many have called to defund the police following centuries of police violence against Black people, it seems an affront to invest money in technologies designed to aid arrest rather than address systemic injustice. Highlighting this disparity in funding shows that urgent social needs are unjustly being given last priority. Again, defenders of Digidog might respond that this comparison is a false one, and that technology can be employed for both policing and social needs.

Together, these concerns mean that Digidog’s usage will continue to be met with skepticism, if not hostility, by many. As police and surveillance technology develop, it remains especially important that we measure the value these new tools offer against their costs to both our safety and our privacy.

Sensorvault and Ring: Private-Sector Data Collection Meets Law Enforcement

closeup photograph of camera lens

Concerns over personal privacy and security are amplifying as more information surfaces about the operations of Google’s Sensorvault, Amazon’s Ring, and FamilyTreeDNA.

Sensorvault, Google’s enormous database, stands out from the group as a major player in the digital profiling arena. Since at least 2009, it has been amassing data and constructing individual profiles for all of us based on vast information about our location history, hobbies, race, gender, income, religion, net worth, purchase history, and more. Google and other private-sector companies argue that the amassment of digital dossiers facilitates immense improvements in their efficiency and profits. However, the collection of such data also raises thorny ethical concerns about consent and privacy.

With regard to consent, the operation of Sensorvault is morally problematic for three main reasons. First, the minimum age required for managing your own Google account in North America is 13, meaning that Google can begin constructing the digital profiles of children, despite the likelihood that they are unable to comprehend the Terms of Service agreement or its implications. Their digital files are thus created prior to the (legal) possibility of providing meaningful consent.

Second, the dominance of Google’s search engine, Maps, and other services is making it increasingly less feasible to live a Google-free life. In the absence of a meaningful exit option, the value of supposed consent is significantly diminished. Third, as law professor Daniel Solove puts it, “Life today is fueled by information, and it is virtually impossible to live as an Information Age ghost, leaving no trail or residue.” Even if you avoid using all Google services, your digital profile can and will still be constructed from other data points about your life, such as income level or spending habits.

The operation of Sensorvault and similar databases also raises moral concerns about individual privacy. Materially speaking, the content in Sensorvault puts individuals at extreme risk of fraud, identity theft, public embarrassment, and reputation damage, given the detailed psychological profiles and life-patterns contained in the database. Google’s insistence that protective safeguards are in place is not particularly persuasive in light of recent security breaches, such as Social Security numbers and health information of military personnel and their families being stolen from a United States Army base.

More abstractly, these data collection agencies represent an existential threat to our private selves. Solove argues in his book “The Digital Person” that the digital dossiers amassed by private corporations are eerily reflective of the files that Big Brother has on its citizens in 1984. He also makes a comparison between the secrecy surrounding these profiles and The Trial, in which Kafka warns of the dangers of losing control over personal information and enabling bureaucracies to make decisions about our lives without us being aware.

The stakes are growing increasingly high as Google, Amazon, and FamilyTreeDNA move beyond using data collection for their own purposes and are now collaborating with law enforcement agencies. These private companies attempt to justify their practices on the grounds that they are a boon to policing practices and are effectively helping to solve and deter crime. However, even if you are sympathetic to their justification, there are still significant ethical and legal reasons to be concerned by the growing relationship between data collecting private-sector companies and law enforcement agencies.

In Google’s case, the data in Sensorvault is being shared with the government as part of a new policing mechanism. American law enforcement agencies have recently started issuing “Geofence warrants” which grant them access to the digital trails and location patterns left by individuals’ devices in a specific time and area, or “geofence.” Geofencing warrants differ significantly from traditional warrants because they permit law enforcement to obtain access to Google user’s data without probable cause. According to one Google employee, “the company responds to a single warrant with location information on dozens or hundreds of devices,” thus ensnaring innocent people in a digital dragnet. As such, Geofencing warrants raise significant moral and legal concerns in that they circumvent the 4th Amendment’s protection of privacy and probable cause search requirement.
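To make the mechanics concrete: a geofence request is essentially a query over stored location history, returning every device that reported a position inside a given area during a given time window, regardless of whose device it is. Below is a minimal, hypothetical sketch of that kind of query; the field names and data layout are my own illustrative assumptions, not Google’s actual Sensorvault interface:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationRecord:
    device_id: str
    latitude: float
    longitude: float
    timestamp: datetime

def devices_in_geofence(records, lat_min, lat_max, lon_min, lon_max, start, end):
    """Return IDs of every device recorded inside the bounding box during the window."""
    return {
        r.device_id
        for r in records
        if lat_min <= r.latitude <= lat_max
        and lon_min <= r.longitude <= lon_max
        and start <= r.timestamp <= end
    }

# Example: everyone whose phone reported a position near a burglary that night.
history = [
    LocationRecord("device-A", 40.7128, -74.0060, datetime(2019, 5, 1, 23, 15)),
    LocationRecord("device-B", 40.7130, -74.0062, datetime(2019, 5, 1, 23, 40)),
    LocationRecord("device-C", 41.0000, -75.0000, datetime(2019, 5, 1, 23, 20)),
]
print(devices_in_geofence(history, 40.71, 40.72, -74.01, -74.00,
                          datetime(2019, 5, 1, 23, 0), datetime(2019, 5, 2, 0, 0)))
# {'device-A', 'device-B'}
```

Because the filter runs over everyone’s stored history rather than over a named suspect’s records, anyone who merely walked past the scene ends up in the results, which is precisely the dragnet worry raised above.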

Amazon’s Ring (a home surveillance system) is also engaged in morally problematic relations with law enforcement. The company has partnered with hundreds of police departments in the US to provide them with data from customers’ home security systems. Reports suggest that Ring has shared the locations of its customers’ homes with law enforcement, is working on enabling police to automatically activate Ring cameras in an area where a crime has been committed, and that Amazon is even coaching police on how to gain access to users’ cameras without a warrant.

FamilyTreeDNA, one of the country’s largest genetic testing companies, is also putting consumers’ privacy and security at risk by providing its data to the FBI. FamilyTree has offered DNA testing for nearly two decades, but in 2018, it willingly granted law enforcement access to millions of consumer profiles, many of which were collected before users were aware of the company’s collaboration with law enforcement. While police have long been using public genealogy databases to solve crime, FamilyTree’s partnership with the FBI marks one of the first times a private-sector database has willingly shared the sensitive information of its consumers with governmental agencies.

Several strategies might be pursued to mitigate the concerns raised by these companies regarding consent, privacy, and law enforcement collaboration. First, the US ought to consider adopting safeguards similar to the EU’s General Data Protection Regulation, which, for example, sets the minimum age of consent for Google users at 16 and stipulates that Terms of Service “should be provided in an intelligible and easily accessible form, using clear and plain language and it should not contain unfair terms.” Second, all digital and DNA data collecting companies should undergo strict security testing to protect against theft, fraud, and the exposure of personal information. Third, given the extremely private and sensitive nature of such data, regulations ought to be enacted to prevent private companies like FamilyTreeDNA from sharing profiles they amassed before publicly disclosing their partnership with law enforcement. Fourth, the US Congress Committee on Energy and Commerce should continue to monitor and inquire into companies as it did in its 2019 letter to Google. There needs to be greater transparency regarding what data is being stored and for what purposes. Finally, the 4th Amendment must become a part of the mainstream conversation regarding the amassing of digital dossiers, DNA profiles, and the access to such data by law enforcement agencies without probable cause.

Government Leakers: Liars, Cowards, or Patriots?

James Comey, former Director of the FBI, recently testified in front of the Senate Intelligence Committee regarding conversations that he had with President Trump. The public knew some of the details from these conversations before Comey’s testimony, because he had written down his recollections in memos, and portions of these memos were leaked to the press. We now know that Comey himself was responsible for leaking the memos. He reportedly did so to force the Department of Justice to appoint a special prosecutor. It turned out that his gamble was successful, as Robert Mueller was appointed special prosecutor to lead the investigation into possible collusion between the Trump campaign and the Russian government.

After the testimony, President Trump blasted Comey as a leaker. He tweeted, “Despite so many false statements and lies, total and complete vindication…and WOW, Comey is a leaker!” Trump later tweeted that Comey’s leaking was “Very ‘cowardly!’” Trump’s antipathy towards leaking makes sense against the background of the unprecedented number of leaks occurring during his term in office. It seems as if there is a new leak every day. Given the politically damaging nature of these leaks, supporters of the president have been quick to condemn them as endangering national security and to call for prosecutions of these leakers. Just recently, NSA contractor Reality Winner was charged under the Espionage Act for leaking classified materials to the press. However, it is worth remembering that, during the election campaign, then-candidate Trump praised WikiLeaks on numerous occasions for its release of the hacked emails from the Democratic National Committee.

A cynical reading of this recent chain of events suggests that the stance that government figures take towards the ethics of leaking is purely motivated by politics. Leaking is good when it damages a political opponent. Leaking is bad when it damages a political ally. Sadly, this may be a true analysis of politicians’ shifting stances towards leakers. However, it does not answer the underlying question of whether leaking can ever be morally permissible and, if it can be, under what circumstances.

Approaches may differ, but I think it is reasonable to ask this question in a way that assumes that government leaking requires special justification. This is for two reasons. First, the leaking of classified information is almost always a violation of federal law. Leaking classified information violates the Espionage Act, which sets out penalties of imprisonment for individuals who disclose classified information to those not entitled to receive it. As a general moral rule, individuals ought to obey all laws, unless a special justification exists for their violation. General conformity to the law ensures an order and stability necessary to the safety, security, and well-being of the nation. More specifically, the Espionage Act is intended to protect the nation’s security. Leaking classified information to the press risks our nation’s intelligence operations by potentially exposing our sources and methods to hostile foreign governments.

Second, as Stephen L. Carter of Bloomberg points out, “leakers are liars,” and there is a strong moral presumption against lying. Carter provides a succinct explanation: “The leaker goes to work every day and implicitly tells colleagues, ‘You can trust me with Secret A.’ Then the leaker, on further consideration, decides to share Secret A with the world. The next day the leaker goes back to work and says, ‘You can trust me with Secret B.’ But it’s a lie. The leaker cannot be trusted.”

The strong presumption against lying flows from the idea that morality requires that we do not make an exception of ourselves in our actions. We generally want and expect others to tell us the truth; we have no right ourselves, then, to be cavalier with the truth when speaking with others. Lying may sometimes be justified, but it requires strong reasons in its favor.

Ethical leaking might be required to meet two standards: (A) the leak is intended to achieve a public good that overrides the moral presumption against lying and law-breaking, and (B) leaking is the only viable option for achieving this public good. What public good does leaking often promote? Defenders of leaks often argue that leaking reveals information that the public needs to know to hold their leaders accountable for wrongdoing. Famous leaker Edward Snowden, for example, revealed information concerning the surveillance capabilities of the National Security Agency (NSA); it is arguable that the public needed to know this information to have an informed debate on the acceptable limits of government surveillance and its relation to freedom and security.

Since leaking often involves lying and breaking the law, it must be considered whether other options exist, besides leaking, to promote the public good at issue. Government figures who criticize leakers often claim that they have avenues within the government to protest wrong-doing. Supporters of Snowden’s actions pointed out, however, that legal means to expose the NSA’s surveillance programs were not open to him because, as a contractor, he did not have the same whistleblower protections as do government employees and because NSA’s programs were considered completely legal by the US government at the time. Leaking appeared to be his only viable option for making the information public.

Each act of leaking appears to require a difficult moral calculation. How much damage will my leaking do to the efforts of the national security team? How important is it for the public to know this classified information? How likely is it that I could achieve my goals through legal means within the government system? Though a moral presumption against leaking may exist—you shouldn’t leak classified information for just any old reason—leaking in the context of an unaccountable government engaged in serious wrongdoing has been justified in the past, and I expect we will see many instances in the future where government leaks will be justified.

Law Enforcement Surveillance and the Protection of Civil Liberties

In a sting operation conducted by the FBI in 2015, over 8,000 IP addresses in 120 countries were collected in an effort to take down the website Playpen and its users. Playpen was a communal website that operated on the Dark Web through the Tor browser. Essentially, the site was used to collect images related to child pornography and extreme child abuse. At its peak, Playpen had a community of around 215,000 members and more than 117,000 posts, with 11,000 unique visitors a week.


Richard Mosse and the Ethics of Photographing Crisis

The ongoing Syrian refugee crisis has raised ethical concerns surrounding immigration, borders, and terrorism. However, one less-discussed ethical dilemma surrounding refugees is that of photojournalism and art. Irish photographer Richard Mosse made headlines last week after publishing photographs of refugee camps taken with military-grade thermal imaging cameras. The photographs are extremely detailed and might even convey a sense of voyeurism.


Making Sense of Trump’s Wiretapping Accusations

At 3:35am on March 4, President Donald Trump tweeted an accusation that former President Barack Obama wiretapped the phones in Trump Tower prior to the election. Trump compared it to Watergate and called Obama “sick.” A spokesperson for Obama quickly and strongly denied the allegations, stating that “neither President Obama nor any White House official ever ordered surveillance on any U.S. citizen.” FBI Director James Comey asked the Justice Department to immediately reject the president’s allegations on the grounds that they falsely imply that the FBI broke the law.


On Drones: Helpful versus Harmful

During the Super Bowl halftime show this past month, Lady Gaga masterfully demonstrated one of the most striking mass uses of drones to date. At the conclusion of her show, drones powered by Intel were used to form the American flag and were then rearranged to display the name of one of the main sponsors of the show, Pepsi. This demonstration represented the artistic side of drones and one of the more positive images of them.
