
Are Tipped Workers Less Free?

photograph of server being presented dollar bills

At a Las Vegas rally in June, Donald Trump floated a proposal to his crowd: No federal income taxes on all wages earned as tips. This was likely an attempt to appeal to local workers in the hospitality industry, which employs up to 26% of workers in the city. Nevada is a battleground state, after all. Regardless, the idea has gained traction. The Trump campaign has since formally adopted the policy, and Kamala Harris’ campaign has endorsed a similar policy.

Despite both major party presidential campaigns supporting this initiative, reaction to it has been mixed at best. Of course, those who would receive tax breaks are rarely opposed to them. Further, employees who earn part of their wages in tips are likely to be in lower income brackets, making this a tax break for the poor. However, economists are skeptical. Many tipped workers already earn so little income that they pay no federal income tax, making this a tax break for them in name but not in practice. Others worry that this proposal is unfair; two workers who earn the same income may pay different tax rates simply based on the industry in which they work. Further, it could create a perverse incentive structure, leading employees to seek a greater portion of wages in tips, and to reclassify other kinds of wages as tips – a high-powered attorney may earn only minimum wage from her law firm, but receive very significant gratuities from clients at each stage of the legal process.

Ultimately, this conversation is taking place during a larger cultural flashpoint about tipping practices, at least in the United States. Many Americans feel that tipping is everywhere and the data support this feeling; tipping has become far more prevalent in recent years, due (in part) to increased tipping during the COVID pandemic, devices like tablets replacing traditional cash registers, and businesses seeking to increase workers’ wages without raising menu prices for consumers. Many of us feel confusion, anxiety, and anger at being asked to tip so often, even at self-checkout. 

However, I want to explore something that goes unprobed during these conversations about tipping. I think that receiving a significant portion of wages as tips is, in fact, bad for the tipped employee. And it may be bad in a way you might not expect – I contend that tipped workers are less free than workers with a regular wage. In order to explain this, though, I must first ask you to take what may seem like a detour.

Imagine an absolute despot, say, a king who rules over a nation with unquestioned authority. This king has a single favorite subject, the jester in his court. So long as the king is pleased with the jester’s performances, the king allows the jester to do whatever he pleases. Other citizens, however, are subject to laws that vastly restrict their options in life. They lack freedoms such as freedom of speech, of assembly, and of religion. Further, the king changes the laws on a whim, leaving the citizens uncertain of what they will be able to do in the future and inhibiting their ability to form long-term plans and pursue what they find valuable. The jester occupies a unique position: he is never forced to comply with the king’s whims, and never punished if he violates the law. Of course, this could change. If the jester falls out of favor with the king, he will become just like any other subject.

Is the jester free? There are some senses in which he is. First, he has negative freedom, since the king does not interfere with him. Second, he has positive freedom – he is free to choose what he would like to do, and he can develop plans about his future, given that his options are not limited in the way that both the content and the unpredictability of the king’s orders limit others. Nonetheless, it seems like the jester is not completely free. His position is precarious, conditional on continuing to please the king. If one joke doesn’t land, if he has one off-performance, his freedom may be stripped away. In other words, his freedom is dependent on the arbitrary whims of the king.

The jester lacks what some call republican freedom. Freedom in this sense is a matter of non-domination – you are free only when others do not wield power over you such that they could interfere with your interests, even if they are not actively doing so. In other words, you lack this freedom if your life and projects are subject to the whims, and depend on the good graces, of others. Philip Pettit describes republican unfreedom as follows:

The grievance I have in mind is that of having to live at the mercy of another, having to live in a manner that leaves you vulnerable to some ill that the other is in a position to arbitrarily impose… It is the grievance expressed by the wife who finds herself in a position where her husband can beat her at will and without any possibility of redress… and by the welfare dependent who finds they are vulnerable to the caprice of a counter clerk for whether or not their children will receive meal vouchers.

With this in mind, we can more clearly see the moral problem with allowing people to depend on tips for a significant portion of their income. Their wages are not wholly guaranteed, nor are they the product of a mutual agreement purely between employee and employer. Their livelihood depends on the arbitrary whims of customers. In most states, the minimum wage for tipped employees is at least five dollars per hour lower than the minimum wage for non-tipped employees – in some, the difference is more than ten dollars per hour. (Federal law requires that employers make up the difference should employees not make enough in tips to meet the federal minimum wage.)

This dependence on customers for wages leads to significant burdens for tipped employees. Since tips are at the discretion of customers, one may be denied their wages for wholly arbitrary reasons; for instance, attractive servers may receive greater tips and Black servers may receive fewer tips than white servers. The dependence tipped employees have on the whims of customers gives customers greater power, and this power may have concerning consequences. A study from Restaurant Opportunities Center United (ROCU) found that nearly 80% of female and a majority of male employees in the restaurant industry claim to have been sexually harassed by customers; tipped workers were more likely to experience sexual harassment than non-tipped workers, and sexual harassment was significantly more likely in states with the lowest tipped minimum wage of $2.13 an hour. One server, in describing her experience with sexual harassment, stated that she has “more freedom” to tell co-workers to stop, but that harassment from a customer makes her “feel a lot more powerless. That’s when I’m, like, man, that’s where my money’s coming from.” Kundro et al. found that in environments where staff have greater perceived dependence on tips and perform more emotional labor, customers are perceived to have more power. Both customers and the staff themselves perceive this power difference. An increased perception of customer power is also correlated with greater frequency of sexual harassment by customers toward staff.

The more staff depend on tips for their wages, the greater power their customers have over them. The greater this power, the more vulnerable, both psychologically and physically, the staff are to the arbitrary whims of the customer. Tipped workers have their republican freedom lessened by their dependence on customers to directly provide their wages.

Of course, no one customer has domineering power over tipped workers in the way that the despotic king has such power over his subjects. A single customer cannot damage their finances in the way that a king can reshape one’s life. Nonetheless, this does not change the extent to which the tipped worker is subject to the arbitrary whims of others; it merely distributes this power across all of her customers. Unlike our jester, the tipped worker does not have to keep one person happy; she has to keep everyone happy. As a result, she is like a jester in all of our courts, at least for an hour or two at a time. Her lessened freedom and her lack of comparative power may damage her status as an equal and may persist even when she no longer depends on tips – the same ROCU study found that women who formerly worked as tipped employees were 1.6 times more likely to tolerate inappropriate behavior in future employment than women who had not previously worked as tipped employees.

There are many reasons to criticize the practice of tipping. It is often unclear when and how much you should tip; even though it is meant to reward excellent service, its role in our service culture may leave you tipping despite inadequate service. It may have its roots in racist practices. And tipping is, ultimately, a hidden cost; when you tip, you are simply paying the wages of the employee in place of the business.

Yet these criticisms obscure perhaps the most important point, the ways in which the practice is bad for tipped employees. Their wages are unstable, perhaps unpredictable, and they are subject to the arbitrary whims of customers if they are to earn those wages. When someone works for tips, they are made dependent upon their customers in a way which may make them less free. The practice may have no place in a society of equals.

What Role Should AI Play in War?

photograph of drone

This month, officials from over 60 nations met and agreed on a blueprint to govern the use of artificial intelligence in the military. Countries like the United States, the Netherlands, South Korea, and the United Kingdom signed an agreement stating that “AI applications should be ethical and human-centric.” (China was a notable holdout.) The agreement governs issues like risk assessments, human control, and the use of AI for weapons of mass destruction. With AI already being used by militaries and with such a wide variety of potential applications, significant questions and fears abound. For some, the technology holds the promise of ending wars more efficiently and (perhaps) with fewer casualties. Others, meanwhile, fear a Manhattan Project moment for the world that could change warfare forever if we are not careful.

The thought of bringing artificial intelligence to the battlefield often conjures the image of “killer robots.” And while there have been moves to create robotic military units and other forms of lethal autonomous weapon systems (LAWs), there are a great many potential military uses for artificial intelligence – from logistics and supply chain matters to guided missile defense systems. In the war zones of Ukraine and Gaza, AI has been increasingly utilized for the purposes of analyzing information from the battlefield to identify targets for drone strikes. There is also, of course, the possibility of applying AI to nuclear weapons to ensure an automated response as part of a mutually assured destruction strategy.

Given such a wide variety of potential applications, it is difficult to assess the various ethical drawbacks and benefits that AI may afford. Many argue that the use of AI will lead to a more efficient, more accurate, and more surgical form of warfare, allowing nations to fight wars at a lower cost and with less risk of collateral damage. If true, there could be humanitarian benefits, as autonomous systems may not only minimize casualties on the opposing side but also keep one’s own human forces from being put in harm’s way – protecting them not only from physical harm but from long-term psychological harm as well. There is also the argument that automated defense systems will be better able to respond to potential threats, particularly when there are concerns about swarms or dummy targets overwhelming human operators. Thus, the application of AI may lead to greater safety from international threats.

On the other hand, the application of AI to war-making poses many potential ethical pitfalls. For starters, making it easier and more efficient to engage in war-making might incentivize states to do it more often. There is also the unpredictable nature of these developments to consider, as smaller nations may find that they can manufacture cheap, effective AI-powered hardware that could upset the balance of military power on a global scale. Some argue that the application of AI for autonomous weapons represents another “Oppenheimer moment” that may forever change the way war is waged.

Another significant problem with using AI for military hardware is that AI is well-known for being susceptible to various biases. This can happen either because of short-sightedness on the part of the developer or because of limitations and biases within the training data used to design these products. This is especially problematic when it comes to surveillance and to identifying potential targets and distinguishing them from civilians. The problem is that AI systems can misidentify individuals as targets. For example, Israel relied on an AI system to determine targets despite the fact that it made errors in about 10% of cases.

AI-controlled military hardware may also create an accountability gap. Who should we hold accountable when an AI-powered weapon mistakenly kills a civilian? Even in situations where a human remains in control, there are concerns that AI can still influence human thinking in significant ways. This raises questions about how to ensure accountability for military decisions and how to ensure that they are in keeping with international law.

Another serious concern involves the opacity of AI military systems. Many are built according to black box principles, such that we cannot explain why an AI system reached the conclusion that it did. These systems are also classified, making it difficult to identify the responsible party for poorly designed and poorly functioning AI systems. This creates what has been described as a “double black box,” which makes it all but impossible for the public to know whether these systems are operating correctly or ethically. Without that kind of knowledge, democratic accountability for government decisions is undermined.

Thus, while AI may offer promise for greater efficiency and potentially even greater accuracy, it may come at great cost. And these tradeoffs seem especially difficult to navigate. If, for example, we knew an AI system had a 10% error rate, but that human error rates are closer to 15 or 20%, would that fact prove decisive? Even given the concerns about AI accountability? When it comes to military matters the risks of error carry enormous weight, but does that make it more reckless to use this unproven technology or more foolhardy to forgo its potential benefits?


Has AI Made Photos Untrustworthy?

Since the widescale introduction and adoption of generative AI, AI image generation and manipulation tools have felt a step behind the more widely used chatbots. While publicly available apps have become more and more impressive over time, whenever you came across a truly spectacular AI-generated image, it was likely created by a program that required a bit of technical know-how to use, or at least had a few hoops that you had to jump through.

But these barriers have been disappearing. For example, Google’s Magic Editor, available on the latest version of their Pixel line of phones, provides users with free, powerful tools that can convincingly alter images, with no tech-savviness required. It’s not hard to see why these features would be attractive to users. But some have worried that giving everyone these powers undermines one of our most important sources of evidence.

If someone is unsure whether something happened, or people disagree about some relevant facts, a photograph can often provide conclusive evidence. Photographs serve this role not only in mundane cases of everyday disagreement but when the stakes are much higher, for example in reporting the news or in a court of law.

The worry, however, is that if photos can be so easily manipulated – and so convincingly, and by anyone, and at any time – then the assumption that they can be relied upon to provide conclusive evidence is no longer warranted. AI may then undermine the evidential value of photos in general, and with it a foundational way that we conduct inquiries and resolve disputes.

The potential implications are widespread: as vividly illustrated in a recent article from The Verge, one could easily manipulate images to fabricate events, alter news stories, and even implicate people in crimes. Furthermore, the existence of AI image-manipulating programs can cause people to doubt the veracity of genuine photos. Indeed, we have already seen this kind of doubt weaponized in high-profile cases, for example when Trump accused the Harris campaign of posting an AI-generated photo to exaggerate the crowd size at an event. If one can always “cry AI” when a photo doesn’t support one’s preferred narrative, then baseless claims that would have otherwise definitively been disproven can more easily survive scrutiny.

So have these new, easy-to-use image-manipulating tools completely undermined the evidential value of the photograph? Have we lost a pillar of our inquiries, to the point that photos should no longer be relied upon to resolve disputes?

Here’s a thought that may have come to mind: tools like Photoshop have been around for decades, and worries around photo manipulation have been around for even longer. Of course, a tool like Photoshop requires at least some know-how to use. But the mere fact that any photo we come across has the potential of having been digitally manipulated has not, it seems, undermined the evidential value of photographs in general. AI tools, then, really are nothing new.

Indeed, this response has been so common that The Verge decided to address it in a separate article, calling it a “sloppy, bad-faith argument.” The authors argue that new AI tools are importantly dissimilar to Photoshop: after all, it’s likely that only a small percentage of people will actually take the time to learn how to use Photoshop to manipulate images in a way that’s truly convincing, so giving everyone the power of a seasoned Photoshop veteran with no need for technical know-how represents not merely a different degree of an existing problem, but a new kind of problem altogether.

However, even granting that AI tools are accessible to everyone in a way that Photoshop isn’t, AI will still not undermine the evidential value of photographs.

To see why, let’s take a step back. What is a photo, anyway? We might think that a photo is an objective snapshot of the world, a frozen moment in time of the way things were, or at least the way they were from a certain point of view. In this sense, viewing a photo of something is akin to perceiving it, as if it were there in front of you, although separated in time and space.

If this is what photos are then we can see how they could serve as a definitive and conclusive source of evidence. But they aren’t really like this: the information provided by a photo can’t be interpreted out of context. For instance, photos are taken by photographers, who choose what to focus on and what to ignore. Relying on photos for evidence requires that we not simply ask what’s in the photo, but who took it, what their intentions were, and if they’re trustworthy.

Photos do not, then, provide evidence that is independent of our social practices: when we rely on photos we necessarily rely on other people. So if the worry is that new AI tools represent a fundamental change in the way that we treat photos as evidence because we can no longer treat photos as an objective pillar of truth, then it is misplaced. Instead, AI imposes a new requirement on us when drawing information from photos: the evidential value of a photo will now partly depend on whether we think its source would try to intentionally mislead us using AI.

The fact that we evaluate photographs not as independent touchpoints of truth but as sources of information in the context of our relationships with other people explains why few took seriously Trump’s claim that the photo of Harris’ supporters was AI-generated. This was not because the photo was in any sense “clearly” or “obviously” real: the content of the photo itself could very well have been generated by an AI program. But the fact that the accusations were made by Trump and that he has a history of lying about events depicted in photographs, as well as the fact that there were many corroborating witnesses to the actual event, means that the photo could be relied upon.

So new AI programs do, in a way, make our jobs as inquirers harder. But they do so by adding to problems we already have, not by creating a new type of problem never before seen.

But perhaps we’re missing the point. Is it not still a blow to the way we rely on photos that we now have a new, ever-present suspicion that any photo we see could have been manipulated by anyone? And isn’t this suspicion likely to have some effect on the way we rely on photographic evidence, the ways we settle disputes, and corroborate or disprove different people’s versions of events?

There may very well be an increasing number of attempts at appealing to AI to discredit photographic evidence, or to attempt to fabricate it. But compare our reliance on photographs to another form of evidence: the testimony of other people. Every person is capable of lying, and it is arguably easy to do so convincingly. But the mere possibility of deception does not undermine our general practices of relying on others, nor does it undermine the potential for the testimony of other people to be definitive evidence – for example, when an eyewitness provides evidence at a trial.

Of course, when the stakes are high, we might look for additional, corroborating evidence to support someone’s testimony. But the same is the case with photos, as the evidential value of a photograph cannot be evaluated separately from the person who took it. So just as the ever-present possibility of lying has not undermined our reliance on other people, the ever-present possibility of AI manipulation will not undermine our reliance on photographs.

This is not to deny that new AI image-manipulating tools will cause problems. But the argument that they will cause brand new problems because they create doubts that undermine a pillar of inquiry, I argue, relies upon a misconception of the nature of photos and the way we rely on them as evidence. We have not lost a pillar of truth that provides objective evidence that has up until recently been distinct from the fallible practice of relying on others, since photographs never served this role. New AI tools may still create problems, but if they do, they can still be overcome.

Should Space Belong to Everyone?

photograph of dish antenna pointed to stars

The prospect of space mining is only becoming more likely. Not only have nations like China and Russia started making moves, but the language of the Artemis Accords leaves the door very much open. Even NASA is now talking about a “lunar gold rush.”

But there’s a problem. Most treaties that govern human action beyond Earth’s atmosphere – like the Outer Space Treaty and the Moon Treaty – prohibit nations from appropriating outer space. Instead, these treaties claim that the stars belong to all of humanity. What is to be done? What is practically enforceable? Should private ownership of a shared resource be permitted? How should we address the legal gap?

The Outer Space Treaty was signed during the Cold War, as the main spacefaring nations (the United States and the Soviet Union) were seeking to de-escalate tensions. It not only banned appropriation – the exploration and use of outer space was to be “carried out for the benefit and in the interests of all countries” – but also banned the deployment of nuclear weapons in space. Likewise, the Moon Treaty makes the Moon and its resources “the common heritage of mankind.” It bans appropriation while calling for the development of an international regime to govern the exploitation of natural resources and an “equitable sharing” of those benefits. It has not been ratified by any major spacefaring nation.

The former Trump administration, for instance, stated that “the United States does not view [outer space] as a global commons” but rather seeks support for “the public and private recovery of resources.” Similarly, the 2015 Commercial Space Launch Competitiveness Act explicitly endorses U.S. citizens privately engaging in the exploration and exploitation of space resources. (Luxembourg, Japan, Saudi Arabia, and the UAE have passed similar laws.) But this sets up an awkward gray zone in international law: nations cannot claim sovereignty, but their citizens can claim an exclusive right to extract resources and profit from them.

There are good reasons to want to preserve the ideal that space belongs to everyone. For starters, banning nations from claiming sovereignty means there will be less need to protect that sovereignty with weapons. It would also prevent significant shifts in the balance of power on Earth: nations would not need to jockey for strategic position in outer space.

The collectivist approach might also curb some of the economic consequences of exploration. The private development of space promises not only to inflame national tensions, but also to exacerbate income inequality on Earth. Outlawing appropriation may curtail a “first come, first served” mentality in which the nations quickest to act reap the most benefits while non-spacefaring nations are left to fend for themselves. An international regime that governs space mining may be able to distribute some of the benefits to developing nations – just as the United Nations International Seabed Authority aims to do when countries mine in international waters.

While there are good reasons to want to preserve outer space as a “common heritage” for all of humanity, it may prove very difficult in practice. Outside of mining, there are many competing (and often incompatible) ideas about how outer space should be used. How, for example, should we resolve the dispute between the Navajo, who strongly oppose leaving human remains on the Moon, and NASA, who recently facilitated a lunar memorial?

On Earth, we use the concept of private property to settle land usage issues. Those who own the land get exclusive right to say what happens on it. Clearly defining who is entitled to what avoids ambiguities. (It also makes private investment easier.) As an alternative, the Artemis Accords call for the establishment of “safety zones” where parties agree to avoid harmful interference. But these proposed zones are problematic for several reasons. Chief among these is that we lack consensus. Not all nations agree about the rules that govern space, and we have limited tools for bringing defectors into compliance.

There are also problems if we look to international deep-sea mining as a test case. The International Seabed Authority is supposed to guarantee equitable sharing of profits of deep-sea mining in international waters. In practice, however, it has been difficult to get such an international framework in place. The United States, for example, refused to ratify the treaty, and the Authority has so far not authorized any commercial mining contracts.

Unless some of the ambiguities involved in applying international law to outer space are resolved, the future looks perilous. Without a clear international order indicating who is entitled to what and how, there remains ample room for misunderstanding and conflict.

“Freedom” Is Good for Democrats, Bad for Democracy

photograph of miniature flags at political rally

The central debate in this presidential campaign was set out at the beginning by the novice pundit Beyonce: “freedom, freedom where are you?” The Democrats have claimed that freedom has always belonged with them and they are finally reclaiming it from the right. Conservatives, of course, countered that freedom is being hijacked from its true home and used to dress up the left’s usual coercive politics. So, who’s right – did the Democrats reclaim or hijack freedom? And, more importantly, should they have done either?

Commentators on both sides agreed on the answer to the first question: it depends on your definition of “freedom.” The philosopher Isaiah Berlin famously distinguished between two kinds of freedom, negative and positive. Negative freedom asks the government only to “mind [its] own damn business,” while positive freedom allows the government to intervene in people’s lives to help them achieve their goals. So, when Democrats used “freedom” to describe their interventionist policies, they were reclaiming a positive definition of freedom to reframe values like safety, equality, and justice as (somewhat tortured) freedoms: the “freedom from fear, violence, and harm,” the “freedom to live without fear of bigotry and hate,” and the “freedom to learn and acknowledge our true and full history.”

Even as commentators argued over whether Democrats were entitled to use the language of freedom, however, no one questioned whether they should. Messaging gurus declared it a winning strategy based on focus group testing and the broad appeal of freedom. Liberal writers celebrated it as potentially transforming American politics. And even political philosophers were pleased; parsing this one word back into its many meanings is going to keep us relevant for at least a month. In an election that has struggled to elevate the debate above who is “weird” and who is “real,” it might be hard to get worked up about too much philosophy in politics. But, in this case at least, what’s good for philosophers and Democrats is ultimately bad for democracy.

Campaigns are not just contests between messaging gurus. Party platforms are created to win elections, but they also serve the important democratic purpose of providing a clear contrast to the other party so voters can choose the party that speaks to their priorities. To draw a clear contrast, it helps to use different words. Consider the debate over national security after 9/11 or the recent debates on pandemic policy. It is difficult enough to choose between safety and privacy or freedom and health without commercials telling you that privacy is the new safety or true freedom starts with health. And once both sides try to redescribe their priorities in the language of freedom, you end up with a confusing contest between freedom from fear and freedom from surveillance, or the freedom to be healthy versus the freedom of association. These debates are framed for lawyers, not the voting public. And if voters can’t understand what they’re choosing between, then their chosen representatives don’t and can’t represent their will.

Replacing other values with “freedom” also runs counter to the pluralist strand of Democratic politics, which Barack Obama represented in his speech at the DNC. As he often has, Obama talked about the importance of how we treat those who disagree with us – how we listen to and learn from them. On his podcast the next day, Ezra Klein described this strand as valuing how we “make space for them even in disagreement.” Part of making this space is to acknowledge that others’ values are indeed valuable, even if they aren’t what’s most important right now. But when we suggest that ours is the “real” freedom, we don’t leave any space for theirs. For instance, when we frame pandemic policies as sacrificing some amount of freedom to promote public health, we are at least acknowledging the trade-off between legitimate values. If, instead, we call public health the “freedom from viruses,” it suggests that there is not an inevitable trade-off but a way in which we can have both freedom and health. Anyone who thinks otherwise just has the wrong idea about freedom. This insight that there are inevitable trade-offs in politics is another aspect of Isaiah Berlin’s thought – one that did not resurface last week. In Berlin’s world, we often have to choose between good things – safety and privacy, speech and inclusion, freedom and health. If we don’t acknowledge these trade-offs, it becomes difficult to even see the need for a compromise with others, let alone pursue one.

Maybe you’re not a pluralist Democrat, though, and you don’t care whether voters can make a clear choice. Maybe all that matters is that voters choose the right president in November, and we can work out the policy details later. Even then, reframing all Democratic values as freedoms does little to move the public in a progressive direction. The reason that reframing progressive values as freedoms broadens their appeal is that freedom seems to demand less than progressives usually want. For some people, freedom only demands non-interference – no one telling them what to do. For others, freedom demands that they have the option for something, but not necessarily a realistic opportunity, a responsibility, or an entitlement to that thing. As a result, the freedom to vote is not necessarily a demand to make voting easier. The freedom to “get ahead” is not necessarily an equal opportunity to get ahead. And the “freedom to learn and acknowledge our true history” takes a strong stand against banning books, but doesn’t suggest a responsibility to teach our children how race has shaped that history. When these slogans get translated into policy, Democrats may find that freedom is flexible enough to accommodate Republican aspirations, but not quite broad enough for their own.

CRISPR Ethics and the Dilemma of Scientific Progress

Earlier this month, He Jiankui tweeted the following:

For the uninitiated, He, using the gene editing technology CRISPR-Cas9, edited the DNA of embryos for seven couples, with one of these embryos eventually resulting in the birth of twins named Lulu and Nana. His (supposed) goal was to engineer HIV resistance by removing part of a receptor on white blood cells to which the virus is drawn. In other words, he wanted to make it less likely that the twins could be infected with HIV in the future.

After revealing his work, He received widespread condemnation. The Southern University of Science and Technology in Shenzhen, China, where He worked, revoked his contract and terminated all his teaching and research duties. And Julian Savulescu, one of the world’s most prominent bioethicists, said of He’s work, “If true, this experiment is monstrous… This experiment exposes healthy normal children to risks of gene editing for no real necessary benefit. In many other places in the world, this would be illegal punishable by imprisonment.” And indeed, this is what happened, with He being sentenced to 3 years in prison for his actions.

In 2023, roughly a year after his release, He admitted he moved too fast in editing the embryos, and while not acknowledging he did anything wrong, he demonstrated a degree of personal reflection about the impact of his actions. However, more recently, that sense of reflection appears to have faded. Not only is he saying he will release the research that led to the twins’ birth — on the condition that those papers are published in either of the two top-ranking science journals — but in April, He said he was proud of his actions.

Now, undoubtedly, He’s research broke new scientific ground. But it was also ethically unforgivable and highly illegal. Nevertheless, one could argue that, despite its ethically and legally transgressive nature, He’s work might lay the foundation for future advancements that may save and improve lives. After all, CRISPR-Cas9 has been touted as a tool that could revolutionize health. So, with He suggesting he might make his work available, we find ourselves asking whether we should overlook his “monstrous” actions for the potential good they might bring.

This type of conundrum is nothing new. The knowledge used to inform today’s medical practice often emerged from less-than-reputable sources. Indeed, some of our most basic understanding of anatomy and physiology was born from horrors that still make the world shudder. As the BBC notes when reflecting on the 1940s/50s Operation Paperclip:

Allied forces also snapped up other Nazi innovations. Nerve agents such as Tabun and Sarin (which would fuel the development of new insecticides as well as weapons of mass destruction), the antimalarial chloroquine, methadone and methamphetamines, as well as medical research into hypothermia, hypoxia, dehydration and more, were all generated on the back of human experiments in concentration camps.

What was endured under that regime was beyond horrific, but the knowledge those scientists created has changed the world, with both negative and positive impacts. The phenomenon of knowledge, some of it invaluable, coming from the most terrible acts and actors has been spotlighted in the series The Human Subject, in which Drs. Adam Rutherford and Julia Shaw explore the connections between modern medicine and its terrible beginnings (not all of them historical).

What is being asked of us, then, with He proposing to publish his work in a peer-reviewed journal and the enormous good that might result from it, is whether we can overlook or accept the immoral and illegal way the breakthrough was made (to use a cliché) for the greater good.

To be clear, however, this isn’t a case of a cost-benefit analysis, at least not in a straightforward way. The cost – the harm that might come from the experiment – has already been endured. It’s already happened. So, we’re not balancing doing an action against not doing an action. Instead, we need to consider whether the practical benefits of using this knowledge are so tremendous that they cause us to abandon our principles. He’s actions might have been monstrous: he may have shortened Lulu and Nana’s lives, and his genetic manipulation might have had unforeseen off-target effects. But can we overlook these facts given the impact his work might have in helping people in the future?

Unfortunately, I don’t have a good answer for you (sorry if you were expecting one).

On the one hand, if the proponents of CRISPR are to be believed, then the enormous benefits that might come from a better understanding of genetic alteration cannot be passed up, regardless of how that knowledge came into being. Countless debilitating genetic diseases, from sickle cell anaemia to cystic fibrosis, could be effectively tackled through CRISPR’s application. Thus, it doesn’t seem reasonable to turn away from such a possibility because we don’t like how we obtain such knowledge.

On the other hand, however, I’m sure many would feel an intense moral distaste for using such knowledge when we know how it has, at its very foundation, been tainted. How we come to know something is just as important as the thing itself. And while good may come from He’s work, it seems nothing more than opportunistic to opt for the outcome that might benefit us now and in the future and write off Lulu and Nana as a price worth paying.

Ultimately, then, I’m hesitant to give He a pass. He knowingly breached all legal and ethical conventions, and to say that this matters less than the utility of what he uncovered sends a signal to him and others that success matters more than what’s right.

Mergers, Monopolies, and Workers

photograph of Kroger's headquarters

In October of 2022, Kroger and Albertsons announced plans for a 24.6 billion dollar merger. This process was interrupted by the US Federal Trade Commission (FTC) and several states suing to halt the merger earlier this year. Consolidation of these two supermarket giants, they claim, will result in higher prices and hurt workers. The issue is now having its day in court.

The federal challenge continues the Biden administration’s muscular use of antitrust law to limit the consolidation of economic power in the hands of the few. It is, however, a marked departure from a more lax “consumer welfare” approach which has been ascendant since the pro-corporation Reagan administration.

Typically, our concerns about monopolies focus almost exclusively on consumers. How will fewer providers affect prices? The fear is that a corporation with a massive market share will use its power to jack up the prices of goods and services. But what about workers? We should expect workers to be one of the most affected populations in a merger. From a consumer perspective, even if the merger lowers prices (and most don’t), we’re talking about a small price decrease on some goods. Meanwhile, the effect on employees can be far more decisive, either through job loss, pay cuts, or increasing work demands.

First, there are so-called monopsony concerns. A monopoly is a situation with just one seller; in a monopsony, by contrast, there is just one buyer (or, in this case, employer). When monopsony occurs, a corporation may be able to lower wages because employees do not have an alternative. Additionally, mergers may lead to restructuring or the elimination of duplicate personnel. Finally, larger corporations may simply have more power to push back against unions or other forms of labor organizing, as well as to influence labor law and worker protections. This is not to say mergers can never benefit workers – the most obvious example is workers at a company that would otherwise go out of business – but it is uncommon.

And yet, until recently, worker considerations were largely absent from discussions of mergers. How did we get here?

American antitrust law began as a reaction to the industrial excesses of the Gilded Age, first through the 1890 Sherman Antitrust Act and then with the more exhaustive 1914 Clayton Antitrust Act and 1914 Federal Trade Commission Act. All were passed with overwhelming legislative support. Courts were granted wide latitude in evaluating monopolistic and anti-competitive practices, but there was a general suspicion of market consolidation and giant corporations. Supreme Court Justice Louis Brandeis, a legendary critic of monopolies, railed against the “curse of bigness.” His concerns were far reaching — yes, economic effects on price, but also the anti-democratic effects of wealth inequality, and power over workers.

Early antitrust thinking laid out a smorgasbord of considerations regarding competition, corporate power, the protection of small business, the price and quality of goods, and worker well-being. Especially central was whether a merger was seen as pro-competitive or anti-competitive. While American antitrust practices shifted with the broader political and economic winds, the 1970s are generally viewed as the period when a decisive change occurred. Although historians quibble about the finer details, it’s widely thought that the legal scholar Robert Bork served as a ferryman for the more anti-regulatory Chicago School of Economics, helping bring the “consumer welfare” standard to American antitrust law.

Bork and others argued that the focus should not be on competition as such, but rather on broader considerations of economic welfare. The consumer welfare standard condenses the evaluation of mergers down to a single question: Does this merger help or hurt the consumer? Usually the primary consideration for consumer welfare is price, but one might also consider such factors as quality or product innovation. Bork also emphasized the efficiency gains that can result from mergers, e.g., by eliminating redundant infrastructure or increasing bulk buying.

Some critics, however, point out that Bork got the terminology wrong and was actually advocating for what’s called the total welfare standard. What’s the difference? A consumer welfare standard cares specifically about consumers, so it’s only interested in efficiency gains if they are passed on to consumers, e.g., through lower prices. A total welfare standard is interested in both consumers and producers (where the producer is the corporation, not the workers). So a corporation saving money counts in favor of a merger on the total welfare standard. For example, a merger that raised prices slightly while cutting the firm’s costs substantially could pass a total welfare test yet fail a consumer welfare test.

What is there to say about these welfare standards? For one, the scope of interest is narrow. Bork and others of a similar mind focus only on consumer welfare, and they use quite a narrow understanding of welfare at that. It might be objected that an advantage of the “consumer welfare”/”total welfare” approaches is that they are more tractable, i.e., easier to apply. However, tractability defenses become less and less compelling the further an approach is from our goals, so tractability alone cannot justify the approach. Someone like Justice Brandeis was broadly interested in questions of the distribution of wealth and power in society. For him, the aim of antitrust is to tackle the social, economic, and political effects of corporate power and monopolies. Bork’s approach, then, would simply miss the point. Workers are left out of the “welfare” discussion almost entirely. In fact, on the total welfare standard, a company laying off workers who perform duplicate functions counts in favor of the merger as an efficiency gain.

The Biden administration represents a break from narrow welfare standards and an embrace of the so-called New Brandeisians. They are still decidedly pro-market, but believe that considerations of corporate power and more aggressive use of antitrust law are necessary to ensure the market functions to public (and worker) benefit. Kroger and Albertsons is a case in point.

Crucially, the shift should not be seen as a strictly economic one. Ultimately, it is about our values. Economists can help us understand the effect of mergers in different contexts, but they cannot tell us what social and economic effects we want. Likewise, while there are complex scientific questions about which antitrust laws and policies best realize our social, political, and economic goals, first we need to seriously consider what those goals are. Are we worried about consumer prices? About corporate power? About worker well-being? All of the above?

Should You Bully a Nazi?

close up photograph of couch cushions

People don’t like JD Vance. His memoir overstated his Appalachian identity and negatively stereotyped the region. He went from being a “Never Trump guy” and calling Trump “America’s Hitler” to joining the Trump ticket. He insulted his opponents as being “childless cat ladies” and disparaged Harris for having no biological children of her own.

Each of these complaints has made the rounds on the meme machine circuit, but none has gone nearly as viral as the claim that JD Vance admitted to having an intimate encounter with a couch. Tim Walz and others have even referenced it obliquely in speeches.

Not only is the allegation false, but it’s also not the reason why people don’t like Vance. Instead, it’s become this overall aesthetic descriptor that seems to capture the vibes of all the critiques combined. Most meme sharers seem to know it’s not true and have still chosen to pass it along. This appears to be a form of bullying.

Even if the power dynamics are different and insulate Vance from many real-world consequences, the basic structure of this situation is that the outrageous couch insult is being used to demoralize, denigrate, and beat down Vance as a political candidate. It’s personal.

Grant the following things for the sake of argument: 1) It’s plausible to call Vance and Trump fascists; Vance himself has made similar allegations about Trump. 2) Sharing the couch memes does amount to bullying.

Is this morally permissible? It feels sus.

Like the “punch a Nazi” strategy popular at the beginning of Trump’s first term, the tactic seems to be effective on several counts:

- It makes Vance look weak and weird, which is antithetical to his core brand. By making Vance look less threatening, it shifts the narrative away from framing him as scary, dangerous, and powerful, which directly denies him social power and damages his reputation among both supporters and opponents.

- It replaces the serious conversations about the real critiques of Vance (which Vance himself does not seem to care about) with an unserious insult (that Vance does care about). This reverses a common bad faith dynamic and closes off an endless debate about Vance’s moral character.

- It targets Vance specifically instead of Vance supporters, which avoids alienating broad swaths of the population. But it also reflects negatively on Trump, who likely picked Vance to mirror his strong man aesthetic.

If we’re just reasoning based on consequences, and those consequences are that fascists lose power and social clout, then the couch memes are likely morally permissible (if not obligatory). And that seems true even considering the sizeable group of people being misled by the memes into thinking the event actually took place.

If we’re thinking about moral rules that should hold the same for everyone, then a principle like “you should never bully” or “you should never lie” would forbid spreading this false meme. There is likely some real harm done to JD Vance’s psyche and to others who fear being similarly falsely maligned.

But the approach I would like to take to the question is more holistic: What is this strategy trying to accomplish? Does it require dehumanizing Vance? Does it feed conspiratorial thinking and a reductive “own the repubs” mentality? (That, I’ll admit, doesn’t sound nearly as compelling as its opposite.)

Here’s the thing about tools and tactics: they’re often great to use in some situations, and not in others. If the couch memes are narrowly deployed to only target Vance, don’t displace the possibility of serious, good faith conversation, and represent only one of a number of tactics to shut down fascist behavior and talking points, then they may be distasteful but not emblematic of some larger pathology.

But if the couch memes are instead part of a general disregard for Vance’s life and a desire to seek revenge on MAGA conservatives at every opportunity, with the hope of completely excluding all of them from public life and with no regard for truth, then we have a real problem.

I suspect that both general approaches (and a number of other approaches around and between) are at play. No one common psychology informs the memes’ spread. They also likely caught on because they are so distasteful and eye-catching, much more so than the similar “weird” insult thrown out earlier in the election. Much of the current strategy of the DNC, official and unofficial, seems to be trying to convince conservative voters that their leaders aren’t worthy. Some of these efforts are, I think, morally permissible, such as the musical remixes of Vance’s anti-Trump comments.

The couch meme, by contrast, is morally wrong. It is a proxy response to legitimate critiques of Vance, but it is false and defamatory. It does not directly respond to those critiques but instead uses unrelated shaming tactics to beat Vance into submission. It mirrors Trump’s bullying campaigns against other politicians such as Ron DeSantis (who, like Vance, is not especially sympathetic as a character).

At the same time, there are decidedly much worse forms of internet bullying and much more egregious campaign tactics that are fully outside of the bounds of democratic process. We shouldn’t get so caught up on the morality of the couch memes that we forget the bigger picture.

I hope that this unserious and absurd meme will eventually bring us back to being able to have serious, respectful policy discussions about where we want the country to go in the future. Maybe we can finally talk about how to solve affordable housing. If we can prevent Trump and Vance from taking power and abusing the recent Supreme Court decision, then maybe we can get back to a more stable form of democratic exchange, with civil presidential debates and thoughtful consideration for our neighbors.

There is probably no perfect tactic to push back against a candidate who is dramatically trying to undermine the American Constitution. While the couch memes are certainly morally mixed, they are likely preferable to other more violent exchanges, and a less aggressive tactic like the “weird” insults might be less successful.

Let’s collectively take the imperfections of this moment to move towards a better future, without forgetting the humanity of our fellow Americans.