
Mergers, Monopolies, and Workers

photograph of Kroger's headquarters

In October of 2022, Kroger and Albertsons announced plans for a $24.6 billion merger. That process was interrupted earlier this year when the US Federal Trade Commission (FTC) and several states sued to halt the merger. Consolidation of these two supermarket giants, they claim, will result in higher prices and hurt workers. The issue is now having its day in court.

The federal challenge continues the Biden administration’s muscular use of antitrust law to limit the consolidation of economic power in the hands of the few. It is, however, a marked departure from a more lax “consumer welfare” approach which has been ascendant since the pro-corporation Reagan administration.

Typically, our concerns about monopolies focus almost exclusively on consumers. How will fewer providers affect prices? The fear is that a corporation with a massive market share will use its power to jack up the prices of goods and services. But what about workers? We should expect workers to be one of the populations most affected by a merger. From a consumer perspective, even if the merger lowers prices (and most don’t), we’re talking about a small price decrease on some goods. Meanwhile, the effect on employees can be far more decisive, whether through job loss, pay cuts, or increased work demands.

First, there are so-called monopsony concerns. A monopoly is a situation with just one seller; in a monopsony, by contrast, there is just one buyer (or, in this case, employer). When monopsony occurs, a corporation may be able to lower wages because employees do not have an alternative. Additionally, mergers may lead to restructuring or the elimination of duplicate personnel. Finally, larger corporations may simply have more power to push back against unions or other forms of labor organizing, as well as to influence labor law and worker protections. This is not to say mergers can never benefit workers – the most obvious example is workers at a company that would otherwise go out of business – but it is uncommon.

And yet, until recently, worker considerations were largely absent from discussions of mergers. How did we get here?

American antitrust law began as a reaction to the industrial excesses of the Gilded Age, first through the 1890 Sherman Antitrust Act and then with the more exhaustive 1914 Clayton Antitrust Act and 1914 Federal Trade Commission Act. All were passed with overwhelming legislative support. Courts were granted wide latitude in evaluating monopolistic and anti-competitive practices, but there was a general suspicion of market consolidation and giant corporations. Supreme Court Justice Louis Brandeis, a legendary critic of monopolies, railed against the “curse of bigness.” His concerns were far reaching — yes, economic effects on price, but also the anti-democratic effects of wealth inequality, and power over workers.

Early antitrust thinking laid out a smorgasbord of considerations regarding competition, corporate power, the protection of small business, the price and quality of goods, and worker well-being. Especially central was whether a merger was seen as pro-competitive or anti-competitive. While American antitrust practices shifted with the broader political and economic winds, the 1970s are generally viewed as the decade when a decisive change occurred. Although historians quibble about the finer details, it’s widely thought that the legal scholar Robert Bork served as a ferryman for the more anti-regulatory Chicago School of Economics, helping bring the “consumer welfare” standard to American antitrust law.

Bork and others argued that the focus should not be on competition as such, but rather on broader considerations of economic welfare. The consumer welfare standard condenses the evaluation of mergers down to a single question: Does this merger help or hurt the consumer? Usually the primary consideration for consumer welfare is price, but one might also consider such factors as quality or product innovation. Bork also emphasized the efficiency gains that can result from mergers, e.g., by eliminating redundant infrastructure or increasing bulk buying.

Some critics, however, point out that Bork got the terminology wrong and was actually advocating for what’s called the total welfare standard. What’s the difference? A consumer welfare standard cares specifically about consumers, so it’s only interested in efficiency gains if they are passed on to consumers, e.g., through lower prices. A total welfare standard is interested in both consumers and producers (where the producer is the corporation, not the workers). So, on the total welfare standard, a corporation saving money counts in favor of a merger.

What is there to say about these welfare standards? For one, the scope of interest is narrow. Bork and others of a similar mind focus only on consumer welfare, and use quite a narrow understanding of welfare at that. It might be objected that an advantage of the “consumer welfare”/“total welfare” approaches is that they are more tractable, i.e., easier to apply. However, tractability defenses become less and less compelling the further an approach strays from our goals, so tractability alone cannot justify the approach. Someone like Justice Brandeis was broadly interested in questions of the distribution of wealth and power in society. For him, the aim of antitrust is to tackle the social, economic, and political effects of corporate power and monopolies. Bork’s approach, then, would simply miss the point. Workers are left out of the “welfare” discussion almost entirely. In fact, on the total welfare standard, if a company lays off a bunch of workers performing duplicate functions, this counts in favor of the merger as an efficiency gain.

The Biden administration represents a break from narrow welfare standards and an embrace of the so-called New Brandeisians. They are still decidedly pro-market, but believe considerations of corporate power and the more aggressive use of antitrust law are necessary to ensure the market functions to public (and worker) benefit. Kroger and Albertsons is a case in point.

Crucially, the shift should not be seen as a strictly economic one. Ultimately, it is about our values. Economists can help us understand the effect of mergers in different contexts, but they cannot tell us what social and economic effects we want. Likewise, while there are complex scientific questions about which antitrust laws and policies best realize our social, political, and economic goals, first we need to seriously consider what those goals are. Are we worried about consumer prices? About corporate power?  About worker well-being? All of the above?

Should Corporations Be Democratic?

photograph of chairs in a boardroom

It can be easy to forget what a corporation essentially is. We may all have preconceived notions about what a corporation should do, but, ultimately (and historically), a corporation is simply a group of people engaged in some kind of enterprise who are legally recognized as a distinct entity. It is common enough to think that a corporation should primarily focus on maximizing its returns for shareholders, but obviously the business world is more complicated than this narrow view of earnings. We saw this last week with Elon Musk highlighting that he is willing to prioritize political principles over profits. A similar question was raised regarding the future of OpenAI when its board of directors was forced to rehire its CEO after employees threatened to leave. This tension between profits and principles gives rise to an interesting question: Should corporations be democratic – should employees have a direct say over how a corporation is run? Is this likely to result in more ethical corporations?

There is no fixed notion of what a corporation is for or how it is supposed to be run. Typically, a corporation is governed by a board of directors who are elected by shareholders and who appoint a CEO to run the company on their behalf. This creates a structure where executives and managers have a clear fiduciary responsibility to act in the interests of shareholders. It can also create tension within a corporation between management and labor. Whereas the owners and managers seek to maximize profit, employees seek higher wages, greater benefits, and job security. But would it be better if the gap between workers and owners were erased, and employees had direct democratic influence on the company?

Industrial democracy is an arrangement in which employees make decisions, share responsibility, and exercise some authority in the workplace. Giving employees a democratic say can be done in many ways. A company could include employees in a trust, where that trust controls the corporation’s shares and thus the board of directors. Another way employees can exercise democratic control over a corporation is by establishing a cooperative in which every employee owns a share. Finally, a third form of industrial democracy is “codetermination,” as practiced in Germany, where employees of any corporation with more than 1,000 employees must be given representation on the corporation’s board of directors.

Defenders of industrial democracy argue that the practice would ensure employees have a more meaningful influence on their workplace, giving them greater say over how corporations are run and what their wages will be. They further argue that this can lead to higher wages, less turnover, greater income equality, and greater productivity. For example, Mondragon – one of the largest worker cooperatives, headquartered in Spain – allows its employees to vote on important corporate policies. They have, for instance, enacted regulations requiring that executive wages be capped at a certain ratio relative to those of the lowest-paid workers.

Some have argued that efforts to democratize the workplace help to offset the lost influence and bargaining power that has come with the demise of unions. It’s possible that some forms of industrial democracy could serve as an alternative to unions: rather than creating a countervailing power to management, individual employees would enjoy a seat at the table. In this way, employees might become more invested in the company, and disputes and conflicts between employees and management might be prevented before they start.

Supporters of industrial democracy argue that reforming the corporate governance system is necessary because the standard model contributes to economic underperformance, poor social outcomes, and political instability. Giving employees more say can make them more enthusiastic about the company. It may also be especially desirable in a case like OpenAI’s where there is concern about how the AI industry is regulating itself. Greater employee say could even encourage greater accountability.

On the other hand, critics worry that increasing democracy in the workplace, particularly if required by law, represents an unwarranted interference with business practice. Obviously, people on different sides of the political spectrum will differ when it comes to regulations mandating such practices, but there may be a more fundamental ethical argument against granting employees representation on a board of directors: they don’t deserve it. The argument that investors should have representation follows from the idea that they are choosing to risk their own funds to finance the company. Given their investment, the board and corporate executives have the right to dictate the company’s direction.

In the abstract, critics of industrial democracy argue that it will ultimately undermine the business because employees will be incentivized to pursue their own self-interest rather than what is best for the company. In practice, however, perhaps one reason for not adopting industrial democracy is that it doesn’t necessarily have a significant impact on business practices in areas where it is practiced.

A 2021 study from MIT found “the evidence indicates that … codetermination is neither a panacea for all problems faced by 21st century workers, nor a destructive institution that appears obviously inferior to shareholder control. Rather, it is a moderate institution with nonexistent or small positive effects.” When employees have a stake in the company, they tend to behave more like shareholders. If their bottom lines are affected, there is no reason to think that an employee-controlled corporation will be inclined to act either more ethically or more cutthroat.

In other words, the greatest argument against increasing workers’ say in the workplace may be that it simply won’t have the positive effects on corporate governance that might be hoped for. Even direct representation on the board does not equate to any kind of direct control over the corporation’s practices, particularly if worker representatives are in the minority. Union supporters have long argued that rebuilding union strength is a superior alternative to increasing employee representation.

It is worth pointing out, however, that even if workers don’t invest money directly in a business, they do invest their time and risk their livelihoods. Perhaps this fact alone means employees deserve a greater voice in the workplace. While representation on the board of directors won’t necessarily lead to dramatic change, numerous studies have shown that it can supplement union and other activities in increasing employee bargaining power. Thus, while it may not be a full solution to improving corporate governance, it might be a start.

“Technological Unemployment” and the Writers’ Strike

photograph of writers' strike sign

On September 27th the monthslong writers’ strike concluded. Writers worried that ChatGPT and similar text generation technologies will lead to a future in which scripts are no longer written by humans, but manufactured by machines. Certainly corporations find the technology promising, with Disney, Netflix, and other major entertainment companies vacuuming up AI specialists. Holding signs saying “AI? More like AI-Yi-Yi!” and expressing similar sentiments, writers fought to secure their place in a rapidly changing technological landscape. And when the smoke cleared and the dust settled, the Writers Guild of America had won a surprisingly strong contract. While it does not prohibit the use of AI, it does ensure that human writers will not become the mere handmaidens of computerized text generators – editing and refining machine-generated content.

From the flood of garbage ChatGPT has sent into the internet to the multiplying complexities of intellectual property, Artificial Intelligence is, in many ways, a distinctly modern challenge. But it also follows a well-worn pattern: it continues a long legacy (and revives an old debate) regarding the labor market’s ability to absorb the impact of new technology.

John Maynard Keynes, the famous British economist, used the phrase “technological unemployment” to describe the mismatch between how quickly human labor could be replaced by technological innovation and how quickly new uses for human labor emerged. For Keynes, this was essentially a lag-time problem caused by rapid technological shifts, and it remains controversial whether “technological unemployment” causes an overall drop in the employment rate or just a momentary hiccup. Regardless, for workers who lose their jobs due to the adoption of new technology, whether jobs are being created just as fast in some other corner of the economy is rather beside the point. Because of this, workers are often anxious, even adversarial, when marked technological change makes an appearance at their workplace.

The most famous example is the Luddites, the British machine-smashing protestors of the early 1800s. With some textile manufacturers all too willing to use new technologies such as the mechanized loom to replace and undercut skilled laborers, workers responded by destroying these machines.

“Luddite” has since become a term to describe a broader resistance to (or ignorance of) technology. But the explanation that workers resistant to their employers adopting new technologies are simply anti-technology or gumming up the gears of progress out of self-interest is too simplistic. Has “progress” occurred just because there is a new product on the market?

New technology can have disparate effects on society, and few would assert that AI, social media, and smartphones deliver nothing but benefits. Even in cases where technological innovation improves the quality or eases the production of a particular good, it can be debatable whether meaningful societal progress has occurred. Companies are incentivized to simply pocket savings rather than passing on the benefits of technological advancement to their employees or customers. This represents “progress,” then, only if we measure according to shareholder value or executive compensation. Ultimately, whether technological advance produces societal progress depends on which particular technology we’re talking about. Lurking in the background are questions of who benefits and who gets to decide.

In part, the writers’ strike was over just this set of questions. Entertainment companies no doubt believe that they can cut labor costs and benefit their bottom line. Writers, however, can also benefit from this technology, using AI for editing and other purposes. It is the writers’ assertion that they need to be part of the conversation about how this technology – which affects their lives acutely – should be deployed, as opposed to a decision made unilaterally by company leadership. But rather than looking to ban or destroy this new technology, the writers were simply demanding guardrails to protect against exploitation.

In the same 1930 essay where he discussed technological unemployment – “Economic Possibilities for our Grandchildren” – Keynes raised the hope of a 3-hour workday. The economist watched the startling increases in efficiency of the early 20th century and posited, naturally enough, that a glut of leisure time would soon be upon us. Why exactly this failed to materialize is contentious, but it is clear that workers have not been the prime beneficiaries of productivity gains, in either leisure time or pay.

As Matthew Silk observed recently in The Prindle Post, many concerns about technology, and especially AI, stem not from the technology itself but from the benefits being brought to just a few unaccountable people. Even if using AI to generate text instead of paying for writers could save Netflix an enormous amount of money, the bulk of the benefits would ultimately accrue to a relatively small number of corporate executives and major shareholders. Most of us would, at best, get more content at a slightly cheaper price. Netflix’s writers, of course, lose their jobs entirely.

One take on this is that it is still good for companies to be able to adopt new technologies unfettered by their workers or government regulations. For while it’s true that the writers themselves are on the losing end, if we simply crunch the numbers, perhaps shareholder gains and savings to consumers outweigh the firing of a few thousand writers. Alternatively, though, one might argue that even if there is a net societal benefit in terms of resources, this is swamped by harms associated with inequality; that there are attendant problems with a deeply unequal society – such as people being marginalized from democratic political processes – that are not adequately compensated for merely by access to ever-cheaper entertainments.

To conclude, let us accept, for the sake of argument, that companies should be free to adopt essentially whatever technologies they wish. What should then be done for the victims of technological unemployment? Society may have a pat response blaming art majors and gender studies PhDs for their career struggles, but what about the experienced writing professional who loses their job when their employer decides to replace them with a large language model?

Even on the most hard-nosed analysis, technological unemployment is ultimately bad luck. (The only alternative is to claim that workers are expected to predict and adjust for all major technological changes in the labor market.) And many philosophers argue that society has at least some duty to help those suffering from things beyond their control. From this perspective, unemployment caused by rapid technological change should be treated more like disaster response and preparedness. It can either be addressed after the fact with a constructive response like robust unemployment insurance and assistance finding a new job, or addressed pre-emptively through something like universal basic income (a possibility recently discussed by The Prindle Post’s Laura Siscoe).

Whatever your ethical leanings, the writers’ strike has important implications for any livelihood.

Workplace Autocracy

image of businessman breaking rocks in a quarry

The popular fast-food chain Chipotle has agreed to pay over $300,000 for alleged violations of Washington D.C. child labor laws. Similar violations of child labor laws have occurred at several Sonics in South Carolina – the same state where just a few months earlier a labor contractor was fined over $500,000 for exploitation of migrant farm workers. In fields like finance and tech, where child labor is rare, there are other concerning employer practices, such as extensive digital surveillance of employee computers.

These stories drive home that, by and large, two groups of people decide what happens in the American workplace: management and government. Americans love democracy, but we leave this particular leadership preference at home when we go to work.

Workplaces are often explicitly hierarchical, and workers do not get to choose their bosses. Beyond pay, employers may regulate when we eat, what we wear, what hours we get to spend with family, and whether we have access to certain contraceptives. Worker choice usually boils down to a binary decision: do I take the job, or do I leave it? And concerns of money, family, and health insurance often put a thumb on the scale of this ostensibly personal decision.

A good boss may listen to their employees, or allow them significant independence, but this discretion is ultimately something loaned out, never truly resting with the employees. In some cases, either through their own position or through joining a collective organization like a union, a worker may have more leverage in negotiations with their employer, and thus more say in the workplace. Robust workers’ rights and worker protection laws can further strengthen employee agency and negotiating position. All of these, however, preserve an autocratic understanding of the workplace – the power of management is limited only by countermanding power from the government, organized labor, or individual workers.

The philosopher Elizabeth Anderson has referred to the incredible power that employers have over the lives of their employees as “private government.” And yet, oddly, Anderson argues, one rarely hears the same concerns voiced about the governing power of management as about government. We fret over government-imposed regulation on companies, but rarely company-imposed regulations on employees. We want our political leadership accountable to those they rule over, but not company leadership. We worry about government surveillance, but not internal workplace surveillance.

What might justify this seemingly incongruous state of affairs?

One answer is to appeal to human nature. We often hear that humans tend to be selfish and hierarchical, and therefore shouldn’t be surprised that our companies end up like this. But this response faces problems on several fronts. First, at best this would explain the autocratic workplace; it would not provide a moral justification for it. Philosophers generally reject the idea that something is good simply because it is natural. Second, biologists dispute this simplistic account of human nature. Humans are highly responsive to social context, and our underlying tendencies seem to be just as cooperative as they are competitive. Finally, it fails to explain why we are concerned by public government overreach but not private government overreach.

Alternatively, we may argue that because the firm is ultimately owned by someone, it is only appropriate that that person have control over their employees, as long as those employees freely entered into an agreement to work there. However, with pay and healthcare on the line and the generally skewed power between employers and employees, only the rarest of employees get to truly negotiate terms with their employer. Many undesirable features of a workplace, such as digital surveillance, mandatory arbitration, and noncompete clauses (see Prindle Post coverage), are so widespread as to be inescapable in certain industries. Consequently, it is specious to argue employers are entitled to treat employees as they see fit simply because employees agreed to work there – although this general argument would hold up better with stronger workers’ rights.

The final, and perhaps the most obvious, answer is simple efficiency. A more autocratic workplace may just work much better from an economic standpoint.

There are two things to note with this line of defense. First, it needs to have limits to preserve human rights and dignity. The American government undoubtedly finds the prohibition against unreasonable searches and seizures inefficient at times, but that does not justify scrapping it. By the same token, a company would presumably not be justified in whipping its employees, no matter how much it increased productivity. Limits need to be set on the power companies have over their workers – indeed, the government does this to some extent currently. However, this does not necessarily speak against workplace autocracy tout court. Second, the efficiency benefit of workplace autocracy depends on what it is being compared to. Convening an assembly of thousands of workers to make every decision is perhaps insufficiently “agile” for the modern economy, but there are many examples of businesses owned and run cooperatively by their employees. Even something as simple as allowing employees to vote out management could be worth considering. Workplaces may even benefit from more empowered and invested employees.

For Anderson herself, the current power of private government is best explained not by reasons, but by history. The rise of an obsession with efficiency, burgeoning faith in the liberatory power of free markets, and the collapse of organized labor in America have all conspired to let managerial power in the workplace grow unquestioned. Agree or disagree with her history, we can follow her in thinking that, like public government, private government is sufficiently impactful and complex to merit robust discussion about its justifications, dangers, preferred implementation, and needed checks and balances.

Glacier Northwest v Teamsters: Employer Property and Worker Rights

photograph of construction workers on break at job site

Tensions had been simmering between Glacier Northwest and its employees. On August 17th, 2017, drivers for the company loaded up their cement-mixing trucks with freshly made cement and drove off for delivery. Mid-delivery, a strike was called, and rather than deliver cement as expected, 16 of the truck drivers drove back to the yard. They kept the drums running, left their vehicles, and the strike was on. Seven of the 16 drivers expressly notified management they would be returning to the yard with loaded trucks. While none of the trucks were damaged, the cement was wasted, and Glacier Northwest sued Teamsters Local 174, its employees’ union, for $11,000 in a Washington court. The case ultimately made it before the Supreme Court.

This seemingly minor lawsuit hinges on the question of whether labor disputes should be handled by the National Labor Relations Board or state courts. The NLRB has expertise on labor matters and is generally viewed as more supportive of workers than many state courts — although this somewhat depends on the president, as the Board members are presidential appointees. But, on June 1st, the Supreme Court released its decision in Glacier Northwest, Inc. v Teamsters, placing the case back in the hands of the Washington State Supreme Court, with its blessing to continue the lawsuit in state court.

This complicated decision in a complicated case continues the trend of the current court ruling in opposition to organized labor, most notably in Janus v. AFSCME. (Although as many commentators have noted, the Glacier Northwest ruling falls short of the most anti-labor ruling that could have come out of the case – one which workers’ rights advocates worried would unleash a flood of state-level litigation in response to strikes.) As it stands, it still provides a path for employers to sue their employees for damages in state court if they fail to take reasonable precautions to protect their employer’s property from foreseeable and imminent harm caused by a strike. How wide or narrow this path is remains unclear.

Lost in the tortuous proceduralism of Glacier Northwest v. Teamsters is a more fundamental ethical question: how should employers’ property rights be weighed against workers’ right to organize and strike?

An extreme perspective that strongly prioritizes property would be that workers are “allowed” to organize and even strike (as in, such actions would be legal), but they are on the hook for any losses and property damage that result. The consequences of striking would be enormously onerous to workers, and employers would be entitled to legal remedies to “reverse” harms caused by a strike. Under such a standard, strikes would be both risky and less impactful. The takeaway is that for the legal right to strike to be meaningful, workers must have some protection from the economic damages caused by their actions.

One somewhat legalistic approach essentially dodges the question of balancing property rights and workers’ rights. The idea here is that striking workers are simply not working, and therefore owe nothing more to the company than if they were out sick or a random person on the street. It follows then that striking workers are not responsible for the harms that result from a strike, as long as those harms result directly from stopping working. If grocery store workers go on strike, and in addition to a bunch of lost sales all the fruits and vegetables spoil, they would owe nothing to the company. Why should this be their responsibility any more than the responsibility of an employee that stayed home sick?

What workers cannot do under this perspective is take any steps to cause property damage to their employer beyond damage that results from not working. Workers cannot go on strike, grab torches, and burn down the factory.

This is in fact very close to how the National Labor Relations Board views damages that result from a strike. It can quickly get deep in the weeds. (Which Justice Jackson incidentally raised as an argument for letting the National Labor Relations Board handle these cases rather than state courts.) What about situations that are strategically timed or contrived to produce maximum damage? Does it matter whether the situation is contrived as opposed to merely carefully timed? Could workers walk off the job partway through a dangerous smelting process and let the factory burn down? Or, as came up frequently during oral argument before the Supreme Court, could ship workers sail a boat into the middle of the river and then abandon ship?

The National Labor Relations Board’s solution – and this language is reflected in Glacier Northwest v. Teamsters – was to say employees must take “reasonable precautions” to avoid these kinds of serious harms to persons and property. This leads to its own complex discussion of what counts as reasonable precautions, and to which specific harms they should apply.

A different approach would be to say that the balance between property rights and workers’ rights is simply the wrong issue. What we should care about is not these ostensibly competing sets of rights, but rather the balance of power in the workplace. The idea here is that workers’ rights and protections don’t exist on some definitive list, but in relation to what is required for workers to have agency at work. The philosopher Elizabeth Anderson has noted that uses (and abuses) of power are allowed in the workplace which we would find intolerable from the government. The question then becomes what tools workers need in the workplace. (To Congress’s credit, a goal of equal bargaining power was explicitly included in the National Labor Relations Act.)

Strikes are large and flashy and can lead to a myopic analysis where the harms of striking are on full display, but the harms that made the strike necessary – from underpayment, to bad-faith bargaining, to poor working conditions, to undignified treatment – are invisible. Workers may not have direct legal redress for these harms. Instead the assumption is that the right to organize and strike provides workers tools to resolve these harms on their own.

Crucially, the intent of a strike is not primarily to cause economic damage to the employer. The intent of a strike is to bring the employer to the bargaining table and achieve certain goals in the workplace. The leverage of a strike is that it causes economic or reputational harms to the employer. What matters is that companies know their employees are allowed to engage in strategically timed strikes. When workers are generally weak in comparison to their employers, as is the case in modern America, it may make ethical sense to be generous with what strike tactics are allowed, not to hurt employers, but to provide employees with the tools to negotiate with their employers as equals and prevent harms in the workplace.

A final, more human-centric approach would be to simply say that workers’ rights should have priority over their employer’s property rights. The thrust of the approach is that worker agency, dignity, and self-determination are more important than employer property. This intuition is heightened in cases with larger corporate employers where no one suffers major personal harm associated with the loss of company profits or property. The intuition becomes ethically grayer for small, closely held businesses and more extreme property loss. Note though that the consideration here is the human harms associated with property loss, not the property loss as such.

On this approach, the primary reason a strike should not be strategically timed to burn down a factory or leave a boat stranded in a river is because such an action endangers people, not property. (This leads to its own balancing question on strikes that can cause public harm, such as nursing strikes, as has been previously discussed in the Prindle Post.) Placing workers’ rights over property could still lead to legal complexity in individual cases, but it would set clear priorities. Courts should protect the agency and dignity of workers.

Right-to-Work Laws and Workers’ Rights

photograph of worker's tools arranged with US flag background

Once a bastion of organized labor, Michigan has had a controversial right-to-work law on the books since 2012. On Tuesday, March 14th, the Michigan Senate approved a bill that would repeal it. Democratic Governor Gretchen Whitmer has already stated her intent to sign. With surging union approval ratings, some labor supporters cautiously hope this could signal broader pushback against the decades-long right-to-work initiative.

But what exactly are right-to-work laws, what case can be made for them, and why are they opposed by unions which generally support workers’ rights?

Right-to-work laws bear little relationship to a more colloquial understanding of the right to work as the right to seek and engage in productive employment. The term comes from a 1941 editorial by William Ruggles, an editor of The Dallas Morning News. Ruggles’s “right to work” was the right to not have to join a union as a condition of employment. His ideal was spun into a multi-state campaign by the corporate lobbyist turned right-wing political activist Vance Muse. This is still generally what right-to-work means in the United States.

In the words of the National Right To Work Legal Defense Foundation:

The Right to Work principle–the guiding concept of the National Right to Work Legal Defense Foundation–affirms the right of every American to work for a living without being compelled to belong to a union. Compulsory unionism in any form–“union,” “closed,” or “agency” shop–is a contradiction of the Right to Work principle and the fundamental human right that the principle represents. 

More precisely, right-to-work laws regulate the kinds of agreements that can be made between unions and employers known as union security agreements. These security agreements require certain measures of union support as a condition of employment. The typical ones are the closed shop, where only members of a certain union will be hired; the union shop, where employees must join the union as a condition of employment; the agency shop, where employees who choose not to join the union have to pay a fee to cover those union activities that they benefit from; and the open shop which imposes no conditions. (Closed shop agreements were made illegal by the 1947 Taft-Hartley Act.)

Most contemporary right-to-work laws – currently implemented in 27 states – forbid union shops and agency shops. Union membership cannot be a condition of employment, and non-union members cannot be required to pay agency fees.

The ban on agency fees has generated especially strident opposition from unions. Under the American policy of exclusive representation a union is still required to protect and negotiate on behalf of those employees who choose not to join it. Unions charge non-members agency fees, also known as fair share fees, to defray the cost of representing them. Banning agency fees creates an incentive for workers not to join the union, as they can still reap many of the benefits.

Numbers matter for unions. Employers may be more responsive to concerns about pay, benefits, and safety when many workers come together and voice them. It is uncontroversial that right-to-work laws harm unions, and labor organizers argue this is the true purpose of such laws.

According to their advocates, right-to-work laws have two major selling points. The first is that they secure the rights of association/contract of the individual worker in contrast to “compulsory unionism.” The second is that right-to-work laws help the broader economy by attracting businesses to states. These very arguments were made by advocates of the Michigan right-to-work law, such as Republican State Senator Thomas Alberts.

Ostensibly, on freedom of association grounds, workers should have the right to join or not to join unions. On freedom of contract grounds, the state should not be interfering with agreements between workers and employers. However, these defenses are incoherent on their face. As multiple scholars have pointed out — including Peter Vallentyne in these very pages — union membership or agency fees are simply a condition of employment, and all sorts of conditions of employment are allowed, provided both parties agree to them — from drug tests to uniforms. If an employee does not like the particular conditions on offer, the freedom of contract/association narrative goes, then they can choose a different job.

One can coherently argue the additional options provided by right-to-work laws are good — it is good for employees to have the option to join companies with or without union membership and with or without agency fees.

But right-to-work laws are not protecting the right to association or contract. Nor does so-called “compulsory unionism” appear obviously more compulsory than other work requirements, even if union membership is perhaps a more substantial requirement than uniforms.

What about the economic argument for right-to-work laws? Are they simply good policy, either for workers or for the state economy? Here the story is more complicated, and it is challenging to isolate the effects of right-to-work laws from the general political and economic background of states. On the one hand, it is often found that right-to-work laws negatively impact wages. On the other, some studies find that by making states more attractive for businesses, overall state economic benefits compensate for potential lost wages.

The economic argument is treacherous ground though. For the essential claim is that by decreasing the power of workers and unions, states can lure businesses away from other states with more robust labor protections — a race to the bottom. An equally effective response would be to simply ban right-to-work laws at the national level, as some legislation proposes to do.

These arguments notwithstanding, the debate at the heart of right-to-work is really a larger question concerning organized labor. There is a compelling historical case that the right-to-work movement in the United States has predominantly been about limiting union power and only nominally about rights or ethics. Similarly, for union supporters the main argument against right-to-work laws has always been that they hurt organized labor.

While requiring union membership as a condition of employment need not violate workers’ rights, most organizers would agree that it is preferable for workers to join and form unions independently. The agency shop, in which employees do not have to join the union but have to pay for some of its services, is especially antithetical to the historical intent of unionization. A union, after all, is an organization of and for workers; it is not simply a paid negotiator. Some problems of American labor, such as the tensions caused by exclusive representation, do not occur in many European countries, which operate under a very different model. Perhaps there is room for a deeper rethink of what legal landscape does best by the American worker.

Can We Justify Non-Compete Clauses?

photograph of a maze of empty office cubicles

In the State of the Union, President Joe Biden claimed that 30 million workers have non-compete clauses. This followed a proposal by the Federal Trade Commission to ban these clauses. Non-compete clauses normally contain two portions. First, they prohibit employees from using any proprietary information or skills during or after their employment. Second, they forbid employees from competing with their employer for some period of time after their employment. “Competing” normally consists of working for a rival company or starting one’s own business in the same industry.

Non-compete agreements are more common than one might expect. In 2019, the Economic Policy Institute found that about half of employers surveyed use non-compete agreements, and 31.8% of employers require these agreements for all employees.

While we often think of non-compete agreements as primarily being found in cutting-edge industries, they also reach into less innovative industries. For instance, one in six workers in the food industry have a non-compete clause. Even fast-food restaurants have used non-compete clauses.

Proponents of non-compete clauses argue that they are important to protect the interests of employers. The Bureau of Labor Statistics estimates that, in 2021, the average turnover rate (the rate at which employees leave their positions) in private industry was 52.4%. Although one might think the pandemic significantly inflated these numbers, rates from 2017-2019 were slightly below 50%. Businesses spend a significant amount of money training new employees, so turnover hurts the company’s bottom line  – U.S. companies spent over $100 billion on training employees in 2022. Additionally, the transfer of skilled and knowledgeable employees to competitors, especially when those skills and knowledge were gained at their current position, makes it more difficult for the original employer to compete against rivals.

However, opponents argue that non-compete clauses depress the wages of employees. Being prohibited from seeking new employment leaves employees unable to find better-paying positions, even if just for the purposes of bargaining with their current employer. The FTC estimates that prohibiting non-compete clauses would cause yearly wage increases in the U.S. of between $250 and $296 billion. Further, for agreements that extend beyond their employment, departing employees may need to take “career detours,” seeking jobs in a different field. This undoubtedly affects their earnings and makes finding future employment more difficult.

It is worth noting that these arguments are strictly economic. They view the case for and against non-compete clauses exclusively in terms of financial costs and benefits. This is certainly a fine basis for policy decisions. However, sometimes moral considerations prevail over economic ones.

For instance, even if someone provided robust data demonstrating that child labor would be significantly economically beneficial, we would find this non-compelling in light of the obvious moral wrongness. Thus, it is worthwhile to consider whether there’s a case to be made that we have moral reason to either permit or prohibit non-compete agreements regardless of what the economic data show us.

My analysis will focus on the portion of non-compete clauses that forbids current employees from seeking work with a competitor. Few, I take it, would object to the idea that companies should have the prerogative to protect their trade secrets. There may be means to enforce this without restricting employee movement or job seeking, such as through litigation. Thus, whenever I refer to non-compete agreements or clauses, I mean those which restrict employees from seeking work from, or founding, a competing firm both during, and for some period after, their employment.

There’s an obvious argument for why non-compete clauses ought to be permitted – before an employee joins a company, they and their employer reach an agreement about the terms of employment, which may include these clauses. Don’t like the clause? Then renegotiate the contract before signing or simply find another job.

Employers impose all sorts of restrictions on us, from uniforms to hours of operation. If we find those conditions unacceptable, we simply turn the job down. Why should non-compete agreements be any different? They are merely the product of an agreement between consenting parties.

However, agreements are normally unobjectionable only when the parties enter them as equals. When there’s a difference in power between parties, one may accept terms that would be unacceptable between equals. As Evan Arnet argues in his discussion of a prospective right to strike, a background of robust workers’ rights is necessary to assure equal bargaining power and these rights are currently not always secure. For most job openings, there are a plethora of other candidates available. Aside from high-level executives, few have enough bargaining power with their prospective employer to demand that a non-compete clause be removed from their contract. Indeed, even asking for this could cause a prospective employer to move on to the next candidate. So, we ought to be skeptical of the claim that workers freely agree to non-compete clauses – there are multiple reasons to question whether workers have the bargaining power necessary for this agreement to be between equals.

One might instead appeal to the long-run consequences of not allowing non-compete agreements. The argument could be made as follows. By hiring and training employees, businesses invest in them and profit from their continued employment. So perhaps the idea is that, after investing in their employees, a firm deserves to profit from their investment and thus the employee should not be permitted to seek exit while still employed. Non-compete clauses are, in effect, a way for companies to protect their investments.

Unfortunately, there are multiple problems with this line of reasoning. The first is that it would only apply to non-compete agreements in cases where employees require significant training. Some employees may produce profit for the company after little to no training. Second, this seems to only justify non-compete clauses up to the point when the employee has become profitable to the employer – not both during and after employment. Third, some investments may simply be bad investments. Investing is ultimately a form of risk taking which does not always pay off. To hedge their bets, firms can instead identify candidates most likely to stay with the company, and make continued employment desirable.

Ultimately, this argument regarding what a company “deserves” lays bare the fundamental moral problem with non-compete agreements: they violate the autonomy of employees.

Autonomy, as a moral concept, is about one’s ability to make decisions for oneself – to take an active role in shaping one’s life. To say that an employee owes it to her employer to keep working there is to say that she does not deserve autonomy over what she does with a third of her waking life. It says that she no longer has the right to make unencumbered decisions about what industry she will work in, and who she will work for.

And this is, ultimately, where we see the moral problem for non-compete clauses. Even if they do not suppress wages, non-compete agreements restrict the autonomy of employees. Unless someone has a large nest egg saved up, they may not be able to afford to quit their job and enter a period of unemployment while they wait for a non-compete clause to expire – especially since voluntarily quitting may disqualify them from unemployment benefits. By raising the cost of exit, non-compete clauses may eliminate quitting as a viable option. As I have discussed elsewhere, employers gain control over our lives while we work for them – non-compete agreements aim to extend this control beyond the period of our employment, further limiting our ability to choose.

As a result, even if there were not significant potential financial benefits to eliminating non-compete agreements, there seems to be a powerful moral reason to do so. Otherwise, employers may restrict the ability of employees to make significant decisions about their own lives.

Unions and Worker Agency

photograph of workers standing together, arms crossed

The past few years have seen a resurgence of organized labor in the United States, with especially intense activity in just the past few months. This includes high profile union drives at Starbucks, Amazon, the media conglomerate Condé Nast, and even MIT.

Parallel to this resurgence is the so-called “Great Resignation.” As the frenetic early days of the pandemic receded into the distance, workers began quitting at elevated rates. According to the Pew Research Center, the three main reasons for quitting were low pay, a lack of opportunity for advancement, and feeling disrespected. Former U.S. Secretary of Labor Robert Reich even analogized it to a general strike, in which workers across multiple industries stop work simultaneously.

Undoubtedly, the core cause of both the Great Resignation and growing organized labor is the same – dissatisfaction with working conditions – but they are also importantly different. The aim of quitting is to leave the workplace; the aim of unions and strikes is to change it. They do this by trying to shift the balance of power in the workplace and give more voice and agency to workers.

Workplaces are often highly hierarchical with orders and direction coming down from the top, controlling everything from mouse clicks to uniforms. This has even led some philosophers, like the noted political philosopher Elizabeth Anderson, to refer to workplaces as dictatorships. She contends that the workplace is a blind spot in the American love for democracy, with the American public confusing free markets with free workers, despite the often autocratic nature of the workplace. Managers may hold almost all the power in the workplace, even in cases where the actual working conditions themselves are good.

Advocates of greater workplace democracy emphasize “non-domination,” or that at the very least workers should be free from arbitrary exercises of managerial power in the workplace. While legal workplace regulations provide some checks on managerial power, the fact remains that not everything can or should be governmentally regulated. Here, worker organizations like unions can step in. This is especially important in cases where, for whatever reasons, workers cannot easily quit.

Conversations about unionization generally focus on wages and benefits. Unions themselves refer to the advantage of unionization as the “union difference,” and emphasize the increases in pay, healthcare, sick leave, and other benefits compared to non-unionized workplaces. But what causes this difference? By allowing workers to bargain a contract with management, unions enable workers to be part of a typically management-side discussion about workplace priorities. Employer representatives and union representatives must sit at the same table and come to some kind of agreement about wages, benefits, and working conditions. That is, for good or for ill, unions at least partially democratize the workplace – although it is far from full workplace democracy, in which workers would democratically exercise managerial control.

Few would hold that, all things being equal, workers should not have more agency in the workplace. More likely, their concern is either that worker collectives like unions come at the cost of broader economic interests, or that unions specifically do not secure worker agency but in fact saddle workers with even more restrictions.

The overall economic effect of unions is contentious, but there is little evidence that they hobble otherwise productive industries. A 2019 survey of hundreds of studies on unionization found that while unionization did lower company profits, it did not negatively impact company productivity, and it decreased overall societal inequality.

More generally, two assumptions must be avoided. The first is that the interests of the workers are necessarily separate from the interests of the company. No doubt company interests do sometimes diverge from union interests, but at a minimum unionized workers still need the company to stay in business. This argument does not apply to public sector unions (government workers), but even there, unions can arguably lead to more invested workers and stronger recruitment.

The second assumption to avoid is that management interests are necessarily company interests. Just as workers may sometimes pursue their personal interests over broader company interests, so too can management. This concern is especially acute when investment groups, like hedge funds, buy a company. Their incentive is to turn a profit on their investment, whether that is best achieved by the long-term health of the company or by selling it for parts. Stock options were historically proposed as a strategy to tie the personal compensation of management to the broader performance of a company. This strategy is limited, however, as what it more precisely does is tie management compensation to the value of stock, which can be manipulated in various ways, such as through stock buybacks.

Beyond these economic considerations, a worker may also question whether their individual agency in the workplace is best represented by a union. Every organization is going to bring some strictures with it, and this can include union regulations and red tape. The core argument on behalf of unions as a tool for workplace agency is that due to asymmetries of power in the workplace, the best way for workers to have agency is collective agency. This is especially effective for goals that are shared widely among workers, such as better pay. Hypothetically, something like a fully democratic workplace (or having each individual worker well positioned to be part of company decision making) would be better for worker agency than unions. The question of whether these alternatives would work is more practical than ethical.

There can be other tensions between individual and collective agency. In America specifically, unions have been viewed as highly optional. The most potent union relationship is a “closed shop,” in which a union and company agree to only hire union workers. Slightly less restrictive is a “union shop,” under which all new workers must join the union. The closed shop was made illegal in the United States by the 1947 Taft-Hartley Act, which restricted the power of unions in several ways. State-level “right to work” laws go even further, forbidding unions from negotiating contracts that automatically deduct union representation fees from employees. The argument is one of personal freedom – that if someone is not in the union they should not have to pay for it. The challenge is that the union still has to represent this individual, who benefits from the union they are not paying for. This invites broader questions about the value of individual freedoms, and how they must be calibrated with respect to the collective good.

 

The author is a member of Indiana Graduate Workers Coalition – United Electrical Workers, which is currently involved in a labor dispute at Indiana University Bloomington.

“Severance,” Identity and Work

split image of woman worrying

The following piece discusses the series Severance. I avoid specific plot details. But if you want to go into the show blind, stop reading now.

Severance follows a group of employees at Lumon Industries, a biotech company of unspecified purpose. The main characters have all received a surgery before starting this job. Referred to as the “severance” procedure, this surgery causes a split in the patient’s personality. After surgery, patients awaken to find that while they have factual memories, they have no autobiographical memories – one character cannot remember her name or the color of her mother’s eyes but remembers that Delaware is a state.

However, the severance procedure does not cause irreversible amnesia. Rather, it creates two distinct aspects of one’s personality. One, called the outie, is the individual who was hired by Lumon and agreed to the procedure. When she goes to work, though, the outie loses consciousness and another aspect, the innie, awakens. The innie has no shared memories with the outie. She comes to awareness at the start of each shift, and the last thing she remembers is walking to the exit the previous day. Her life is an uninterrupted sequence of days at the office and nothing else.

Before analyzing the severance procedure more closely, let us take a few moments to consider some trends about work. As of 2017, 2.6 million people in the U.S. worked on-call, stopping and starting at a moment’s notice. Our smartphones leave us constantly vulnerable to emails or phone calls that pull us out of our personal lives. The pandemic and the corresponding need for remote, at-home work only accelerated the blurring of lines between our personal lives and spaces, and our work lives. For instance, as workplaces have gone digital, people have begun creating “Zoom corners.” Although seemingly innocuous, practices like these involve ceding control of some of our personal space to be more appealing to our employers and co-workers.

Concerns like these lead Elizabeth Anderson to argue in Private Government that workplaces have become governments. Corporate policies control our behavior when we are on the clock, and our personal activities, which can be easily tracked online, may be subject to the scrutiny of our employers. Unlike with public, democratic institutions, where we can shape policy by voting, the vast majority of workers have no say in how their workplace is run. Hence this control is totalitarian. Further, “low skilled” and low-wage workers – because they are deemed more replaceable – are even more subject to their employer’s whims. This increased vulnerability to corporate governance carries with it many negative consequences, on top of those already associated with low income.

Some consequences may be due to a phenomenon Karl Marx called alienation. When working you give yourself up to others. You are told what to produce and how to produce it. You hand control of yourself over to someone or something else. Further, what you do while on the clock significantly affects what you want to do for leisure; even if you loved gardening, surely you would do something else to relax if your job was landscaping. When our work increasingly bleeds into our personal lives, our lives cease to be our own.

So, we can see why the severance procedure would have appeal. It promises us more than just balance between work and life: it makes it impossible for work to interfere with your personal life. Your boss cannot email you with questions about your work on the weekend, and you cannot be asked to take a project home, if you literally have no recollection of your time in the office. To ensure that you will always leave your work at the door may sound like a dream to many.

Further, one might argue that the severance procedure is just an exercise of autonomy. The person agreeing to work at Lumon agrees to get the procedure done and we should not interfere with this choice. At best, it’s like wearing a uniform or following a code of conduct; it’s just a condition of employment which one can reject by quitting. At worst, it’s comparable to our reactions to “elective disability”; we see someone choosing a medical procedure that makes us uncomfortable, but our discomfort does not imply someone should not have the choice. We must not interfere with people’s ability to make choices that only affect themselves, and the severance procedure is such a choice.

Yet the show itself presents the severance procedure as morally dubious. Background TV programs show talking heads debating it, activists known as the “Whole Mind Collective” are campaigning to outlaw severance, and when others learn that the main character, Mark, is severed, they are visibly uncomfortable and uncertain what to say. So, what is the argument against it?

To explain what is objectionable about the severance procedure, we need to consider what makes us who we are. This is an issue referred to in philosophy as “personal identity.” In some sense, the innie and the outie are two parts of the same whole. No new person is born because of the surgery and the two exist within the same human organism; they share the same body and the same brain.

However, it is not immediately obvious that people are simply organisms. A common view is that a significant portion, if not all, of our identity deals with psychological factors like our memories. To demonstrate this, consider a case that Derek Parfit presented in Reasons and Persons. He refers to this case as the Psychological Spectrum. It goes roughly as follows:

Imagine that a nefarious surgeon installed a microchip in my brain. This microchip is connected to several buttons. As the surgeon presses each button, a portion of my memories changes to Napoleon Bonaparte’s memories. When the surgeon pushes the last button, I would have all of, and only, Napoleon’s memories.

What can we say about this case? It seems that, after the surgeon presses the last button, Nick no longer exists. It’s unclear when I stopped existing – after a few buttons, there seems to be a kind of weird Nick-Napoleon hybrid, who gradually goes full Napoleon. Nonetheless, even though Nick the organism survives, Nick the person does not.

And this allows us to see the full scope of the objection to the severance procedure. The choice is not just self-regarding. When one gets severed, they are arguably creating a new person. A person whose life is spent utterly alienated. The innie spends her days performing the tasks demanded of her by management. Her entire life is her work. And what’s more troubling is that this is the only way she can exist – any attempts to leave will merely result in the outie taking over, having no idea what happened at work.

This reveals the true horror of what Severance presents to us. The protagonists have an escape from increasing corporate intrusion into their personal lives. But this release comes at a price. They must wholly sacrifice a third of their lives. For eight hours a day, they no longer exist. And in that time, a different person lives a life under the thumb of a totalitarian government she has no bargaining power against.

The world of Severance is one without a good move for the worker. She is personally subject to private government which threatens to consume her whole life, or she severs her work and personal selves. Either way, her employer wins.

Hybrid Workplaces and Epistemic Injustice

photograph of blurred motion in the office

The pandemic has, among other things, been a massive experiment in the nature of work. The percentage of people who worked from home either part- or full-time jumped massively over the past year, not by design but by necessity. We are, however, nearing a time in which people may be able to return to working conditions that existed pre-pandemic, and there have thus been a lot of questions about what work will look like going forward. Recent studies have indicated that while many people want to continue working from home at least some of the time, many also miss face-to-face interactions with coworkers, not to mention having a reason to get out of the house every once in a while. Businesses may also have financial incentives to have their employees working from home more often in the future: having already invested in the infrastructure needed to have people work from home over the past year, businesses could save money by not having to pay for the space for all their employees to work in-person at once. Instead of having everyone return to the office, many businesses are thus contemplating a “hybrid” model, with employees splitting their time between the office and home.

While a hybrid workplace may sound like the best of both worlds, some have expressed concerns with such an arrangement. Here’s a big one: those who are able to go into the office more frequently will be more visible, and thus may be presented with more opportunities for advancement than those who spend most of their working hours at home. There are many reasons why one might want or need to work from home more frequently, but one significant reason is that one has obligations to care for children or other family members. This may result in greater gender inequalities in the workplace, as women who take on childcare responsibilities will especially be at a disadvantage in comparison to single men who are able to put in a full workweek in the office.

Hybrid workplaces thus risk creating injustices, in which some employees will be unfairly disadvantaged, even if it is not the explicit intention of the employer. While these potential disadvantages have been portrayed in terms of opportunities for advancement, here I want to discuss another potential form of disadvantage which could result in injustices of a different sort, namely epistemic injustices.

Epistemic injustices are ones that affect people in terms of their capacities as knowers. For instance, if you know something but are unfairly treated as if you don’t, or are not taken as seriously as you should be, then you may be experiencing an epistemic injustice. Or, you might be prevented from gaining or sharing knowledge, not because you don’t have anything interesting to contribute, but because you’re unfairly being left out of the conversation. While anyone can experience epistemic injustice, marginalized groups that are negatively stereotyped and underrepresented in positions of power are especially prone to be treated as lacking knowledge when they possess it, and to be left out of opportunities to gain knowledge and share the knowledge they possess.

We can see, then, how hybrid workplaces may contribute to a disparity not only in terms of opportunities for advancement, but also in terms of epistemic opportunities. These are not necessarily unrelated phenomena: for instance, if those who are able to put in more hours in the office are more likely to be promoted, then they will also have more opportunities to gain and share knowledge pertinent to the workplace. There may also be more subtle ways in which those working from home can be left out of the conversation. For instance, while one can still be in communication with their fellow employees from home (via virtual meetings, chats, etc.), they will miss out on the more organic interactions that occur when people are working face-to-face. It tends to be easier to just walk over to a coworker if you have a question than to schedule a Zoom call, a convenience that can result in some people being asked for their input much more frequently than others.

Of course, those working in hybrid environments do not need to have any malicious intent to contribute to epistemic injustices. Again, consider a situation in which you and a colleague are able to go back to the office on a full-time basis. You are likely to acquire a lot more information from that colleague who you are able to have quick and easy conversations with than from the person working from home whose schedule you need to work around. You might not think that one of your colleagues is better than the other; it’s just easier to talk to the person who’s right over there. What ends up happening, however, is that those who need to work from home more often are gradually going to be left out of the conversation, which will prevent them from being able to contribute in the same way as those working in the office.

These problems are not necessarily insurmountable. In Wired, Sid Sijbrandij, CEO of GitLab, writes that, “Unquestionably sticking to systems and processes that made an office-based model successful will doom any remote model to fail,” and mentions a number of measures that his company has taken to attempt to help remote workers communicate with one another, including “coffee chats” and “all-remote talent shows.” While I cannot in good conscience condone remote talent shows, it is clear that if businesses are going to have concerns of epistemic justice in mind, then making sure that there are more opportunities for open lines of communication, including the possibility of informal conversations with remote workers, will be crucial.

Workers’ Well-Being and Employers’ Duties of Care

photograph of amazon warehouse

If you’ve been working from home during the pandemic then there’s a good chance your employer has sent you an email expressing their concern about your well-being and general level of happiness. Perhaps they’ve suggested some activities you could perform from the comfort of your own home working space, or offered Zoom classes or workshops on things like meditation, exercise, and mindfulness. While most likely well-intentioned, these kinds of emails have become notorious for being out of touch with the scale of the stresses that workers face. It is understandable why: it is, after all, unlikely that a half-hour mindfulness webinar is going to make a dent in the stress accumulated while living in a pandemic over the last year.

It goes without saying that the pandemic has taken a toll on many people’s physical and mental health. And while employers certainly have obligations towards their employees, do they have any specific duties to try to improve the well-being of their employees that has taken a hit during the pandemic?

In one sense, employers clearly do have some obligations towards the happiness and well-being of their employees. Consider, for instance, a recent scandal involving Amazon: the company baldly denied a statement that some Amazon workers were under so much pressure at their jobs that they were unable to take bathroom breaks, and were forced to urinate in bottles instead. A great deal of evidence then quickly accumulated that such practices were, in fact, taking place, and Amazon was forced to issue a weak conciliatory reply. It is reasonable in this case to say that Amazon has put their workers in situations in which their well-being is compromised, and they have an obligation to treat them better.

“Don’t make your workers pee in bottles” is an extremely low bar to clear, and it is an indictment of our times that it has to be said at all. People working from home offices, however, are typically not in the same circumstances: while they likely have access to washrooms, their stressors will instead be those that stem from isolation, uncertainty, and many potential additional burdens in the form of needing to care for themselves and others. So, as long as an employer is allowing its employees to meet a certain minimal standard of comfort, and assuming that those working from home during the pandemic meet this standard, do they have any additional obligations to care for employees’ happiness and well-being?

One might think that the answer to this question is “no.” One reason why we might think this is that we typically regard one’s own happiness as being one’s own responsibility. Indeed, much of the recent narrative on happiness and well-being emphasizes the extent to which we have control over these aspects of our lives. For example, consider a passage from a recent Wall Street Journal article, entitled “Forget What You Think Happiness Is,” that considers how the pandemic has impacted how we conceive of happiness:

“Mary Pipher, clinical psychologist and author of ‘Women Rowing North’ and ‘Reviving Ophelia,’ says the pandemic underscored what she long believed: that happiness is a choice and a skill. This past Christmas, she and her husband spent the day alone in their Lincoln, Neb., home, without family and friends, for the first time since their now adult children were born. ‘I thought, ‘What are we going to do?’ We went out for a walk on the prairie and saw buffalo. I ended up that day feeling really happy.’”

If happiness is a choice then it is not a choice that I can make for you; if happiness is a skill then it’s something you have to learn on your own. Perhaps I can help you out – I can help you learn about happiness activities like gratitude exercises, meditation, and mindfulness – but the rest is then up to you. If this is all we’re able to do for someone else, then perhaps the mindfulness webinars really are all we are entitled to expect from our employers.

There are a couple of worries here. First, to say that “happiness is a choice and a skill” is clearly a gross oversimplification: while serendipitous buffalo sightings will no doubt lift the spirits of many, happiness may not be so easily chosen for those who suffer from depression and anxiety. Second, while there is a lot of hype around the “skills” involved in acquiring happiness, empirical studies of gratitude interventions (as well as the notion of “gratitude” itself), meditation, and mindfulness (especially mindfulness, as discussed here, and here), have had mixed results, with researchers expressing concerns over vague concepts and a general lack of efficacy, especially when it comes to those who are, again, suffering from depression and anxiety. Of course, such studies concern averages across many individuals, meaning that any or all of these activities may work for some while failing to work for others. If you find yourself a member of the former group, then that’s great. A concern, however, is that claims that there are simple skills that can increase happiness are still very much up for debate within the psychological community.

Of course, for those working from home, the roots of their decreased happiness will likely be far more practical; a guided meditation session over Zoom will not, for instance, ameliorate one’s childcare needs. Here, then, is the second worry: there are potentially much more practical measures that employers could take to help increase the happiness and well-being of employees.

For comparison, consider a current debate occurring in my home province of Ontario, Canada: while the federal government has made certain benefits available to those who are forced to miss work due to illness or the need to quarantine, many have called on the provincial government to create a separate fund for paid sick days. The idea is that since the former is a prolonged process – taking weeks or months for workers to receive money – it disincentivizes workers from taking days off when they may need to. This can result in more people going into work while sick, which is clearly something that should be minimized. The point, then, is that while recommendations for how you can exercise at your desk may be popular among employers, it seems that it would be much more effective to offer more practical solutions to problems of employee well-being, e.g., allowing for more time off.

The question of what an employer owes its employees is, of course, a complex one. While there are clear cases in which corporations fail to meet even the most basic standard of appropriate treatment of their employees – e.g., the recent Amazon debacle – it is up for debate just how much is owed to those with comparatively much more comfortable jobs working from home. Part of the frustration, however, no doubt stems from the fact that if employers are, in fact, concerned about employee well-being, then there are probably better ways of increasing it than offering yet another mindfulness webinar.

The Short- and Long-Term Ethical Issues of Working from Home

photograph of an empty office looking out over city

The COVID-19 pandemic has resulted in a shift in working habits, as almost half of the U.S. workforce is now able to work from home. This has led many to ask whether it might be the beginning of a more permanent change in working habits. Such a move carries significant ethically-salient benefits and drawbacks regarding urban development, climate change, mental health, and more.

Despite the apparent novelty of the idea of having many people permanently working from home, this was the norm for most people for most of human civilization. It was the industrial revolution and 18th- and 19th-century advances in travel which encouraged our need for a separate place of work. Two hundred years ago most people in North America lived and worked on farms, and artisans making textiles and other goods largely worked from home. The steam engine allowed for centralized locations capable of efficient mass production. Early industrial production even still relied on the “putting-out system,” where centralized factories would make goods and then subcontract the finishing work on the item to people who worked from home. In other words, the concept of “going to work” every day is a relatively recent invention in human history.

This change had many far-reaching effects. The need to be close to work resulted in urbanization. In the United States, the share of the population living in urban areas jumped from 6% in 1800 to 40% in 1900. Urban development and infrastructure followed suit. Artisans who once worked for a price for their goods now worked for a wage for their time. Work that was once governed by sunlight became governed by the clock. Our political and social norms all changed as a result in ways that affect us today. It’s no surprise, for instance, that the first labour unions were formed during this time, as employees began working together in a common area. Returning to the working habits of our ancestors could have similarly profound effects that are difficult to imagine today; however, there are several morally-salient factors that we can identify in a 21st-century context.

There are several moral advantages of having more people work from home rather than going to work every day. Working from home during COVID is obviously a move directed at minimizing the spread of the virus. However, permanently working from home also permanently reduces the risk of spreading other infections in the workplace, particularly if it involves less long-distance travel. Approximately 14 million workers in the United States are employed in occupations where exposure to disease or infection occurs at least once per week. Reducing physical interaction in the workplace and thereby minimizing infections within it can improve productivity.

In addition, fewer people going to work means less commuting. 135 million Americans commute to work, and avoiding the commute could save an employee up to thousands of dollars per year. The shift has secondary effects as well; less commuting means less wear and tear on public infrastructure like roads and highways and less congestion in urban areas. This is helpful because new infrastructure projects are having a hard time keeping up with increases in traffic congestion. Such changes may also help with tackling climate change, since 30% of U.S. greenhouse gas emissions come from transportation.

On the other hand, it’s possible that working from home could be more harmful to the climate. Research from WSP UK shows that remote working in the UK may only be helpful in the summer. They found that environmental impacts could be higher in the winter due to the need to heat individual buildings instead of a single office building which can be more efficient. In other words, the effect on climate change may not be one-sided.

Working from home can also be less healthy. For example, the concept of the sick day is heavily intertwined with the idea of going to a workplace. The temptation may be to abolish the concept of the sick day, with the reasoning being that the whole point of a sick day is to stay home and avoid making co-workers sick. However, even if one can work from home, our bodies need rest. Workplace experts have found that those who work from home tend to continue to work during a sickness, and this may lengthen recovery time, lead to burnout, and ultimately lead to less productivity. It can also be unhealthy to develop an “always on” mentality where the line between work and home becomes blurred. According to a recent Monster survey, 51% of Americans admitted to experiencing burnout while working from home as the place of rest and relaxation merges with the place of work. This may have the effect of increasing the number of mental health problems in the workplace while simultaneously leaving workers more physically isolated from their colleagues.

Another potential downside centers on the employer-employee relationship. For example, working from home permanently allows employees to reside in areas where the cost of living is cheaper. This may mean salary reductions, since a business will now have a larger pool of potential employees to choose from and thus can offer lower, but still competitive, salaries in areas where the cost of living is cheaper. Facebook has already made moves in this direction. This means job searches will become more competitive, and this could drive salaries even lower. At the same time, large offices will not be needed, and larger urban areas may see decreased economic activity and a drop in the value of office buildings.

The shift also means that an employer is able to infringe on the privacy of home life. Employers are now tracking employees at home to ensure productivity, with software able to log every word typed, track GPS location, and even activate a computer’s camera. In some cases, these features can be enabled without an employee even knowing they are being monitored. This will only exacerbate a long-standing ethical concern over privacy in the 21st century.

Finally, it is morally important to recognize that shifting to working from home on a large scale could have disproportionate effects on different communities and different job sectors. The service sector may struggle in areas that no longer have heavy workplace congestion. Also, plumbers and electricians cannot work from home, so there are certain industries that literally cannot move in that direction completely. Service industries are often segregated by race and gender, thus ensuring that any of the opportunities enjoyed by working from home will not be equitably shared. It also means that disruptions in these industries caused by the shifting working habits of others could be disproportionately felt.

A permanent shift towards remote working habits carries certain specific moral concerns that will need to be worked out. Whether it will lead to more productivity, less productivity, a greater carbon footprint, a smaller carbon footprint, and so on, will depend on the specific means used and on the new work habits we adopt over the course of time as new laws, policies, and regulations are formulated, tested, and reformed. In the long term, however, the most significant ethical impacts could be the radical social changes it may cause. The shift from working from home to working at work dramatically changed society in less than a century, and the shift back may do the same.

Stories of Vulnerability: COVID-19 in Slaughterhouses

photograph of conveyor line at meat-packing plant

Cases of famous people who have contracted COVID-19 have made headlines. Tom Hanks and Rita Wilson tested positive and later recovered. U.K. Prime Minister Boris Johnson wound up in intensive care. Many professional athletes have contracted the disease. More often than not, however, when we zoom in on coronavirus hotspots, we find that stories about vulnerability come into focus. Many of these stories go unheard unless they cause hardship or inconvenience for groups with more power.

One such case has to do with the production and slaughter of animals that people consume for food. Across the country, there are meat shortages caused by coronavirus. For example, nearly 1 in 5 Wendy’s restaurants has run out of beef, and at many locations other meat products such as pork and chicken are unavailable as well. Supermarkets are also facing shortages. The reason is that the conditions in slaughterhouses are particularly conducive to the spread of coronavirus. Hot spots are popping up at many such sites. 700 employees at a Tyson factory in Perry, Iowa tested positive. At a Tyson plant in Indiana, 900 employees tested positive. According to a CDC report, across 19 states there have been 4,913 cases of coronavirus among slaughterhouse employees. So far, there have been 20 deaths.

Slaughterhouses, also known as meat packing plants, are the next stop for most farm animals after their time in factory farms. When animals like pigs and chickens arrive, they are put on conveyor belts, stunned, then killed. Their bodies are then sent to a series of “stations” where people carve them up for packaging and, later, consumption.

Work in a slaughterhouse is both physically and psychologically strenuous. Carving flesh and bone requires real effort, and many employees sweat profusely while doing it. The sheer volume of animals that need to be carved up to satisfy the American appetite for meat ensures that employees work together, standing shoulder to shoulder, in spaces that are often poorly ventilated.

This kind of work is not highly sought after for obvious reasons. It is unpleasant. As is so often the case in the United States, unpleasant work is done by those who struggle to find employment—often undocumented immigrants and people living in low-income communities. This complicates the problems with coronavirus spread in several ways. First, employees often do not speak English fluently, so conveying critical information about the virus is difficult. Second, it is common for members of these communities to live in large families or groups. Third, low-income communities are frequently places that are densely populated. All of these factors contribute to more rapid spread of the virus.

In response to the meat shortage, President Trump signed an executive order declaring that meat processing plants are critical infrastructure in the United States. There is disagreement among legal experts about what this means. Some argue that the president doesn’t have the authority to require that slaughterhouses remain open when their continued operation puts employees’ health in jeopardy. One interpretation is that the order simply exempts slaughterhouses from shutdown orders issued by governors. Despite the executive order, plenty of slaughterhouses have closed because they simply don’t have the healthy staff required to carry on.

Those who support the order are pleased that it shores up companies that sell meat. Many Americans also approve because it appears that they can continue to put meat on their plates to feed their families and to satisfy their own gustatory preferences. Others approve of the order because they are concerned about the well-being of animal agriculture more broadly. Factory farms raise astonishing numbers of animals every year. The owners of these facilities are not breeding and raising them because they love animals and want thousands of pigs for pets. In these facilities, animals are treated as products to be bought and sold. During the pandemic, new animals are being born and there is no place to put them. The response, in many cases, has been to kill the older animals en masse. For example, Iowa politicians sent a letter to the Trump administration asking for assistance with the disposal of the 700,000 pigs that must now be euthanized each week across the country. The same problem exists for all species of farm animals. People are concerned that this might mean devastation for animal agriculture.

On the other side, many say “good riddance!” Animal agriculture is a cruel and inhumane industry. The pandemic has few silver linings, but one of them is that it brings injustices that might previously have been hidden into the public eye. Our system of animal agriculture could not exist without exploitation of the most vulnerable members of our communities. Slaughterhouses employ vulnerable workers in unsafe working conditions. Factory farms and slaughterhouses abuse and kill animals that cannot defend themselves. Maybe it is finally time for all of this cruelty and suffering to end. In his executive order, President Trump identified slaughterhouses as critical infrastructure. This means that such places are essential, necessary for the proper functioning of our communities. Since consuming the bodies of slaughtered animals is not necessary for human survival, this designation doesn’t seem appropriate.

What’s more, the conditions present in factory farms are exactly the kind that lead to the spread of zoonotic diseases. It appears that the coronavirus jumped from pangolin to human in a wet market in Wuhan. On other occasions, however, diseases spread in factory farms and slaughterhouses—diseases like the swine flu and mad cow disease. Other flus, like the avian flu, are believed to have originated in wet markets in China, but involved animals, chickens and ducks, that we regularly farm for food in the United States. One way that we can help to prevent the transmission and spread of zoonotic diseases is to stop consuming meat.

For those who love the taste of meat, there are alternatives. Beyond Meat and Impossible, plant-based products that are engineered to strongly resemble meat in taste, texture, and appearance, are thriving in general, and are doing exceedingly well during the pandemic in particular. In vitro meat, a cultured product grown from an animal biopsy, is produced in laboratory conditions rather than slaughterhouse conditions and is, therefore, likely to be much safer.

The pandemic shines a light on some of the ways in which our systems of food production exploit the vulnerable—both employees at risk for disease and the animals people put on their plates. Rather than issuing executive orders protecting this industry, perhaps it’s time to dismantle it altogether.

Incentive, Risk, and Oversight in the Pork Industry

photograph of butcher instruction manual with images of different cuts of meat of pig

On September 17th, the U.S. Department of Agriculture announced an updated rule set for pork industry regulators; in addition to removing restrictions on production line speed limits, the Food Safety and Inspection Service (FSIS) will soon allow swine slaughterhouses to hire their own process control inspectors to maintain food safety and humane handling standards instead of relying on federal monitors. Critics argue that this move is an unconstitutional abuse of power that will likely lead to less secure operations, thereby increasing the risk to animals, workers, and consumers.

Under the current system, hog slaughterhouses are allowed to slaughter a maximum of 1,106 animals per hour (roughly 1 pig every 3.3 seconds) and must operate under the watch of multiple FSIS employees. These inspectors review each animal at several points in the killing and disassembly process, ensuring their proper handling, and removing animals or carcasses from the line that appear to be sickly or otherwise problematic. Notably, these monitors have the authority to both slow down and stop the production line in the interest of preserving sanitary conditions.

But under the New Swine Slaughter Inspection System (NSIS), the limit on per-hour animal slaughter will be removed and pork producers will be allowed to hire employees of their own to replace FSIS inspectors, thereby allowing the FSIS to reassign its monitors elsewhere. Proponents of the move suggest that this deregulation will promote efficiency without increasing overall risk. As Casey Gallimore, a director with the North American Meat Institute (a trade organization supporting pork and other meat producers) explains, the industry’s new hires will be highly trained and FSIS inspectors will still have a presence inside farming operations; whereas a plant might have once had seven government monitors on its production line, “There’s still going to be three on-line [FSIS] inspectors there all of the time.”

Overall, industry groups estimate that, under these new rules, as much as 40% of the federal workforce dedicated to watching over the pork industry will be replaced by pork industry employees. Given that a 2013 audit of FSIS policies indicated that their current implementation was already failing to meet expectations for worker safety and food sanitation, it is unclear how reducing the number of FSIS employees will improve this poor record.

For critics, removing speed limits drastically increases the risk to slaughterhouse employees and introducing corporate loyalty into the monitoring equation further threatens to dilute the effectiveness of already-flimsy federal regulations on slaughterhouse management. Because industry employees will remain beholden to their corporate bosses (at the very least, to the degree that those bosses sign their paychecks), they will have fewer incentives to make decisions that could feasibly impact profitability – particularly slowing or stopping the production line. 

According to Marc Perrone, president of the United Food and Commercial Workers International Union (which represents at least 30,000 employees of the pork industry), “Increasing pork-plant line speeds is a reckless corporate giveaway that would put thousands of workers in harm’s way as they are forced to meet impossible demands.” The FSIS argues that available public data suggests that faster line speeds don’t threaten worker safety; currently, though, there is no national database specifically designed to track packing house injuries and accidents.

It might be the case that industry officials will be able to consistently promote the safety and security of the employees under their care, but a concern reflected by Socrates gives us cause to be skeptical. In Book III of The Republic, Plato has Socrates discuss the nature of the ruling guardian class in his idealized city. Often called “philosopher-kings,” the guardians, Socrates insists, are naturally inclined to be virtuous individuals and have been carefully trained within a structured society designed to promote their inborn goodness, and so they do not, themselves, need guardians of their own – indeed, one of Socrates’ interlocutors even jokes “that a guardian should require another guardian to take care of him is ridiculous indeed.” Centuries before Juvenal asked “But who is to guard the guards themselves?,” Plato argued that the best guards would not actually need guarding at all.

Later philosophers would lack Plato’s optimism; ethicists would construct normative systems with plenty of rules to advise the less virtuous, constitution writers would build layers of checks and balances into divided branches of government, and policy makers would indeed insist on impartiality as a necessary condition for truly effective monitoring. Unless the pork industry can provide us some reason to think that the NSIS inspectors they’ll soon be hiring have been “framed differently by God…in the composition of these [God] has mingled gold” (who have, furthermore, cultivated that virtue over a lifetime of study and practice), we have good reason to be skeptical that they do not, themselves, need watching.

For what it’s worth, Socrates also thought that the guardians should not be allowed to own private property, but that might really be asking too much of the pork industry.

Workers’ Rights in the “Gig Economy”

Working an inflexible nine-to-five schedule is often not conducive to the demands of ordinary life.  Parents find themselves missing events at their children’s schools that occur during the day.  Cautious workers manage their sick days conservatively, not knowing what health challenges the year might bring.  Taking a day to care for personal psychological health strikes many as an impractical luxury.  
