
Workplace Autocracy

image of businessman breaking rocks in a quarry

The popular fast-food restaurant Chipotle has agreed to pay over $300,000 for alleged violations of Washington, D.C. child labor laws. Similar violations of child labor laws have occurred at several Sonic locations in South Carolina – the same state where, just a few months earlier, a labor contractor was fined over $500,000 for exploiting migrant farm workers. In fields like finance and tech, where child labor is rare, other employer practices raise concern, such as extensive digital surveillance of employee computers.

These stories drive home that, by and large, two groups of people decide what happens in the American workplace: management and government. Americans love democracy, but we leave this particular preference about leadership at home when we go to work.

Workplaces are often explicitly hierarchical, and workers do not get to choose their bosses. Beyond pay, employers may regulate when we eat, what we wear, what hours we get to spend with family, and whether we have access to certain contraceptives. Worker choice usually boils down to a binary decision: do I take the job, or do I leave it? And concerns about money, family, and health insurance often put a thumb on the scale of this ostensibly personal decision.

A good boss may listen to their employees or allow them significant independence, but this discretion is ultimately something loaned out, never truly resting with the employees. In some cases, either through their own position or by joining a collective organization like a union, a worker may have more leverage in negotiations with their employer, and thus more say in the workplace. Robust workers’ rights and worker protection laws can further strengthen employee agency and negotiating position. All of these, however, preserve an autocratic understanding of the workplace – the power of management is limited only by countermanding power from the government, organized labor, or individual workers.

The philosopher Elizabeth Anderson has referred to the incredible power that employers have over the lives of their employees as “private government.” And yet, Anderson argues, the concerns we routinely voice about government are oddly absent when it comes to the governing power of management. We fret over government-imposed regulation of companies, but rarely over company-imposed regulation of employees. We want our political leadership accountable to those they rule over, but not company leadership. We worry about government surveillance, but not internal workplace surveillance.

What might justify this seemingly incongruous state of affairs?

One answer is to appeal to human nature. We often hear that humans tend to be selfish and hierarchical, so we shouldn’t be surprised that our companies end up this way. But this response faces problems on several fronts. First, at best it would explain the autocratic workplace; it would not provide a moral justification for it. Philosophers generally reject the idea that something is good simply because it is natural. Second, biologists dispute this simplistic account of human nature. Humans are highly responsive to social context, and our underlying tendencies seem to be just as cooperative as they are competitive. Finally, it fails to explain why we are concerned by public government overreach but not private government overreach.

Alternatively, we might argue that because the firm is ultimately owned by someone, it is only appropriate that its owner have control over employees, so long as those employees freely agreed to work there. However, with pay and healthcare on the line and the generally skewed power between employers and employees, only the rarest of employees get to truly negotiate terms with their employer. Many undesirable features of a workplace, such as digital surveillance, mandatory arbitration, and noncompete clauses (see Prindle Post coverage), are so widespread as to be inescapable in certain industries. Consequently, it is specious to argue that employers are entitled to treat employees as they see fit simply because employees agreed to work there – although this general argument would hold up better against a background of stronger workers’ rights.

The final, and perhaps the most obvious, answer is simple efficiency. A more autocratic workplace may just work much better from an economic standpoint.

There are two things to note about this line of defense. First, it needs limits that preserve human rights and dignity. The American government undoubtedly finds the prohibition against unreasonable searches and seizures inefficient at times, but that does not justify scrapping it. By the same token, a company would presumably not be justified in whipping its employees, no matter how much it increased productivity. Limits need to be set on the power companies have over their workers – and indeed, the government already sets some. However, this does not necessarily speak against workplace autocracy tout court. Second, the efficiency benefit of workplace autocracy depends on what it is being compared to. Convening an assembly of thousands of workers to make every decision is perhaps insufficiently “agile” for the modern economy, but there are many examples of businesses owned and run cooperatively by their employees. Even something as simple as allowing employees to vote out management could be worth considering. Workplaces may even benefit from more empowered and invested employees.

For Anderson herself, the current power of private government is best explained not by reasons, but by history. An obsession with efficiency, burgeoning faith in the liberatory power of free markets, and the collapse of organized labor in America have all conspired to let managerial power in the workplace grow unquestioned. Agree or disagree with her history, we can follow her in thinking that, like public government, private government is sufficiently impactful and complex to merit robust discussion of its justifications, dangers, preferred implementation, and needed checks and balances.

Can We Justify Non-Compete Clauses?

photograph of a maze of empty office cubicles

In the State of the Union, President Joe Biden claimed that 30 million workers have non-compete clauses. This followed a proposal by the Federal Trade Commission to ban these clauses. Non-compete clauses normally contain two parts. First, they prohibit employees from using any proprietary information or skills during or after their employment. Second, they forbid employees from competing with their employer for some period of time after their employment. “Competing” normally consists of working for a rival company or starting one’s own business in the same industry.

Non-compete agreements are more common than one might expect. In 2019, the Economic Policy Institute found that about half of employers surveyed use non-compete agreements, and 31.8% of employers require these agreements for all employees.

While we often think of non-compete agreements as primarily found in cutting-edge industries, they also reach into less innovative ones. For instance, one in six workers in the food industry has a non-compete clause. Even fast-food restaurants have used non-compete clauses.

Proponents of non-compete clauses argue that they are important to protect the interests of employers. The Bureau of Labor Statistics estimates that, in 2021, the average turnover rate (the rate at which employees leave their positions) in private industry was 52.4%. Although one might think the pandemic significantly inflated these numbers, rates from 2017-2019 were slightly below 50%. Businesses spend a significant amount of money training new employees, so turnover hurts the company’s bottom line  – U.S. companies spent over $100 billion on training employees in 2022. Additionally, the transfer of skilled and knowledgeable employees to competitors, especially when those skills and knowledge were gained at their current position, makes it more difficult for the original employer to compete against rivals.

However, opponents argue that non-compete clauses depress employees’ wages. Being prohibited from seeking new employment leaves employees unable to find better-paying positions, even if just for the purpose of bargaining with their current employer. The FTC estimates that prohibiting non-compete clauses would increase yearly wages in the U.S. by between $250 billion and $296 billion. Further, for agreements that extend beyond their employment, departing employees may need to take “career detours,” seeking jobs in a different field. This undoubtedly affects their earnings and makes finding future employment more difficult.

It is worth noting that these arguments are strictly economic. They view the case for and against non-compete clauses exclusively in terms of financial costs and benefits. This is certainly a fine basis for policy decisions. However, sometimes moral considerations prevail over economic ones.

For instance, even if someone provided robust data demonstrating that child labor would be significantly economically beneficial, we would find this non-compelling in light of the obvious moral wrongness. Thus, it is worthwhile to consider whether there’s a case to be made that we have moral reason to either permit or prohibit non-compete agreements regardless of what the economic data show us.

My analysis will focus on the portion of non-compete clauses that forbids current employees from seeking work with a competitor. Few, I take it, would object to the idea that companies should have the prerogative to protect their trade secrets, and there may be means to enforce this without restricting employee movement or job seeking, such as through litigation. Thus, whenever I refer to non-compete agreements or clauses, I mean those which restrict employees from seeking work at, or founding, a competing firm both during, and for some period after, their employment.

There’s an obvious argument for why non-compete clauses ought to be permitted – before an employee joins a company, they and their employer reach an agreement about the terms of employment, which may include these clauses. Don’t like the clause? Then renegotiate the contract before signing or simply find another job.

Employers impose all sorts of restrictions on us, from uniforms to hours of operation. If we find those conditions unacceptable, we simply turn the job down. Why should non-compete agreements be any different? They are merely the product of an agreement between consenting parties.

However, agreements are normally unobjectionable only when the parties enter them as equals. When there’s a difference in power between parties, one may accept terms that would be unacceptable between equals. As Evan Arnet argues in his discussion of a prospective right to strike, a background of robust workers’ rights is necessary to assure equal bargaining power, and these rights are not always secure at present. For most job openings, there is a plethora of other candidates available. Aside from high-level executives, few have enough bargaining power with a prospective employer to demand that a non-compete clause be removed from their contract. Indeed, even asking for this could cause a prospective employer to move on to the next candidate. So we ought to be skeptical of the claim that workers freely agree to non-compete clauses – there are multiple reasons to question whether workers have the bargaining power necessary for this agreement to be between equals.

One might instead appeal to the long-run consequences of not allowing non-compete agreements. The argument could be made as follows. By hiring and training employees, businesses invest in them and profit from their continued employment. Perhaps, then, after investing in its employees, a firm deserves to profit from that investment, and thus the employee should not be permitted to seek an exit to a competitor while still employed. Non-compete clauses are, in effect, a way for companies to protect their investments.

Unfortunately, there are multiple problems with this line of reasoning. The first is that it would apply only to cases where employees require significant training; some employees produce profit for the company after little to no training. Second, it seems to justify non-compete clauses only up to the point when the employee has become profitable to the employer – not both during and after employment. Third, some investments may simply be bad investments. Investing is ultimately a form of risk-taking which does not always pay off. To hedge their bets, firms can instead identify candidates most likely to stay with the company and make continued employment desirable.

Ultimately, this argument regarding what a company “deserves” lays bare the fundamental moral problem with non-compete agreements: they violate the autonomy of employees.

Autonomy, as a moral concept, is about one’s ability to make decisions for oneself – to take an active role in shaping one’s life. To say that an employee owes it to her employer to keep working there is to say that she does not deserve autonomy over what she does with a third of her waking life. It says that she no longer has the right to make unencumbered decisions about what industry she will work in and who she will work for.

And this is, ultimately, where we see the moral problem with non-compete clauses. Even if they do not suppress wages, non-compete agreements restrict the autonomy of employees. Unless someone has a large nest egg saved up, they may not be able to afford to quit their job and enter a period of unemployment while they wait for a non-compete clause to expire – especially since voluntarily quitting may disqualify them from unemployment benefits. By raising the cost of exit, non-compete clauses may eliminate quitting as a viable option. As I have discussed elsewhere, employers gain control over our lives while we work for them – non-compete agreements aim to extend this control beyond the period of our employment, further limiting our ability to choose.

As a result, even if there were not significant potential financial benefits to eliminating non-compete agreements, there seems to be a powerful moral reason to do so. Otherwise, employers may restrict the ability of employees to make significant decisions about their own lives.

Essential Work, Education, and Human Values

photograph of school children with face masks having hands disinfected by teacher

On August 21st, the White House released guidance designating teachers as “essential workers.” Among other things, this means that teachers can return to work even if they know they’ve been exposed to the virus, provided that they remain asymptomatic. This is not the first time the Trump administration has declared certain workers or, more accurately, certain work to be essential. Early in the pandemic, as the country experienced a decline in the availability of meat, President Trump issued an executive order proclaiming that slaughterhouses were essential businesses. The result was that they did not have to comply with quarantine ordinances and could, and were expected to, remain open. Employees then had to choose between risking their health and losing their jobs. Ultimately, slaughterhouses became flash points for massive coronavirus outbreaks across the country.

As we think about the kinds of services that should be available during the pandemic, it will be useful to ask ourselves, what does it mean to say that work is essential? What does it mean to say that certain kinds of workers are essential? Are these two different ways of asking the same question or are they properly understood as distinct?

It might be helpful to walk the question back a bit. What is work? Is work, by definition, effort put forward by a person? Does it make sense to say that machines engage in work? If I rely on my calculator to do basic arithmetic because I’m unwilling to exert the effort, am I speaking loosely when I say that my calculator has “done all the work”? It matters because we want to know whether our concept of essential work is inseparable from our concept of essential workers.

One way of thinking about work is as the fulfillment of a set of tasks. If this is the case, then human workers are not, strictly speaking, necessary for work to get done; some of it can be done by machines. During a pandemic, human work comes with risk. If the completion of some tasks is essential under these conditions, we need to think about whether those tasks can be done in other ways to reduce the risk. Of course, the downside of this is that once an institution has found other ways of getting things done, there is no longer any need for human employees in those domains on the other side of the pandemic.

Another way of understanding the concept of work is that work requires intentionality and a sense of purpose. In this way, a computer does not do work when it executes code, and a plant does not do work when it participates in photosynthesis. On this understanding of the concept of work, only persons can engage in it. One virtue of understanding work in this way is that it provides some insight into the indignity of losing one’s job. A person’s work is a creative act that makes the world different from the way it was before. Every person does work, and the work that each individual does is an important part of who that person is. If this way of understanding work is correct, then work has a strong moral component and when we craft policy related to it, we are obligated to keep that in mind.

It’s also important to think about what we mean when we say that certain kinds of work are essential. The most straightforward interpretation is to say that essential work is work that we can’t live without. If this is the case, most forms of labor won’t count as essential. Neither schools nor meat are essential in this sense — we can live without both meat and education.

When people say that certain work is essential, they tend to mean something else. For some political figures, “essential” might mean “necessary for my success in the upcoming election.” Those without political aspirations often mean something different too, something like “necessary for maintaining critical human values.” Some work is important because it does something more than keep us alive; it provides the conditions under which our lives feel to us as if they are valuable and worth living.

Currently, many people are arguing that society simply cannot function without opening schools. Even a brief glance at history demonstrates that this is empirically false. The system of education that we have now is comparatively young, as are our attitudes regarding the conditions under which education is appropriate. For example, for much of human history, education was viewed as inappropriate for girls and women. In the 1600s, Anna Maria van Schurman, a famous child prodigy, was allowed to attend school at the University of Utrecht only on the condition that she do so behind a barrier — not to protect her from COVID-19-infested droplets, but to keep her very presence from distracting the male students. At various points in history, education was viewed as inappropriate for members of the wealthiest families — after all, as they saw it, learning to do things is for people who actually need to work. There were also segments of the population that, for reasons of race or status, were not allowed access to education. All of this is just to say that for most of recorded history, it hasn’t been the case that the entire population of children has been in school for seven hours a day. Our current system of K-12 education didn’t exist until the 1930s, and even then there were barriers to full participation.

That said, the fact that such a large number of children in our country have access to education certainly constitutes significant progress. Education isn’t essential in the first sense that we explored, but it is essential in the second. It is critical for the realization of important values. It contributes to human flourishing and to a sense of meaning in life. It leads to innovation and growth. It contributes to the development of art and culture. It develops well-informed citizens who are in a better position to participate in democratic institutions, providing us with the best hope of solving pressing world problems. We won’t die if we press pause on formal education for an extended period of time, but we might suffer.

Education is the kind of essential work for which essential workers are required. It is work that goes beyond simply checking off boxes on a list of tasks. It involves a strong knowledge base, but also important skills such as the ability to connect with students and to understand and react appropriately when learning isn’t occurring. These jobs can’t be done well when those doing them either aren’t safe or don’t feel safe. The primary responsibilities of these essential workers can be satisfied across a variety of presentation formats, including online formats.

In our current economy, childcare is also essential work, and there are unique skills and abilities that make for a successful childcare provider. These workers are not responsible for promoting the same societal values as educators. Instead, the focus of this work is to see to it that, for the duration of care, children are physically and psychologically safe.

If we insist that teachers are essential workers, we should avoid ambiguity. We should insist on a coherent answer to the question: essential for what? If the answer is education, then teachers, as essential workers, can do their essential work in ways that keep them safe. If we are also thinking of them as caregivers, we should be straightforward about that point. The only fair thing to do, once that is out in the open, is to start paying them for doing more than one job.

Does the Fair Chance Act Live Up to Its Name?

close-up photograph of 'Help Wanted' sign in storefront window

With the US having one of the highest incarceration rates in the world, it is estimated that over 70 million Americans have some type of criminal record – approximately one in three. Regardless of how minor or major an individual’s offense is, having any kind of criminal record presents a series of obstacles to successfully reintegrating into society. The most pronounced of these are finding employment and housing – almost nine in ten employers perform background checks during the hiring process, and four in five landlords do the same for prospective occupants. Research shows that employers are biased against citizens with criminal records even though they assert that this is not the case. While employers ostensibly indicate a willingness to hire ex-convicts, evidence establishes that employer callback rates decrease by 50% for applicants with a criminal record.

Crusading against such employment disparities are movements like Ban the Box, an American campaign that began in Hawaii in the late 1990s, led by civil rights activists and advocates for ex-offenders, which works to remove the check box asking whether a job seeker has a criminal record. The campaign aims to give ex-convicts a better chance at employment by spotlighting their skills and qualifications in the recruitment process before they are questioned about their criminal record, thereby preventing the stigma of an arrest record or a conviction from ruling out their employment immediately. The basis of the campaign is that ex-convicts who struggle to find employment upon release from prison are more likely to reoffend, which is, of course, damaging to society.

The campaign gained momentum after the 2007-2009 recession, with activists arguing that removing the check box is necessary because an increasing number of Americans have criminal records as a result of harsh sentencing laws, especially for drug-related offenses, and because citizens are struggling to find work due to the compounded effect of high unemployment rates among ex-felons and background checks becoming more common since the 9/11 terror attacks. Moreover, marginalized groups – communities of color, sexual minorities, and people with mental illnesses – are disproportionately affected, with black men being six times more likely to be imprisoned than white men and LGB (lesbian, gay, and bisexual) people being three times more likely to be incarcerated than the general population.

As of 2019, 35 states and more than 150 counties and cities have implemented Ban the Box, also known as Fair Chance laws, in their hiring policies, all of which prohibit employers from asking applicants about their criminal history on a preliminary job application. Some Ban the Box laws go further, compelling employers to refrain from asking about an applicant’s criminal history until an interview has been conducted or a job offer has been made.

Even though Ban the Box laws seem beneficial on the surface, some industry groups, such as the National Retail Federation, have openly criticized these policies for potentially exposing companies, employees, and customers to crime. The New Jersey Chamber of Commerce also condemned Ban the Box for putting employers at risk of lawsuits from rejected applicants. Fair Chance laws put businesses in a vulnerable position, leaving them open to lawsuits for rejecting an ex-convict while also facing the possibility of negligent hiring lawsuits if an ex-convict employee reoffends at work. Moreover, businesses have found fault with Fair Chance laws for wasting the time and resources of both employers and applicants. Ban the Box laws could cause ex-convicts to waste their time applying for jobs that they will probably not get, when they could have spent that time on applications and interviews for jobs known to recruit ex-offenders. These laws can also waste employers’ time: if an ex-convict is rejected toward the end of the hiring process, once their criminal record becomes known, the applicants without criminal records who were turned away earlier may already have found other jobs or moved on to other employment opportunities.

Corporate concerns aside, recent research shows that Ban the Box laws have created an unforeseen impediment to the very objective of the campaign. Researchers have suggested that implementing Fair Chance policies may ultimately be disadvantageous to society as a whole by decreasing the chances of employment for low-skilled racial minorities. If prevented from looking into an applicant’s criminal history, employers may resort to stereotypical assumptions based on the individual’s race or gender to guess whether that individual has a criminal record, which would exacerbate gender and racial disparities in the applicant pool.

Ban the Box does improve ex-offenders’ chances of finding employment, but on the flip side, minorities seeking employment have to bear the brunt of heightened racial discrimination both in spite of, and because of, the Fair Chance Act. Activism like Ban the Box can and should be used to make positive social change and challenge the status quo, but at the same time, in light of recent research, it must be re-evaluated.

The Cost of Motherhood

Image of a woman holding a young child

Having a child is one of the most impactful decisions a person will make in their life. And yet, this decision affects women much more than it does men. From the physical act of birthing a child to the thousand needs encountered in a day, women frequently inhabit what Mary Mellor has called “biological time.” “Biological time” is distinct from remunerative, capitalist time in that it includes all the work necessary for the maintenance and flourishing of human life: giving birth and palliative care, feeding, clothing, providing emotional reassurance, interpersonal interaction, education, laundry, specialist appointments and play dates, birthdays and leisure activities, and health care. This means that women, far from possessing leisure time, have traditionally created it for men by taking care of the innumerable necessities of daily life, including child rearing.

In 2018, it seems strange that we still face a gendered division of labour that was first rationalized in Aristotle’s Politics. Aristotle justified a division of labour which grouped women (and slaves) together as domestic workers – an arrangement he found reasonable because it freed the male head of the household for self-development and the presumably nobler activities of studying philosophy and governing the city.

Some strides have been made to close the gender gap in household tasks and caregiving, but while the gaps have narrowed somewhat, they are far from closed. Men typically receive adulation and support for the parenting and everyday adult tasks they complete. A man taking his children grocery shopping will likely be perceived by bystanders as a swoon-worthy superhero, while a mother doing the same thing is more likely to be scrutinized. This unfair standard follows women into the workplace, where men who leave early to take care of family members are seen as responsible, but women struggle to be seen as competent and professionally motivated when they do the same thing. White men who have children earn a fatherhood bonus, while women who have children earn 20 percent less in the long term.

The design of the work week itself does not accommodate those who are responsible for giving care. Instead, the structure of contemporary labour presupposes a gendered division of labour whereby the worker is free to devote eight or more consecutive hours daily, without interruptions or crises from home. While economists have already critiqued the 40-hour work week, with evidence showing higher productivity and well-being among workers given fewer and more flexible hours, companies are slow to follow the evidence. Even in businesses that have implemented such policies, women may avoid taking advantage of the proffered flexibility to forestall being judged as “uncommitted.” On-site child care remains a pipe dream for most professions. Even among Fortune 100 companies, which typically offer generous terms to their employees, only seventeen provide daycare.

Loss of leisure, earnings, workplace respect, and career opportunities are not the only penalties women face in virtue of having a reproductive body. Women bear intimate scrutiny, politicization, policing, and even outright bans on their choices – from contraception to breastfeeding – while condoms, Viagra, and even public urination are taken for granted as essential.

Given these challenges, it is hardly shocking that young women are choosing to have fewer or no children. Young women realize that the idea that women can “have it all” remains a cruel joke, and it seems they are responding with pragmatism to harsh facts.

But just as capitalism once played a role in shifting gender roles (though in many cases by increasing women’s work rather than transforming it), we may be headed toward another shift. The post-recession economic challenges Millennial women face may place the zero-sum competition between career and family in a much starker light, to the degree that many are embracing their professional and leisure capacities fully, to the point of declining parenthood.

It is clear that women, as individuals, are responding in creative and complex ways to competing social structures that combine to exclude them from “having it all.” Women are negotiating their limited opportunities to make the best of their singular lives. Nonetheless, the struggles that they face reveal a society where the lack of gender parity runs much deeper than numbers. Women’s meager options reveal how the structure of late capitalism, imbued with patriarchal assumptions, has made no provision or priority for caring and the culturing of humans. Women are aware that they not only subsidize career and leisure opportunities for their partners, but also absorb the costs of producing workers, citizens, and leaders for society as a whole. It is our collective responsibility to address the lingering absence of care in the economic and social structures that have marginalized women from full participation in remunerative and political life, separated men from the responsibilities and the humanity of caring labour, and left our social structures and institutions alienated from the needs of the human spirit.

Are Zero Tolerance Policies the Solution to Sexual Misconduct?

A photo of Senator Al Franken.

This year’s headlines have been dominated by sexual assault and harassment allegations against powerful, wealthy politicians and prominent figures in the entertainment industry. In many ways, this is old news — people in positions of power have always used that power to sexually exploit and harass those in less powerful positions. The difference is that, until recently, these figures seemed too big to fall.


Democratic Equality and Free Speech in the Workplace

A close-up photo of the google logo on a building

Numerous news outlets have by now reported on the contentious memo published by former Google employee James Damore, in which he criticized his former employer’s efforts to increase diversity in its workforce. The memo, entitled “Google’s Ideological Echo Chamber: How bias clouds our thinking about diversity and inclusion,” claims that Google’s diversity efforts reflect a left-leaning political bias that has repressed open and critical discussion of the fairness and effectiveness of these efforts. Moreover, the memo surmises that the unequal representation of men and women in the tech business is due to natural differences in the distribution of personality traits between men and women, rather than to sexism.


Workplace Diversity: A Numbers Game

Anyone who has applied for a job is likely familiar with the stress it can bring. Governed by unspoken rules and guidelines that at times seem arbitrary, the hiring process has traditionally been seen as an anxiety-producing but necessary part of starting a career. For some, however, this process is stressful for an entirely different reason: the fear of discrimination by employers. How, then, should the process be reformed to provide a more equitable environment?
