
Can We Justify Non-Compete Clauses?

photograph of a maze of empty office cubicles

In the State of the Union, President Joe Biden claimed that 30 million workers have non-compete clauses. This followed a proposal by the Federal Trade Commission to ban these clauses. Non-compete clauses normally contain two portions. First, they prohibit employees from using any proprietary information or skills during or after their employment. Second, they forbid employees from competing with their employer for some period of time after their employment. “Competing” normally consists of working for a rival company or starting one’s own business in the same industry.

Non-compete agreements are more common than one might expect. In 2019, the Economic Policy Institute found that about half of employers surveyed use non-compete agreements, and 31.8% of employers require these agreements for all employees.

While we often think of non-compete agreements as primarily being found in cutting-edge industries, they also reach into less innovative ones. For instance, one in six workers in the food industry has a non-compete clause. Even fast-food restaurants have used non-compete clauses.

Proponents of non-compete clauses argue that they are important to protect the interests of employers. The Bureau of Labor Statistics estimates that, in 2021, the average turnover rate (the rate at which employees leave their positions) in private industry was 52.4%. Although one might think the pandemic significantly inflated these numbers, rates from 2017-2019 were slightly below 50%. Businesses spend a significant amount of money training new employees, so turnover hurts the company’s bottom line  – U.S. companies spent over $100 billion on training employees in 2022. Additionally, the transfer of skilled and knowledgeable employees to competitors, especially when those skills and knowledge were gained at their current position, makes it more difficult for the original employer to compete against rivals.

However, opponents argue that non-compete clauses depress employees' wages. Being prohibited from seeking new employment leaves employees unable to find better paying positions, even if just for the purposes of bargaining with their current employer. The FTC estimates that prohibiting non-compete clauses would increase yearly wages in the U.S. by between $250 billion and $296 billion. Further, for agreements that extend beyond the period of employment, departing employees may need to take “career detours,” seeking jobs in a different field. This undoubtedly affects their earnings and makes finding future employment more difficult.

It is worth noting that these arguments are strictly economic. They view the case for and against non-compete clauses exclusively in terms of financial costs and benefits. This is certainly a fine basis for policy decisions. However, sometimes moral considerations prevail over economic ones.

For instance, even if someone provided robust data demonstrating that child labor would be significantly economically beneficial, we would not find this compelling in light of its obvious moral wrongness. Thus, it is worthwhile to consider whether there’s a case to be made that we have moral reason to either permit or prohibit non-compete agreements regardless of what the economic data show us.

My analysis will focus on the portion of non-compete clauses that forbids current employees from seeking work with a competitor. Few, I take it, would object to the idea that companies should have the prerogative to protect their trade secrets. There may be means to enforce this without restricting employee movement or job seeking, such as through litigation. Thus, whenever I refer to non-compete agreements or clauses, I mean those which restrict employees from seeking work from, or founding, a competing firm both during, and for some period after, their employment.

There’s an obvious argument for why non-compete clauses ought to be permitted – before an employee joins a company, they and their employer reach an agreement about the terms of employment, which may include these clauses. Don’t like the clause? Then renegotiate the contract before signing or simply find another job.

Employers impose all sorts of restrictions on us, from uniforms to hours of operation. If we find those conditions unacceptable, we simply turn the job down. Why should non-compete agreements be any different? They are merely the product of an agreement between consenting parties.

However, agreements are normally unobjectionable only when the parties enter them as equals. When there’s a difference in power between parties, one may accept terms that would be unacceptable between equals. As Evan Arnet argues in his discussion of a prospective right to strike, a background of robust workers’ rights is necessary to assure equal bargaining power and these rights are currently not always secure. For most job openings, there are a plethora of other candidates available. Aside from high-level executives, few have enough bargaining power with their prospective employer to demand that a non-compete clause be removed from their contract. Indeed, even asking for this could cause a prospective employer to move on to the next candidate. So, we ought to be skeptical of the claim that workers freely agree to non-compete clauses – there are multiple reasons to question whether workers have the bargaining power necessary for this agreement to be between equals.

One might instead appeal to the long-run consequences of not allowing non-compete agreements. The argument could be made as follows. By hiring and training employees, businesses invest in them and profit from their continued employment. So perhaps the idea is that, after investing in their employees, a firm deserves to profit from their investment and thus the employee should not be permitted to seek exit while still employed. Non-compete clauses are, in effect, a way for companies to protect their investments.

Unfortunately, there are multiple problems with this line of reasoning. The first is that it would only apply to non-compete agreements in cases where employees require significant training. Some employees may produce profit for the company after little to no training. Second, this seems to only justify non-compete clauses up to the point when the employee has become profitable to the employer – not both during and after employment. Third, some investments may simply be bad investments. Investing is ultimately a form of risk taking which does not always pay off. To hedge their bets, firms can instead identify candidates most likely to stay with the company, and make continued employment desirable.

Ultimately, this argument regarding what a company “deserves” lays bare the fundamental moral problem with non-compete agreements: they violate the autonomy of employees.

Autonomy, as a moral concept, is about one’s ability to make decisions for oneself – to take an active role in shaping one’s life. To say that an employee owes it to her employer to keep working there is to say that she does not deserve autonomy over what she does with a third of her waking life. It says that she no longer has the right to make unencumbered decisions about what industry she will work in, and who she will work for.

And this is, ultimately, where we see the moral problem for non-compete clauses. Even if they do not suppress wages, non-compete agreements restrict the autonomy of employees. Unless someone has a large nest egg saved up, they may not be able to afford to quit their job and enter a period of unemployment while they wait for a non-compete clause to expire, especially since voluntarily quitting may disqualify them from unemployment benefits. By raising the cost of exit, non-compete clauses may eliminate quitting as a viable option. As I have discussed elsewhere, employers gain control over our lives while we work for them – non-compete agreements aim to extend this control beyond the period of our employment, further limiting our ability to choose.

As a result, even if there were not significant potential financial benefits to eliminating non-compete agreements, there seems to be a powerful moral reason to do so. Otherwise, employers may restrict the ability of employees to make significant decisions about their own lives.

Moral Burnout

photograph of surgeon crying in hospital hallway

Many workers are moving towards a practice of “quiet quitting,” which, though somewhat misleadingly named, involves setting firm boundaries around work and resolving to meet expectations rather than exceed them. But not everyone enjoys that luxury. Doctors, teachers, and other caregivers may find that it is much harder to avoid going above and beyond when there are patients, students, or family members in need.

What happens when you can’t easily scale back from a state of overwork because of the moral demands of your job? It might lead to a specific kind of burnout: moral burnout. Like other varieties of burnout, moral burnout can leave you feeling mentally and physically exhausted, disillusioned with your work, and weakened by a host of other symptoms. Unlike other varieties of burnout, moral burnout involves losing sight of the basic point or meaning of morality itself.

How could this happen? Many people enter caregiving professions out of a desire to help people and do the right thing — out of a deep commitment to morality itself. When people in these professions find that, despite their best efforts, they cannot meet the needs around them, it can be easy to feel defeated.

Over time, the meaning of those moral commitments can erode to the point where all that is left is a sense of obligation or burden without any joy attached to it. The letter of the moral law has survived, but not its spirit.

Moral philosophers often try to defend morality to the immoralist, who only cares about themselves and maybe the people around them. But it seems to me that there might be an equally strong challenge from the other side: the hypermoralist, who tries to follow morality’s demands as best they can but who is left cold and exhausted, no longer seeing the point of morality though still feeling bound to its dictates. What might the moral philosopher say in response to this kind of case? It seems that it depends on diagnosing what exactly has gone wrong.

So, what has gone wrong when “moral burnout” appears? First, it seems that, as in ordinary cases of burnout, the person is not receiving enough support or care themselves. This might stem from a systemic failure, such as doctors being unable to get their patients the care they need due to injustices in the healthcare system. It could stem from an interpersonal failure, where friends and family members in that person’s life fail to see their needs or adequately support them. Or perhaps it stems from an individual failure, such as the person failing to reach out for or accept help.

The main problem is that there is a significant mismatch between the amount of morally significant labor that the person gives and the amount of support and recognition they receive.

This mismatch alone, however, is not enough to explain why the hypermoralist is left cold by morality. Sure, they may feel exhausted and disillusioned with their job or the people around them, but they might say something like “morality is still worthwhile; it’s just that other people aren’t holding up their end of the deal with me.”

What else is required to become disillusioned with morality itself? Especially for those who were raised to take all the responsibility on themselves, it’s easy to misunderstand morality as having to do only with duties to others and not at all with duties to oneself. In this case, the person can fail to properly value or take care of themselves, and lose sight of an important part of morality – self-respect. It is no surprise that this kind of person would become disillusioned.

Even for those who understand the importance of duties to oneself, it can be easy to fall into a similar trap of self-sacrifice if no one else will take responsibility for a clear and present need.

Another possibility is that, even though the person recognizes and works to fulfill duties of self-respect and self-care, they may find themselves caught up in a kind of rule fetishism, where morality becomes merely a list of moral tasks to complete. Self-care becomes another obligation to fulfill, rather than a chance to rest and recuperate. In this state, morality can seem to be a matter solely of burdens and obligations that must be completed, without the sense of meaning that one would normally get from saying a kind word, helping someone else, or standing up for oneself. Perhaps the hypermoralist has lost sight of the possibility of healthier relationships with others, or is unable to set healthy boundaries within their relationships or accept friendship and help from others.

Like friendship, morality is not transactional – it isn’t simply a set of tasks to complete. Morality is essentially relational.

Praising and blaming ourselves and others for the actions we perform is a core part of our moral practices, but these norms matter because they allow us to assess whether we stand in the right relation with ourselves and with others. It is no surprise, then, that the hypermoralist has lost the meaning of morality if they have substituted its relational core of love for self and love for others with a list of tasks and obligations that lack relational context.

So, what can the hypermoralist do to regain a sense of moral meaning? The answer to that question depends on a host of considerations that will vary based on the individual in question. The basic gist, however, is that it’s vital to seek meaningful and healthy relationships and advocate for support when it’s needed. For example, a doctor in an unjust working environment might protest the indifference and profit-motivation of insurance companies who stand in the way of their patients getting the care they need. Ideally, this would not be another task that the doctor takes up alone but one that allows them to be in solidarity with others in their position — meeting people they can trust and rely upon along the way. Seeking out those meaningful and healthy relationships (moral and otherwise) can be tricky. But I hope for all of us that we can find good friends.

Should Pointless Jobs Exist?

Photograph of people at a booth in front of a partially obscured sign that says "Welcome Business Advisors"

Editor’s note: This article contains use of a vulgarity.

In 1899, Thorstein Veblen published “The Theory of the Leisure Class.” Veblen was a Norwegian-American economist who coined the famous term “conspicuous consumption.” He argued that ostentatious freedom from useful occupation and its symbols, such as excess possessions and elaborate hobbies, established and organized one’s power and status within a social hierarchy. Conspicuous consumption signals social status by displaying one’s dispensation from productive labour.

One manifestation of such status for high-ranking persons (or organizations) is the proliferation of decorative underlings. These are “specialized servants…useful more for show than for service actually performed…[their] utility comes to consist, in great part, in their conspicuous exemption from productive labour and in the evidence which this exemption affords of their master’s wealth and power.”  

Veblen’s unflinching analysis contrasted with optimistic predictions for social and economic progress in his time. In the nineteenth and early twentieth centuries, both Marxian and capitalist theories foresaw a reduction of labour in the future which would free up workers for self-directed, human-centred pursuits.  

Unfortunately, these prophecies have not been fulfilled. Marx’s proposed six-hour day was never implemented by Soviet regimes. Contemporary capitalism similarly shows little sign of diminishing work hours, flatly contradicting John Maynard Keynes’ prediction that the twenty-first century would usher in a fifteen-hour work week.  

Instead, Veblen’s anthropological observations have again become relevant. Labour has not been reduced commensurately with technological advances, in part due to an increase in service industries. David Graeber, in his recently published book Bullshit Jobs: A Theory (Simon and Schuster, 2018), notes that despite increasing automation of many fields, new service sectors have emerged. These include financial services, academic and health administrators, human resources and public relations professionals, managers, clerks, salespeople, members of traditional service sectors, and what Graeber calls the “subsidiary industries.” Subsidiary industries maintain service sectors by providing still more specific services, such as all-night pizza delivery or dog-washing. All of these fall under the definition of what Graeber calls “bullshit jobs.”

A bullshit job, according to Graeber, is generally indicated by the secret belief of the person who does the job that their work is unnecessary. He acknowledges that this definition can be somewhat subjective – as “there can be no objective measure of social value.” But Graeber expands his definition. He notes that ill effects to society would be felt fairly quickly if nurses, garbage collectors, teachers, mechanics, and even fiction writers were to disappear. But, he asks, would anything change – or change for the worse – if administrators, public relations personnel, hedge fund managers, subcontractors for subcontractors, sales representatives, telemarketers, and many service industries were eliminated?

In making his analysis, Graeber highlights the inverse relationship between the social utility of work and its financial recompense, in a move reminiscent of feminist economic critiques of the unpaid or underpaid work of women in health, education, and care work. The most essential workers – i.e., those who do jobs without which society could not function – are generally underpaid and under-respected (with the notorious exception of doctors). In contrast, many of the “bullshit jobs” Graeber describes are well-compensated. This phenomenon could certainly be read in light of Veblen’s analysis that inessential workers are luxurious expenses designed to prop up the reputation of their employers, corporations, or clients.

Graeber attributes this state of affairs to a still more disturbing explanation – class division to maintain the power structure of finance capitalism:

Real, productive workers are relentlessly squeezed and exploited. The remainder are divided between a terrorized stratum of the universally reviled unemployed and a larger stratum who are basically paid to do nothing, in positions designed to make them identify with the perspectives and sensibilities of the ruling class (managers, administrators, etc.).  

This account is reminiscent of that of philosopher Iris Young, who noted a “professional class,” i.e. those who benefit from the exploitation of the working class and yet are not a part of the capitalist class.  According to this part of the theory, bullshit jobs would function as a buffer between the capitalist and the working classes.

While many who belong to this “bullshit job” class could be considered privileged relative to most essential workers (always saving the exception of doctors), the existence of bullshit jobs points to a spiritual malaise that Graeber discusses in his text: “How can one even begin to speak of dignity in labour when one secretly feels one’s job should not exist?”

While Graeber and others point to power structures as the root cause of “bullshit jobs,” he, like Marx, also identifies an ideological component that justifies them culturally. The cult of work for work’s sake is one such cultural idea, which Graeber likewise traces back to social power structures:

“The ruling class has figured out that a happy and productive population with free time on their hands is a mortal danger. (Think of what started to happen when this even began to be approximated in the sixties.) And, on the other hand, the feeling that work is a moral value in itself, and that anyone not willing to submit themselves to some kind of intense work discipline for most of their waking hours deserves nothing, is extraordinarily convenient for them.” (Graeber, page xviii).

While Graeber’s analysis of “bullshit jobs” deserves further scrutiny, this lens provides a deep look at the distribution of power, labour, capital, leisure, and prestige in contemporary economies. It strongly suggests that nineteenth-century observations on capitalism, classism, and consumerism continue to be relevant in theorizing and strategizing solutions to contemporary inequality and to the problem of alienated labor.

The Moral Consequences of Protecting American Jobs

One of President-elect Donald Trump’s key campaign promises was to stop companies from shipping American jobs overseas. Since his election in November, he has already claimed credit for making progress on this promise, including for stopping Carrier from moving jobs in Indiana to Mexico. More recently, Ford announced that it had cancelled plans to build a new car manufacturing facility in Mexico, and a January 3 New York Times article suggests that Ford’s decision was partially a response to Trump’s plans on trade policy.
