
Virtual Work and the Ethics of Outsourcing


Like a lot of people over the past two years, I’ve been conducting most of my work virtually. Interactions with colleagues, researchers, and other people I’ve talked to have taken place almost exclusively via Zoom, and I even have some colleagues I’ve yet to meet in person. There are pros and cons to the arrangement, and much has been written about how to make the most out of virtual working.

A recent event involving Canadian outlets of the restaurant chain Freshii, however, has raised some ethical questions about a certain kind of virtual working arrangement, namely the use of virtual cashiers called “Percy.” Here’s how it works: instead of an in-the-flesh cashier to help you with your purchase, a screen shows you a person working remotely, ostensibly adding a personal touch to what might otherwise feel like an impersonal dining experience. The company that created Percy explains their business model as follows:

Unlike a kiosk or a pre-ordering app, which removes human jobs entirely, Percy allows for the face-to-face customer experience, that restaurant owners and operators want to provide their guests, by mobilizing a global and eager workforce.

It is exactly this “global and eager workforce” that has landed Freshii in hot water: it has recently been reported that Freshii is using workers who are living in Nicaragua and are paid a mere $3.75 an hour. In Canada, several ministers and labor critics have harshly criticized the practice, with some calling for new legislation to prevent other companies from doing the same thing.

Of course, outsourcing is nothing new: for years, companies have hired overseas contractors to do work that can be done remotely, and at a fraction of the cost of domestic workers. At least in Canada, companies are not obligated to pay outsourced employees a wage that meets the minimum standards of Canadian wage laws; indeed, the company that produces Percy has maintained that they are not doing anything illegal.

There are many worries one could have about the practice of outsourcing in general, primary among them: that it takes away job opportunities from domestic employees, and that it treats foreign employees unfairly by paying them below minimum wage (at least by the standards of the country where the business is located).

There are also some arguments in favor of the practice: in an op-ed written in response to the controversy, the argument is made that while $3.75 is very little to those living in Canada and the U.S., it is more significant for many people living in Nicaragua. What’s more, with automation risking many jobs regardless, wouldn’t it be better to at least pay someone for this work, as opposed to just giving it to a robot? Of course, this argument risks presenting a false dichotomy – one could, after all, choose to pay workers in Nicaragua a fair wage by Canadian or U.S. standards. But the point is still that such jobs provide income for people who need it.

If arguments about outsourcing are old news, then why all the new outrage? There does seem to be something particularly odd about the virtual cashier. Is it simply that we don’t want to be faced with a controversial issue that we know exists, but would rather ignore, or is there something more going on?

I think discomfort is definitely part of the problem – it is easier to ignore potentially problematic business practices when we are not staring them in the virtual face. But there is perhaps an additional part of the explanation, one that raises metaphysical questions about the nature of virtual work: when you work virtually, where are you?

There is a sense in which the answer to this question is obvious: you are wherever your physical body is. If I’m working remotely and on a Zoom call, I would be in Toronto (seeing as that’s where I live), while my colleagues would be in whatever province or country they happen to be physically present in at the time.

When we are all occupying the same Zoom call, however, we are also in another sense in the same space. Consider the following. In this time of transition between COVID and (hopefully) post-COVID times, many in-person events have become hybrid affairs: some people will attend in-person, and some people will appear virtually on a screen. For instance, many conferences are being held in hybrid formats, as are government hearings, trials, etc.

Let’s say that I give a presentation at such a conference, that I’m one of these virtual attendees, and that I participate while sitting in the comfort of my own apartment. I am physically located in one place, but also attending the conference: I might not be able to be there in person, but there’s a sense in which I am still there, if only virtually.

It’s this virtual there-ness that I think makes a case like Percy feel more troubling. Although a Canadian cashier who worked at Freshii would occupy the physical space of a Freshii restaurant in Canada, a virtual cashier would do much of the same work, interact with the same customers, and see and hear most of the same things. In some sense, they are occupying the same space: the only relevant thing that differentiates them from their local counterpart is that they are not occupying it physically.

What virtual work has taught us, though, is that one’s physical presence really isn’t an important factor in a lot of jobs (excluding jobs that require physical labor, in-person contact, and work that is location-specific, of course). If the work of a Freshii cashier does not require physical presence, then it hardly seems fair that one be compensated at a much lower rate than one’s colleagues for simply not being there. After all, if two employees were physically in the same space, working the same job, we would think they should be compensated the same. Why, then, should it matter if one is there physically, and the other virtually?

Again, this kind of unfairness is present in many different kinds of outsourced work, and whether physical distance has ever been a justification for different rates of pay is up for debate. But with physical presence feeling less and less necessary for so many jobs, new working possibilities call into question the ethics of past practices.

“Severance,” Identity and Work


The following piece discusses the series Severance. I avoid specific plot details. But if you want to go into the show blind, stop reading now.

Severance follows a group of employees at Lumon Industries, a biotech company of unspecified purpose. The main characters have all undergone a surgery before starting this job. Referred to as the “severance” procedure, this surgery causes a split in the patient’s personality. After surgery, patients awaken to find that while they retain factual memories, they have no autobiographical memories – one character cannot remember her name or the color of her mother’s eyes but remembers that Delaware is a state.

However, the severance procedure does not cause irreversible amnesia. Rather, it creates two distinct aspects of one’s personality. One, called the outie, is the individual who was hired by Lumon and agreed to the procedure. However, when she goes to work, the outie loses consciousness and another aspect, the innie, awakens. The innie has no shared memories with the outie. She comes to awareness at the start of each shift, the last thing she remembers being walking to the exit the previous day. Her life is an uninterrupted sequence of days at the office and nothing else.

Before analyzing the severance procedure closer, let us take a few moments to consider some trends about work. As of 2017, 2.6 million people in the U.S. worked on-call, stopping and starting at a moment’s notice. Our smartphones leave us constantly vulnerable to emails or phone calls that pull us out of our personal lives. The pandemic and the corresponding need for remote, at-home work only accelerated the blurring of lines between our personal lives and spaces, and our work lives. For instance, as workplaces have gone digital, people have begun creating “Zoom corners.” Although seemingly innocuous, practices like these involve ceding control of some of our personal space to be more appealing to our employers and co-workers.

Concerns like these lead Elizabeth Anderson to argue in Private Government that workplaces have become governments. Corporate policies control our behavior when on the clock, and our personal activities, which can be easily tracked online, may be subject to the scrutiny of our employers. Unlike with public, democratic institutions, where we can shape policy by voting, the vast majority of workers have no say in how their workplace is run. Hence this control is totalitarian. Further, “low skilled” and low-wage workers – because they are deemed more replaceable – are even more subject to their employer’s whims. This increased vulnerability to corporate governance carries with it many negative consequences, on top of those already associated with low income.

Some consequences may be due to a phenomenon Karl Marx called alienation. When working you give yourself up to others. You are told what to produce and how to produce it. You hand control of yourself over to someone or something else. Further, what you do while on the clock significantly affects what you want to do for leisure; even if you loved gardening, surely you would do something else to relax if your job was landscaping. When our work increasingly bleeds into our personal lives, our lives cease to be our own.

So, we can see why the severance procedure would have appeal. It promises more than just balance between work and life: it makes it impossible for work to interfere with your personal life. Your boss cannot email you with questions about your work on the weekend, and you cannot be asked to take a project home, because you literally have no recollection of your time in the office. To ensure that you will always leave your work at the door may sound like a dream to many.

Further, one might argue that the severance procedure is just an exercise of autonomy. The person agreeing to work at Lumon agrees to get the procedure done and we should not interfere with this choice. At best, it’s like wearing a uniform or following a code of conduct; it’s just a condition of employment which one can reject by quitting. At worst, it’s comparable to our reactions to “elective disability”; we see someone choosing a medical procedure that makes us uncomfortable, but our discomfort does not imply someone should not have the choice. We must not interfere with people’s ability to make choices that only affect themselves, and the severance procedure is such a choice.

Yet the show itself presents the severance procedure as morally dubious. Background TV programs show talking heads debating it, activists known as the “Whole Mind Collective” are campaigning to outlaw severance, and when others learn that the main character, Mark, is severed, they are visibly uncomfortable and uncertain what to say. So, what is the argument against it?

To explain what is objectionable about the severance procedure, we need to consider what makes us who we are. This is an issue referred to in philosophy as “personal identity.” In some sense, the innie and the outie are two parts of the same whole. No new person is born because of the surgery and the two exist within the same human organism; they share the same body and the same brain.

However, it is not immediately obvious that people are simply organisms. A common view is that a significant portion, if not all, of our identity deals with psychological factors like our memories. To demonstrate this, consider a case that Derek Parfit presented in Reasons and Persons. He refers to this case as the Psychological Spectrum. It goes roughly as follows:

Imagine that a nefarious surgeon installed a microchip in my brain. This microchip is connected to several buttons. As the surgeon presses each button, a portion of my memories changes to Napoleon Bonaparte’s memories. When the surgeon pushes the last button, I would have all of, and only, Napoleon’s memories.

What can we say about this case? It seems that, after the surgeon presses the last button, Nick no longer exists. It’s unclear exactly when I stopped existing – after a few buttons, there seems to be a kind of weird Nick-Napoleon hybrid, who gradually goes full Napoleon. Nonetheless, even though Nick the organism survives, Nick the person does not.

And this allows us to see the full scope of the objection to the severance procedure. The choice is not just self-regarding. When one gets severed, they are arguably creating a new person. A person whose life is spent utterly alienated. The innie spends her days performing the tasks demanded of her by management. Her entire life is her work. And what’s more troubling is that this is the only way she can exist – any attempts to leave will merely result in the outie taking over, having no idea what happened at work.

This reveals the true horror of what Severance presents to us. The protagonists have an escape from increasing corporate intrusion into their personal lives. But this release comes at a price. They must wholly sacrifice a third of their lives. For eight hours a day, they no longer exist. And in that time, a different person lives a life under the thumb of a totalitarian government she has no bargaining power against.

The world of Severance is one without a good move for the worker. She is personally subject to private government which threatens to consume her whole life, or she severs her work and personal selves. Either way, her employer wins.

The Short- and Long-Term Ethical Issues of Working from Home


The COVID-19 pandemic has resulted in a shift of working habits, as almost half of the U.S. workforce is now able to work from home. This has led many to ask whether it might be the beginning of a more permanent change in working habits. Such a move carries significant ethically salient benefits and drawbacks regarding urban development, climate change, mental health, and more.

Despite the apparent novelty of the idea of having many people permanently working from home, this was the norm for most people for most of human civilization. It was the industrial revolution, along with 18th- and 19th-century changes to travel, that created our need for a separate place of work. Two hundred years ago, most people in North America lived and worked on farms, and artisans making textiles and other goods largely worked from home. The steam engine allowed for a centralized location that could support efficient mass production. Even early industrial production still relied on the “putting-out system,” in which centralized factories would make goods and then subcontract the finishing work to people who worked from home. In other words, the concept of “going to work” every day is a relatively recent invention in human history.

This change had many far-reaching effects. The need to be close to work resulted in urbanization: in the United States, the share of the population living in urban areas jumped from 6% in 1800 to 40% in 1900. Urban development and infrastructure followed suit. Artisans who once worked for a price for their goods now worked for a wage for their time. Work that was once governed by sunlight became governed by the clock. Our political and social norms all changed as a result, in ways that affect us today. It’s no surprise, for instance, that during this time, as employees began working together in a common area, the first labor unions were formed. Returning to the working habits of our ancestors could have similarly profound effects that are difficult to imagine today; however, there are several morally salient factors that we can identify in a 21st-century context.

There are several moral advantages to having more people work from home rather than going to work every day. Working from home during COVID is obviously a move directed at minimizing the spread of the virus. However, permanently working from home also permanently reduces the risk of spreading other infections in the workplace, particularly if it involves less long-distance travel. Approximately 14 million workers in the United States are employed in occupations where exposure to disease or infection occurs at least once per week. Reducing physical interaction in the workplace, and thereby minimizing infections within it, can improve productivity.

In addition, fewer people going to work means less commuting. 135 million Americans commute to work, and avoiding the commute could save an employee thousands of dollars per year. The shift has secondary effects as well: less commuting means less wear and tear on public infrastructure like roads and highways, and less congestion in urban areas. This is helpful because new infrastructure projects are having a hard time keeping up with increases in traffic congestion. Such changes may also help with tackling climate change, since transportation accounts for 30% of U.S. greenhouse gas emissions.

On the other hand, it’s possible that working from home could be more harmful to the climate. Research from WSP UK shows that remote working in the UK may only be helpful in the summer. They found that environmental impacts could be higher in the winter due to the need to heat individual buildings instead of a single office building which can be more efficient. In other words, the effect on climate change may not be one-sided.

Working from home can also be less healthy. For example, the concept of the sick day is heavily intertwined with the idea of going to a workplace. The temptation may be to abolish the sick day, with the reasoning being that its whole point is to stay home and avoid making co-workers sick. However, even if one can work from home, our bodies need rest. Workplace experts have found that those who work from home tend to continue working while sick, which may lengthen recovery time, lead to burnout, and ultimately reduce productivity. It can also be unhealthy to develop an “always on” mentality where the line between work and home becomes blurred. According to a recent Monster survey, 51% of Americans admitted to experiencing burnout while working from home as the place of rest and relaxation merges with the place of work. This may increase the number of mental health problems among workers while simultaneously leaving them more physically isolated from their fellow workers.

Another potential downside centers on the employer-employee relationship. For example, working from home permanently allows employees to reside in areas where the cost of living is cheaper. This may mean salary reductions, since a business will now have a larger pool of potential employees to choose from and can thus offer lower, but still competitive, salaries in areas where the cost of living is cheaper. Facebook has already made moves in this direction. Job searches will become more competitive as a result, which could drive salaries down even further. At the same time, large offices will no longer be needed, and larger urban areas may see decreased economic activity and a drop in the value of office buildings.

The shift also means that an employer is able to infringe on the privacy of home life. Employers are now tracking employees at home to ensure productivity, with software able to track every word typed, GPS location, and even to use a computer’s camera. In some cases, these features can be enabled without an employee even knowing they are being monitored. This will only exacerbate a long-standing ethical concern over privacy in the 21st century.

Finally, it is morally important to recognize that shifting to working from home on a large scale could have disproportionate effects on different communities and different job sectors. The service sector may struggle in areas that no longer see heavy workplace foot traffic. Also, plumbers and electricians cannot work from home, so certain industries literally cannot move in that direction completely. Service industries are often segregated by race and gender, ensuring that the opportunities enjoyed by those working from home will not be equitably shared. It also means that disruptions in these industries caused by the shifting working habits of others could be disproportionately felt.

A permanent shift towards remote working habits carries certain specific moral concerns that will need to be worked out. Whether it will lead to more productivity, less productivity, a greater carbon footprint, a smaller carbon footprint, and so on, will depend on the specific means used and on the new work habits we adopt over the course of time as new laws, policies, and regulations are formulated, tested, and reformed. In the long term, however, the most significant ethical impacts could be the radical social changes it may cause. The shift from working from home to working at work dramatically changed society in less than a century, and the shift back may do the same.

Democratic Equality and Free Speech in the Workplace


Numerous news outlets have by now reported on the contentious memo published by former Google employee, James Damore, in which he criticized his former employer’s efforts to increase diversity in their workforce. The memo, entitled “Google’s Ideological Echo Chamber: How bias clouds our thinking about diversity and inclusion,” claims that Google’s diversity efforts reflect a left-leaning political bias that has repressed open and critical discussion on the fairness and effectiveness of these efforts. Moreover, the memo surmises that the unequal representation of men and women in the tech business is due to natural differences in the distribution of personality traits between men and women, rather than sexism.


The Dangers of Ethical Fading in the Workplace


Suppose your boss asks you to fudge certain numbers on a business report during the same week the company is conducting layoffs. Is this an ethical dilemma, a financial dilemma, or, seeing as it will affect your family, a social dilemma? Likely, all three are true, and more layers exist beneath the surface. Are you in debt from taking a luxurious vacation? Do you have children in college? Are you hoping to get a promotion soon? Research shows that navigating through these many layers makes it increasingly difficult to see the ethical dilemma. This describes “ethical fading,” the process by which individuals become unable to see the ethical dimensions of a situation due to overriding factors.

Ann Tenbrunsel first described ethical fading in 2004 as “the process by which the moral colors of an ethical decision fade into bleached hues that are void of moral implications.” Since moral decisions are made in the same parts of the brain that process emotions, they are made almost automatically and instinctively, and are therefore prone to self-deception. Self-deception appears in the workplace when employees see an ethical dilemma as primarily a financial or personal dilemma instead. Seeing a dilemma, such as polishing numbers in a report, as a choice that could affect personal financial stability allows an individual to make unethical decisions while still thinking of themselves as an ethical person. In fact, ethical fading eliminates the awareness that one is making an unethical decision in the first place.

This phenomenon can manifest in a variety of ways, making ethical fading a difficult problem to tackle. Sometimes an individual replaces the idea of an ethical dilemma with a financial or personal dilemma. Sometimes an individual is under so much pressure that an ethical dilemma passes by unseen. In other cases, individuals are exposed to ethical dilemmas so often that they become jaded.

Tenbrunsel argues that ethics training in companies is futile if ethical fading is occurring. No amount of training can teach an individual how to navigate an ethical dilemma if one doesn’t see the dilemma in the first place. One recent case study of ethical fading comes from college administration. In 2009, the University of Illinois was found to have a hidden admissions process that pushed through applicants with significant ties to politicians, donors, and university officials. Since the ethical dilemma was lost in the culture and organizational structure of the university’s administration, this case has been deemed an example of ethical fading. Michael N. Bastedo, director of the University of Michigan’s Center for the Study of Higher and Postsecondary Education, stated that a growing number of college administrations are “starting to see ethical problems as system problems.”

As in other examples of ethical fading, budget cuts were pressuring the administration to reach out to donors more, and the ethical problem of giving preferential treatment to certain applicants was forgotten. Following Tenbrunsel’s argument, this problem wouldn’t be remedied with ethics training unless the hidden applications system was fixed as well. Since those inside the administration didn’t see the hidden application system as an ethical problem in the first place, ethics training wouldn’t prompt employees to come forward and fix it.

A similar phenomenon has occurred in the military. In 2015, a study by Army War College professors Leonard Wong and Stephen J. Gerras found that lying is rampant in the military, likely caused by the immense physical and emotional strain that soldiers experience. Ethical fading in this case means that Army officers have become “ethically numb” to the consequences of lying. When the professors pressed their participants on how they manage to juggle their many duties, the officers reported classic sugar-coating phrases often heard in the business sector: in order to satisfy their many duties and requirements, Army officers routinely resort to deception in the form of “hand-waving, fudging, massaging, and checking the box.” This case reveals that physical and emotional strain, and not only financial strain, can cause ethical fading, and that sectors besides business are prone to it.

Tenbrunsel’s account of self-deception provides yet another obstacle for business ethics. If unethical behavior is caused not by a lack of information and training but by the human tendency toward self-deception, no amount of ethics seminars will discourage it. As a start, ethics training should include information on how to spot ethical fading, how to overcome prejudices, and how to handle emotional strain in the workplace. The concept of ethical fading also helps address the fact that unethical behavior is not limited to unethical people. Tenbrunsel points out that everyone practices self-deception at some point, and recognizing this may be the start to addressing unethical behavior in the workplace properly. Treating unethical behavior as a human tendency will hopefully start to fill the gaps in current ethics training programs. If not, ethical dilemmas will continue to be sugar-coated and slip through the cracks.

Mike Pence’s Marital Practices: Workplace Accommodation or Discrimination?

On March 28th, a Washington Post profile on Mike Pence’s wife, Karen Pence, emphasized the closeness in their marriage by reiterating a controversial policy of theirs: Mike Pence does not eat alone with any woman besides Karen, nor does he attend any event that has alcohol present without her. While some laud this commitment to honoring and protecting his marriage, others have voiced concerns about the practicality of following such a rule and fairly performing the roles of his professional position.
