
How to Spot an Anti-Semitic Trope, and What to Do About It

photograph of Chicago Tribune building

On July 22, 2020, the Chicago Tribune’s lead columnist, John Kass, published a piece entitled “Something grows in the big cities run by Democrats: an overwhelming sense of lawlessness.” In it he claimed that the billionaire George Soros, who is Jewish, is responsible for clandestinely remaking the justice system by spending “millions of dollars to help elect social justice warriors as prosecutors.” The column provoked an enormous backlash, with the executive board of the Tribune reporters’ union, the Chicago Tribune Guild, issuing a letter decrying the column as an “odious, anti-Semitic conspiracy theory.” Kass was subsequently demoted from his perch on page 2 of the newspaper, which he had occupied for 23 years. Nevertheless, he defiantly penned a response in which he declared himself a victim of “cancel culture.”

This ugly episode raises a host of interesting philosophical issues, chief among which are the following: how can we know what counts as an anti-Semitic trope? And what should be done with those who peddle them? I will consider these questions in turn.

Modern anti-Semitism is a conspiracy theory with roots in the Tsarist forgery, “The Protocols of the Elders of Zion.” This document purported to be the minutes of a late-19th-century meeting of Jewish leaders, the titular “Elders,” in which they conspire to conquer the world through such means as control of the economy and the press, and subversion of the morals of the non-Jewish world. Thus, the anti-Semitic conspiracy theory posits a clandestine Jewish scheme to take control of society’s institutions, such as the stock market, the legal system, the education system, and so on, with the ultimate aim of Jewish world rule.

What, then, is an anti-Semitic trope? I suggest that one important kind of anti-Semitic trope is a narrative, or fragment of a narrative, about attempts by powerful Jewish figures to control, subvert, or alter important social institutions, and in particular economic institutions. That narrative can take many, sometimes contradictory forms; for example, the Nazis accused Jews of both predatory capitalism and Bolshevism. Thus, when Kass writes that Soros “remakes the justice system in urban America, flying under the radar,” there is an unmistakable suggestion of the kind of secret effort to alter and control institutions that is characteristic of anti-Semitic thinking in general.

Suppose there is a group of powerful persons, most of whom happen to be Jewish, that actually does seek to control or subvert some important institution. An example might be pro-Israel groups’ efforts to influence U.S. foreign policy, a phenomenon controversially documented by John Mearsheimer and Stephen Walt in their book, The Israel Lobby and U.S. Foreign Policy. Would the rough account laid out in the last paragraph make any criticism of these groups’ efforts an anti-Semitic trope? The worry implicit in this question is that labeling such criticisms as “anti-Semitic” would prevent legitimate criticisms of the “Israel lobby” from being made.

Before explaining my response to this worry, it is worth noting other ways of responding to it. We might try to distinguish legitimate criticisms from anti-Semitic tropes by insisting that the latter must be false narratives, i.e., that they must fail to refer to any actual conspiracy or nefarious effort. On this view, an anti-Semitic trope is, as such, a kind of slander. However, this would still leave justified, but false, criticisms vulnerable to being labeled anti-Semitic tropes. Suppose Mearsheimer and Walt were wrong about the existence of an “Israel lobby.” Many would still want to deny that their book traffics in anti-Semitic tropes. On the other hand, suppose that they are correct. One can still imagine an actual anti-Semite condemning the Israel lobby using anti-Semitic tropes.

Perhaps instead we should draw a distinction between conspiracies that are composed of Jews and Jewish conspiracies. A Jewish conspiracy is an effort to subvert or control some important institution on behalf of the Jews, or in the perceived interests of the Jews as a group. Only statements that are meant to refer to a Jewish conspiracy in this sense are trafficking in the kind of anti-Semitic trope defined above. This seems like a promising distinction, but it suggests that in order to know whether some statement expresses this anti-Semitic trope, we need to know what the speaker means by it. Did Kass mean to posit some Jewish conspiracy, or just a conspiracy by someone who happens to be Jewish?

Every utterance has both a literal and a use-meaning (philosophers refer to these as an utterance’s “locution” and “illocution,” respectively). The literal meaning is the statement’s “propositional content”; it is what the speaker says. The use-meaning is the intention of the speaker in making the utterance; it is what the speaker means. For example, if someone says “I stand for the national anthem,” the literal meaning of the utterance is that they stand when the national anthem plays. However, the speaker may intend to convey that she is patriotic.

We can use this distinction and the distinction between a Jewish conspiracy and a conspiracy by Jews to develop an account of a certain kind of anti-Semitic trope. On this account, a statement expresses this kind of anti-Semitic trope only if it purports to refer to some conspiracy or effort by Jewish persons to control or subvert some institution, and the speaker means to refer to a Jewish conspiracy or effort, and not just a conspiracy or effort by Jews. Anti-Semitic tropes, then, are in this case the products of both the literal and the use-meaning of statements.

Furthermore, I propose that the ethical status of utterances that fulfill the content requirement for being an anti-Semitic trope is critically dependent upon their use-meaning. A person can non-culpably utter a statement with the same content as an anti-Semitic trope if she did not intend to suggest a Jewish conspiracy and could not reasonably have foreseen that it was anti-Semitic, or if she took adequate, good-faith measures to make it understood that she was not intending to suggest a Jewish conspiracy. As with other kinds of wrongdoing, culpability increases with the degree to which the literal anti-Semitism of the utterance was known to or intended by the speaker. Nevertheless, a harsher, “strict liability” regime for utterances with the same content as anti-Semitic tropes would unduly restrict political discourse, such as criticism of Jewish donors to progressive causes.

That said, Kass deserves the criticism he has received. On the one hand, the fact that conservatives have argued that Kass did not use anti-Semitic tropes on the grounds that he did not intend to posit a Jewish conspiracy supports my contention that anti-Semitic tropes are products of both literal and use-meaning. On the other hand, one could reasonably believe that Kass did intend to posit a Jewish conspiracy. Kass must be aware that conspiracy theories specifically revolving around George Soros are circulated widely by open anti-Semites, and he seems to place undeserved emphasis on the contribution of this particular wealthy Jewish businessman to progressive political causes in a political system in which multimillion-dollar campaign contributions are not at all infrequent.

Given these facts, the best we can say for Kass is that he was negligent in his use of language with the same content as anti-Semitic tropes: he should have known that claiming George Soros is responsible for clandestine funding of progressive causes dovetails with anti-Semitic propaganda, and he should have done something to allay concerns that he intended to suggest a Jewish conspiracy. Moreover, although I believe that those who decry “cancel culture” have legitimate concerns, Kass’s claim that he is a victim of it is a good example of powerful people crying “censorship!” when they encounter criticism. If strong criticism is deserved, a person in a free society must bear its costs.

Moral Distinctions between Crisis Capitalizers

photograph of hands exchanging a one hundred dollar banknote

After announcing a fiscal quarter that exceeded expectations, Apple CEO Tim Cook unabashedly stated that despite “uncertain times, this performance is a testament to the important role our products play in our customers’ lives and to Apple’s relentless innovation.” Cook’s statement comes amidst waves of criticism of Big Tech’s apparent imperviousness to the pandemic-driven economic crisis. News of unprecedented profits and rising share prices is often juxtaposed with the financial suffering of the majority of Americans and businesses, highlighting the giant disparity between them and a few lucky corporations and billionaires. While some activists and politicians have proclaimed the gross immorality of those continuing to profit during the pandemic, there has been little discussion of the moral distinctions among these so-called “crisis capitalizers.” Is there something inherently immoral about profiting from the pandemic? And is there a significant moral distinction between profiting off of a crisis and profiting during it?

The sheer inequality of the current economy might be enough to argue that anyone still profiting during this time has an obligation to help those suffering. Since March, 51 million Americans have filed for unemployment. The United States just reported its worst economic drop ever recorded in one quarter, with US GDP collapsing by 32.9%. To make matters worse, over 40 million Americans are at risk of eviction after relief programs expired on July 31 with no economic safety net in place. Meanwhile, the profits of businesses and individuals in the tech industry have been soaring. The Guardian reported that Amazon’s profit over the past quarter was $5.2 billion, strikingly higher than this time last year. Both Facebook’s and Apple’s quarterly revenue have also exceeded projections. This stark economic inequality between the majority of Americans and the “1%” might be immoral in itself. When one considers that these very same corporations have notoriously avoided paying taxes in recent years, their general failure to help alleviate the economic crisis fueled by the pandemic is troubling. As for the billionaires at the helm of these tech corporations, only about 1 in 10 billionaires worldwide have verifiably contributed monetary donations to COVID-19 relief efforts. The moral obligation for wealthy individuals to act is all the more pressing given the underwhelming response to both the health and the economic crisis by the United States government.

However, others have argued that profiting during the pandemic is not immoral, and should in fact be celebrated as a sign of adaptation and resilience. In an article on Medium, Bloomberg Beta head Roy Bahat argues that it is important to acknowledge the difference between taking advantage of the pandemic and adapting to it. He considers it “okay, even noble for businesses to thrive right now,” since companies are “helping to keep people employed.” Bahat has a point, especially given that the US is facing one of its worst unemployment crises in history.

Is there something to be said for businesses’ ability to adapt? Many independent entrepreneurs have adapted to the crisis, such as those on the popular small-business website Etsy, which is projected to double its quarterly income. The success of Etsy has also meant the success of “anyone with creativity and 20 cents.” Additionally, the profits of many of the thriving corporations are the product of their sales revenue, reflecting consumption of their products. One might argue that if we all stopped buying products from Amazon and Apple, and stopped using Facebook, the revenue of these organizations would not have increased during the pandemic. If we accept that current corporate profits are simply a reflection of consumer demand, it appears as though these companies have adapted to fill an economic niche. In fact, some might argue these very corporations are actually filling a crucial role in ensuring access to necessities during a crisis.

While the morality of profiting during the pandemic might be considered up for debate, can the same be said for profiting off of the pandemic? Profiting off of a crisis can be considered immoral from many different ethical perspectives. While profiting during the pandemic clearly requires participation in an inequitable economy, profiting off of the pandemic could be considered more sinister in both its proximity to the crisis itself and its willingness to use suffering for personal gain. If one believes healthcare to be a human right, the entire concept of private industry profiting off of medical assistance is potentially immoral. Those who identify strongly with Kantian ethics would be most disturbed by instances during the pandemic where access to goods and services is restricted in order to exploit people’s need to survive, treating them as a means to an end. Profiting off of a crisis is also potentially immoral because it might showcase an inherent selfishness that reflects a corrupted internal character. Lastly, profiting from the pandemic might even be considered wrong on consequentialist grounds if withholding goods and services necessary for survival leads to further sickness and economic suffering. This is especially true if individuals are unable to afford or access healthcare to treat or prevent COVID, or if an individual or company attempts to profit off of phony healthcare products that arguably endanger the general public.

Alternatively, some might argue that though the medical and pharmaceutical industry is directly profiting from the pandemic, it is more consequentially balanced than its critics paint it to be. Take the widespread demand for masks, for instance. Masks have been prescribed by public health officials to slow the spread of coronavirus, yet masks remain a for-profit industry. While some politicians, like Senator Bernie Sanders, have argued masks should be subsidized and free for all, there remains a giant market for masks, both from large medical companies and from small independent businesses using their skills to make decorative or high-end masks. From a utilitarian perspective, the market for medical supplies to combat COVID is promoting good, as widespread mask use has been projected to save lives. Cheap medical masks are widely accessible, protecting those who wear them and profiting the companies that sell them. Higher-end customizable ones arguably encourage more people to wear masks and give satisfaction both to those selling them and those buying them. Similarly, pharmaceutical companies profiting off of test kits, and potentially vaccines in the future, could be argued to be a net positive (from a consequentialist perspective) if these products are widely available to the general public. Perhaps it is especially important that private industry take a role in developing a vaccine for COVID-19, considering that most medical research in the US is already privately funded.

Perhaps even more complicated are those who are less clearly profiting directly off of the pandemic. Large department stores, like Walmart and Target, sell PPE and important cleaning products, but have also reportedly seen an uptick in sales of items related to quarantine and working from home. For Target, the sales bump in April actually exceeded its typical holiday profits. Whether to place such organizations and their profits in the “profit off of” or “profit during” category depends largely on what types of products we consider necessities, how those goods get marketed to consumers, and who benefits.

Whether or not we believe profiting during the pandemic is immoral depends largely on whether we interpret these profits to reflect adaptation or exploitation. And that perception likely rests on whether we believe the sale of those products tends to produce the greatest good for the greatest number of people. Our answers to these questions go a long way in determining whether we truly believe there exists a moral distinction between these “crisis capitalizers.”

Back to School: America’s Uncontrolled and Unethical Experiment

photograph of middle school science classroom

As of this writing, several school districts in the United States have already reopened at some level, but most of the nation’s 40 million school-age children are scheduled to return sometime from mid to late August. One major argument for reopening is that it lets parents return to work (assuming there is a job to go to) and help rebuild America’s faltering economy. The American Academy of Pediatrics has also supported this back-to-school movement, though its support centers on the emotional and social needs of students, which can be better met by returning to school.

There is, however, one argument against going back to school that few consider: going back to school amid a pandemic is America’s uncontrolled experiment using our children as the sample. Even the nation’s top epidemiologist, Anthony Fauci, told teachers in a recent interview: “You’ll be part of the experiment in reopening schools.” This experiment is neither scientific nor ethical.

We scientists live in a world of unknowns, and we traverse that world through the use of the scientific method and research ethics. The controlled scientific experiment goes like this: (1) a research question is formulated, in which the researcher makes the best “guess” as to what to expect from the data to be collected, based on what is already known about the topic; (2) a sample of people is identified to participate in the experiment with as little risk to them as possible; (3) variables are identified and, as far as reasonably possible, controlled for; (4) after any risks are considered and consent to participate is obtained from the sample members, the experiment is run; (5) the data are collected, (6) analyzed, and (7) conclusions are drawn. Through this controlled and ethical study, we hope to find some answers that can be used to solve the problem at hand. Of utmost importance, however, is that these steps be accomplished within the boundaries of research ethics. In the field of healthcare, these are typically four in number.

The four basic ethical considerations when doing research in public health and healthcare in general are (1) autonomy, the power to give informed, uncoerced, freely given consent to participate in the research; (2) justice, assuring a fair distribution of risks, benefits, and resources over participants; (3) beneficence, acting for the benefit of participants and maximizing the good the research does; and (4) nonmaleficence, doing no harm and keeping participants out of harmful situations. These ethical considerations came about after WWII, when the atrocities of the Nazi regime’s uncontrolled experiments on human subjects were discovered. They now guide the design of ethical research. By carefully adhering to the scientific method and the ethical principles of research, controlled experiments can be carried out.

Unfortunately, none of these guidelines are being met in the uncontrolled experiment America is about to run on its children when they go back to school this fall. The assumption is that getting students back in school will help solve the economic problem as well as the social and psychological problems the nation’s children are facing. These are important problems, and there are ethical ways of addressing them; the uncontrolled experiment on which America is embarking is not one of them.

If we compare this uncontrolled experiment with an ethically sound controlled experiment, we can see the many pitfalls: pitfalls that may have dire consequences for all involved.

First of all, there is no research question. There is only a hope that things go OK and not too many get hurt. We don’t have enough information about the virus and its effect on children even to formulate a research question. What are we looking for and hoping to find? In essence, we are saying, “Let’s reopen schools, get the economy going, and help meet students’ social and emotional needs,” implying that this is the only avenue open to us to accomplish these goals.

Secondly, variables such as the age, race, and gender of students, teachers, school staff, and bus drivers, along with their underlying medical conditions, are difficult, if not impossible, to control for in the school environment. Even when good-faith attempts are made to control for some of these variables, several ethical problems emerge.

One example is school transportation. The average school bus capacity is 56; if social distancing without masking is practiced, only 6 students can ride the bus; if masking alone is practiced, only 28 can ride. It costs districts about $1,000 per pupil per year to transport students to and from school. The additional costs of adding routes and making more trips to get students to school under either masking or social distancing will strain precious resources that could instead be spent on helping students gain the ability to use remote learning.
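To make the scale of the problem concrete, here is a minimal back-of-the-envelope sketch in Python using the capacity figures above. The assumption that each current route runs at full capacity is mine, for illustration only; real routes vary.

```python
# Back-of-the-envelope sketch of the bus-capacity arithmetic above.
# Assumption (for illustration): every current route runs at the full
# average capacity of 56 students, so the trip multiplier is just the
# ratio of full capacity to reduced capacity.

FULL_CAPACITY = 56  # average school bus capacity
REDUCED = {
    "social distancing, no masks": 6,
    "masking alone": 28,
}

for policy, capacity in REDUCED.items():
    multiplier = FULL_CAPACITY / capacity
    print(f"{policy}: each route needs about {multiplier:.1f}x as many trips")

# Output:
# social distancing, no masks: each route needs about 9.3x as many trips
# masking alone: each route needs about 2.0x as many trips
```

Even under the milder masking-only rule, a district would need roughly double the trips (or buses and drivers) to move the same students, which is where the strain on per-pupil transportation budgets comes from.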

Additionally, many states have regulations mandating that only students who live beyond a one-mile radius of the school they attend may ride a bus. Others must walk, ride their bikes, or use public or private transportation. Assuming the family can afford public transportation or has a car, lives in a neighborhood that is safe for walking, and the weather cooperates, these options work. However, marginalized children who live within this one-mile radius (and are thus not candidates for school transportation) may be further marginalized: kept from the emotional and social contacts they need and potentially missing vital instructional activities. These concerns are further complicated when we think about special-needs students, whose medical vulnerabilities might put them at risk in these new school environments.

Thirdly, the sample used (children) is a protected one. The Office for Human Research Protections (OHRP) identifies several protected populations that deserve special consideration when they are involved in research using humans. Pregnant women, prisoners, those with diminished cognitive abilities, and children are a few examples. Extra precautions must be taken to assure these subjects are not simply being used with little protection from specific harms that may come. Children are not mature enough to make their own decisions as to whether they want to participate in a research project; they seldom, if ever, are even allowed to make their own medical decisions. Children have no say in whether they want to go back to school amid a pandemic projected to have taken the lives of more than 180,000 in our nation by the end of August. We are sending this protected group back to school blindly, with few safety precautions. We also know that when schools were closed statewide from March through May, there was a temporal association with decreased COVID-19-related deaths in those states.

Fourthly, how will we keep the participants (children, faculty and staff, bus drivers) from harm? Masking and social distancing can be practiced at school; however, some age groups will be better at that than others. The benefits and risks involved are not spread evenly over the sample of students. And not only students are at risk; teachers are as well.

Education Week recently reported that as many as 1.5 million public school teachers are at a higher risk of contracting COVID-19 due to their underlying health problems. The research on school staff vulnerability is sparse, but, given the sheer numbers involved, many staff members in a building of several hundred children are at high risk as well. Children do get COVID-19, and with 5.5 million children suffering from asthma alone, this could be a disaster waiting to happen. When race is taken into account, African-American children are 2.5 times as likely to contract COVID-19 as Caucasian children, and American Indian and Hispanic children are 1.5 times as likely. Schools may be breeding grounds for transmitting the virus to these vulnerable populations. Children have more of the COVID-19 virus in their noses and throats than adults do; they may not get the disease as easily as adults, but they transmit it just as easily.

Do the benefits of returning to school (and there are many) outweigh the associated costs of spreading the disease?

There are many reasons other than academic ones for children to be in school. We know that at least 14 million children do not get enough to eat on a daily basis, and the burden varies by race: 30% of these children are Black and 25% are Hispanic, while less than 10% are Caucasian. Additionally, when children are home for extended periods of time with adults, the probability of child abuse increases. Yet during this summer, schools found a way to deliver lunches, and often breakfasts as well, to students in need of that service.

Some municipal police departments and county sheriffs have instituted “Drop By” programs, in which homes where abuse may be more likely to occur receive irregular visits to see how things are going and whether anyone needs anything. During these visits, law enforcement officers can get a feel for any evidence of domestic violence in a non-threatening and non-accusatory manner.

School attendance both mediates and moderates the potential problems of food insecurity and abuse. But, as the programs outlined above show, there are other ways to ameliorate these injustices to our children. A reallocation of dollars is needed, along with creative ways to supply the services that children and families need during this pandemic. Sending kids back to school under the current implementation is not the solution. The potential nonmonetary costs are not worth the benefits that may accrue by returning to school under the present conditions.

Eventually, we will have to come to terms with the outcomes of this uncontrolled experiment. Will we have learned that it was a bad idea? That there should have been more planning to ensure the safety and well-being of all at school? That we should have controlled for transportation safety? That dollars should have been reallocated for technology and given to those without it for remote learning? That home visits by school personnel to aid those having difficulty learning remotely would have been worth the money?

Is America prepared to deal with the outcomes of this uncontrolled experiment in which children are the sample? Neither science nor the ethics of research accepts the premise of “we’ll do it and then see what happens.” But uncontrolled experiments do just that, at the peril of their participants. America sits poised to conduct such a trial.

The Short- and Long-Term Ethical Issues of Working from Home

photograph of an empty office looking out over a city

The COVID-19 pandemic has resulted in a shift of working habits, as almost half of the U.S. workforce is now able to work from home. This has led many to ask whether it might be the beginning of a more permanent change in working habits. Such a move carries significant ethically salient benefits and drawbacks regarding urban development, climate change, mental health, and more.

Despite the apparent novelty of having many people permanently working from home, this was the norm for most people for most of human civilization. It was the industrial revolution and 18th- and 19th-century changes in transportation that created our need for a separate place of work. Two hundred years ago most people in North America lived and worked on farms, and artisans making textiles and other goods largely worked from home. The steam engine made possible centralized locations for efficient mass production. Even early industrial production still relied on the “putting-out system,” in which centralized factories would make goods and then subcontract the finishing work to people who worked from home. In other words, the concept of “going to work” every day is a relatively recent invention in human history.

This change had many far-reaching effects. The need to be close to work resulted in urbanization: in the United States, the share of the population living in urban areas jumped from 6% in 1800 to 40% in 1900. Urban development and infrastructure followed suit. Artisans who once worked for a price for their goods now worked for a wage for their time, and work that was once governed by sunlight became governed by the clock. Our political and social norms all changed as a result, in ways that affect us today. It’s no surprise, for instance, that the first labour unions were formed during this time, as employees began working together in a common area. Returning to the working habits of our ancestors could have similarly profound effects that are difficult to imagine today; however, there are several morally salient factors that we can identify in a 21st-century context.

There are several moral advantages to having more people work from home rather than go to work every day. Working from home during COVID is obviously a move directed at minimizing the spread of the virus. However, permanently working from home also permanently reduces the risk of spreading other infections in the workplace, particularly if it involves less long-distance travel. Approximately 14 million workers in the United States are employed in occupations where exposure to disease or infection occurs at least once per week. Reducing physical interaction in the workplace, and thereby minimizing infections within it, can improve productivity.

In addition, fewer people going to work means less commuting. 135 million Americans commute to work, and avoiding the commute could save an employee up to thousands of dollars per year. The shift has secondary effects as well: less commuting means less wear and tear on public infrastructure like roads and highways, and less congestion in urban areas. This is helpful because new infrastructure projects are having a hard time keeping up with increases in traffic congestion. Such changes may also help with tackling climate change, since 30% of U.S. greenhouse gas emissions come from transportation.

On the other hand, it’s possible that working from home could be more harmful to the climate. Research from WSP UK shows that remote working in the UK may only be helpful in the summer. They found that environmental impacts could be higher in the winter due to the need to heat many individual homes instead of a single office building, which can be heated more efficiently. In other words, the effect on climate change may not be one-sided.

Working from home can also be less healthy. For example, the concept of the sick day is heavily intertwined with the idea of going to a workplace. The temptation may be to abolish the sick day, on the reasoning that its whole point is to stay home and avoid making co-workers sick. However, even if one can work from home, the body still needs rest. Workplace experts have found that those who work from home tend to continue working while sick, which may lengthen recovery time, lead to burnout, and ultimately reduce productivity. It can also be unhealthy to develop an “always on” mentality in which the line between work and home becomes blurred. According to a recent Monster survey, 51% of Americans admitted to experiencing burnout while working from home, as the place of rest and relaxation merges with the place of work. This may increase the number of mental health problems in the workforce while workers are simultaneously more physically isolated from their colleagues.

Another potential downside centers on the employer-employee relationship. Working from home permanently allows employees to reside in areas where the cost of living is lower. This may mean salary reductions, since a business will now have a larger pool of potential employees to choose from and can thus offer lower, but still locally competitive, salaries. Facebook has already made moves in this direction. Job searches will become more competitive, which could drive salaries down further. At the same time, large offices will no longer be needed, and larger urban areas may see decreased economic activity and a drop in the value of office buildings.

The shift also means that an employer is able to infringe on the privacy of home life. Employers are now tracking employees at home to ensure productivity, with software able to log every word typed, track GPS location, and even activate a computer’s camera. In some cases, these features can be enabled without an employee even knowing they are being monitored. This will only exacerbate a long-standing ethical concern over privacy in the 21st century.

Finally, it is morally important to recognize that shifting to working from home on a large scale could have disproportionate effects on different communities and different job sectors. The service sector may struggle in areas that no longer have heavy workplace congestion. And plumbers and electricians cannot work from home, so certain industries literally cannot move in that direction completely. Service industries are often segregated by race and gender, ensuring that the opportunities enjoyed by those working from home will not be equitably shared. It also means that disruptions in these industries caused by the shifting working habits of others could be disproportionately felt.

A permanent shift towards remote working carries specific moral concerns that will need to be worked out. Whether it will lead to more productivity or less, a greater carbon footprint or a smaller one, and so on, will depend on the specific means used and on the new work habits we adopt over time as new laws, policies, and regulations are formulated, tested, and reformed. In the long term, however, the most significant ethical impacts could be the radical social changes it may cause. The shift from working at home to going out to work dramatically changed society in less than a century, and the shift back may do the same.