
Organ Donors and Imprisoned People

photograph of jail cell with man's hands hanging past bars

Should people who are in prison – even on death row – be allowed to donate their organs? Sally Satel has recently made the case. After all, there is a “crying need” for organs, with people dying daily because they do not receive a transplant. But, as Satel points out, the federal prison system does not allow for posthumous donations and limits living donations to immediate family members.

Imprisoned people, whether they want to donate a kidney whilst alive or all their organs after an execution, are rarely able to do so.

There seem to be a couple of practical justifications for this. For one, it might interfere with the date of execution; secondly, the prison system might have to bear some of this cost. I want to address these two issues before moving on to some of the other ethical issues involved.

It’s important to see that the actual date of execution has no ethical significance – it is not a justice-driven consideration. If it turns out that an execution is delayed two weeks to enable a kidney transplant, so what? Executions are delayed by stays all the time, and if there is some good to come out of changing the date then keeping it fixed doesn’t seem particularly important.

Secondly, there may well be costs to the prison system in, say, medical care for a patient who has donated a kidney (or for the removal of organs post-execution). But the prison system is part of the state. Given there is a nationwide shortage of organs, we might expect the state to play a role in addressing this, and if it has to bear some cost, why should it matter that the prison system – not the health system – must pay? After all, the criminal justice system is meant to help broader society. (That is not to mention that there might be other ways of funding these transplants that don’t increase costs for the prison system.)

There are further explanations for why states do not permit donations. Christian Longo – who sits on death row in Oregon for murdering his wife and children – asked to posthumously donate his organs and was told that the drugs used in executions destroy the organs. But Longo points out that other states use drugs that do not cause such destruction. Still, the choice of drugs used in executions raises an ethical concern of its own: it is not clear how painful these drugs are, and some executions appear to be incredibly distressing.

Fiddling around with these drug cocktails in order to ensure the viability of organs may introduce major risks to the condemned.

Longo asked to donate his organs; so too did Shannon Ross, who is serving a long prison sentence. The fact that people are requesting to donate suggests there is more than mere consent here: there is an eagerness to donate. But this eagerness might hide some deeper worries, and to see why we need to investigate why inmates wish to donate.

We might worry that Longo wants to get some “extra privileges” or to somehow improve his own situation. Perhaps an appeals or parole board would look more favorably upon somebody who has given up a kidney. But that doesn’t seem to be the case for Longo, who is resigned to death (though he has not yet been executed; Oregon has a moratorium in place). Yet others might volunteer to donate in the mistaken belief that this will help their case. This would make their expressed consent less voluntary than it seems, since they don’t fully understand the risks and benefits of what they are consenting to.

And this leads to what I think is the most difficult moral issue here: whether prisoners can autonomously consent. Longo points out that prisoners’ consent can sometimes be exploited: prisoners in the 1960s and 70s were paid to volunteer for “research into the effects of radiation on testicular cells.”

That, even if it is seemingly voluntary, is unacceptable – prisoners are in a vulnerable position and we shouldn’t exploit them for medical research.

Both for prisoners who will be released and for those on death row, I think we can find a useful parallel with cases of voluntary euthanasia. The key similarity is that the prisoner and the euthanasia patient are both in a desperate situation, and both are offered a chance that seems to improve their position.

David Velleman, for example, poses this challenge to defenders of voluntary euthanasia: perhaps even offering somebody the choice to die is coercive. To simplify a very complex argument, if someone thinks they might be a drain on their family, then offering them the chance to be euthanized might not actually help them do what they would autonomously choose. They want to carry on living, and they regret that this burdens their family. But once confronted with the option to die, they are called upon to provide a justification for continued existence and might, then, feel compelled to take an option they might otherwise not. And we can see how a prisoner on death row might similarly feel compelled to donate – lacking a suitable justification to refuse – once confronted by the choice.

In addition to these concerns about mistaken beliefs and the coerciveness of choice, there might be another deep temptation to donate. Longo notes that he has little opportunity to give back to society in any way – a society that he recognizes he has wronged and harmed. Giving away his organs seems to be a way of giving back. Donation, then, provides a way of atoning, if only to a limited extent.

The worry here is that the prospect of atonement is a bit like the worry of being a burden on your family.

When you’re given the option – donate your organs in the one case, end your life in another – this prospect burns too brightly.

It might be that the prospect of atonement blots out an individual’s proper concern with, say, their own future health (or, if they are on death row, with objections they might have to organ donation).

Yet I think that – powerful and troubling as this concern might be – this is only a worry. In offering his argument, Velleman notes that he isn’t opposed to a right to die, just that this is a (perhaps defeasible) argument against an institutional right to die. Likewise, the argument in our domain only goes so far. Many people have no objection to organ donation, so there is no such concern that they, if on death row, are making the wrong choice for themselves. Plenty of people who are under no pressure at all choose to donate a kidney – why can’t we allow prisoners to make that choice, too?

If we worry too much about the possibility of letting prisoners make a bad choice, we risk being paternalistic – and we take away from them the free choice to selflessly help others.

The Ethics of Manipulinks

image of computer screen covered in pop-up ads

Let’s say you go onto a website to find the perfect new item for your Dolly Parton-themed home office. A pop-up appears asking you to sign up for the website’s newsletter to get informed about all your decorating needs. You go to click out of the pop-up, only to find that the decline text reads “No, I hate good décor.”

What you’ve just encountered is called a manipulink, and it’s designed to drive engagement by making the user feel bad for doing certain actions. Manipulinks can undermine user trust and are often part of other dark patterns that try to trick users into doing something that they wouldn’t otherwise want to do.

While these practices can undermine user trust and hurt brand loyalty over time, the ethical problems of manipulinks go beyond making the user feel bad and hurting the company’s bottom line.

The core problem is that the user is being manipulated in a way that is morally suspect. But is all user manipulation bad? And what are the core ethical problems that manipulinks raise?

To answer these questions, I will draw on Marcia Baron’s view of manipulation, which lays out different kinds of manipulation and identifies when manipulation is morally problematic. Not all manipulation is bad, but when manipulation goes wrong, it can reflect “either a failure to view others as rational beings, or an impatience over the nuisance of having to treat them as rational – and as equals.”

On Baron’s view, there are roughly three types of manipulation.

Type 1 involves lying to or otherwise deceiving the person being manipulated. The manipulator will often try to hide the fact that they are lying. For example, a website might try to conceal the fact that, by purchasing an item and failing to remove a discount, the user is also signing up for a subscription service that will cost them more over time.

Type 2 manipulation tries to pressure the person being manipulated into doing what the manipulator wants, often transparently. This kind of manipulation could be achieved by providing an incentive that is hard to resist, threatening to do something like ending a friendship, inducing guilt trips or other emotional reactions, or wearing others down through complaining or other means.

Our initial example seems to be an instance of this kind, as the decline text is meant to make the user feel guilty or uncomfortable with clicking the link, even though that emotion isn’t warranted. If the same website or app were to have continual pop-ups that required the user to click out of them until they subscribed or paid money to the website, that could also count as a kind of pressuring or an attempt to wear the user down (I’m looking at you, Candy Crush).

Type 3 manipulation involves trying to get the person to reconceptualize something by emphasizing certain things and de-emphasizing others to serve the manipulator’s ends. This kind of manipulation wants the person being manipulated to see something in a different light.

For example, the manipulink text that reads “No, I hate good décor” tries to get the user to see their action of declining the newsletter as an action that declines good taste as well. Or, a website might mess with text size, so that the sale price is emphasized and the shipping cost is deemphasized to get the user to think about what a deal they are getting. As both examples show, the different types of manipulation can intersect with each other—the first a mix of Types 2 and 3, the second a mix of Types 1 and 3.

These different kinds of manipulation do not have to be intentional. Sometimes user manipulation may just be a product of bad design, perhaps because a design meant to accomplish one function had unintended consequences, or perhaps because someone configured a page incorrectly.

But often these strategies of manipulation occur across different aspects of a platform in a concerted effort to get users to do what the manipulator wants. In the worst cases, the users are being used.

In these worst-case scenarios, the problem seems to be exactly as Baron describes, as the users are not treated as rational beings with the ability to make informed choices but instead as fodder for increased metrics, whether that be increased sales, clicks, loyalty program signups, or otherwise. We can contrast this with a more ethical model that places the user’s needs and autonomy first and then constructs a platform that will best serve those needs. Instead of tricking or pressuring the user to increase brand metrics, designers will try to meet user needs first, which if done well, will naturally drive engagement.

What is interesting about this user-first approach is that it does not necessarily reduce to considerations of autonomy.

A user’s interests and needs can’t be collapsed into the ability to make any choices on the platform that they want without interference. Sometimes it might be good to manipulate the user for their own good.

For example, a website might prompt a user to think twice before posting something mean to prevent widespread bullying. Even though this pop-up inhibits the user’s initial choice and nudges them to do something different, it is intended to act in the best interest of both the user posting and the other users who might encounter that post. This tactic seems to fall into the third type of manipulation, or getting the person to reconceptualize, and it is a good example of manipulation that helps the user and appears to be morally good.

Of course, paternalism in the interest of the user can go too far in removing user choice, but limited manipulation that helps the user to make the decisions that they will ultimately be happy with seems to be a good thing. One way that companies can avoid problematic paternalism is by involving users at different stages of the design process to ensure that user needs are being met. What is important here is to treat users as co-deliberators in the process of developing platforms to best meet user needs, taking all users into account.

If the user finds that they are being carefully thought about and considered in a way that takes their interests into account, they will return that goodwill in kind. This is not just good business practice; it is good ethical practice.

The Ethics of AI Behavior Manipulation

photograph of server room

Recently, news came from California that police were playing loud, copyrighted music when responding to criminal activity. While police were investigating a stolen vehicle report, video was taken of them blasting Disney songs like those from the movie Toy Story. The reason the police were doing this was to make it easier to take down footage of their activities: if the footage has copyrighted music, the reasoning goes, then a streaming service like YouTube will flag it and remove it.

A case like this presents several ethical problems, but in particular it highlights an issue of how AI can change the way that people behave.

The police were taking advantage of what they knew about the algorithm to manipulate events in their favor. This raises obvious questions: Does the way AI affects our behavior present unique ethical concerns? Should we be worried about how our behavior is adapting to suit an algorithm? When is it wrong to use one’s understanding of an algorithm as leverage to one’s own benefit? And, if there are ethical concerns about algorithms having this effect on our behavior, should they be designed in ways that encourage us to act ethically?

It is already well-known that algorithms can affect your behavior by creating addictive impulses. Not long ago, I noted how the attention economy incentivizes companies to make their recommendation algorithms as addictive as possible, but there are other ways in which AI is altering our behavior. Plastic surgeons, for example, have noted a rise in what is being called “Snapchat dysmorphia” in patients who desperately want to look like their Snapchat filters. The rise of deepfakes is also encouraging manipulation and deception, making it more difficult to tell reality apart from fiction. Recently, philosophers John Symons and Ramón Alvarado have even argued that such technologies undermine our capacity as knowers and diminish our epistemic standing.

Algorithms can also manipulate people’s behavior by creating measurable proxies for otherwise immeasurable concepts. Once the proxy is known, people begin to strategically manipulate the algorithm to their advantage. It’s like knowing in advance what a test will include and then simply teaching to the test. YouTubers chase whatever feature, function, length, or title they believe the algorithm will pick up on, in the hope of turning their video into a viral hit. It’s been reported that music artists like Halsey are frustrated by record labels who want a “fake viral moment on TikTok” before they will release a song.

This is problematic not only because viral TikTok success may be a poor proxy for musical success, but also because the proxies in the video that the algorithm is looking for also may have nothing to do with musical success.

This looks like a clear example of someone adapting their behavior to suit an algorithm for bad reasons. On top of that, the lack of transparency creates a market for those who know more about the algorithm and can manipulate it to take advantage of those that do not.

Should greater attention be paid to how algorithms generated by AI affect the way we behave? Some may argue that these kinds of cases are nothing new. The rise of the internet and new technologies may have changed the means of promotion, but trying anything to drum up publicity is something artists and labels have always done. Arguments about airbrushing and body image also predate the debate about deepfakes. However, if there is one aspect of this issue that appears unique, it is the scale at which algorithms can operate – a scale which dramatically affects their ability to alter the behavior of great swaths of people. As philosopher Thomas Christiano notes (and many others have echoed), “the distinctive character of algorithmic communications is the sheer scale of the data.”

If this is true, and one of the most distinctive aspects of AI’s ability to change our behavior is the scale at which it is capable of operating, do we have an obligation to design them so as to make people act more ethically?

For example, in the book The Ethical Algorithm, the authors present the case of an app that gives directions. When an algorithm is choosing the directions to give you, it could try to ensure that your directions are the most efficient for you. However, by doing the same for everyone it could lead to a great deal of congestion on some roads while other roads are under-used, making for an inefficient use of infrastructure. Alternatively, the algorithm could be designed to coordinate traffic, making for a more efficient overall solution, but at the cost of potentially giving you personally less efficient directions. Should an app cater to your self-interest or the city’s overall best interests?
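The congestion dilemma the authors describe can be sketched with a toy “Pigou network,” a standard illustration of selfish versus coordinated routing (this is my hypothetical example, not one from the book): two roads run from A to B, a highway that always takes 60 minutes and a shortcut whose travel time grows with the fraction of drivers on it.

```python
def average_time(shortcut_fraction):
    """Average travel time (minutes) when `shortcut_fraction` of all
    drivers take the shortcut and the rest take the highway."""
    shortcut_time = 60 * shortcut_fraction  # congestible road: 60x minutes
    highway_time = 60                       # fixed 60 minutes regardless of load
    return (shortcut_fraction * shortcut_time
            + (1 - shortcut_fraction) * highway_time)

# Selfish equilibrium: the shortcut never looks worse than the highway,
# so every self-interested driver ends up on it -- and everyone spends 60 minutes.
selfish = average_time(1.0)

# Coordinated optimum: split traffic evenly and the average drops to 45,
# even though the highway half receives personally "worse" directions.
coordinated = average_time(0.5)
```

A coordinating app saves a quarter of everyone’s average travel time, but only by handing some users directions that are worse for them individually – exactly the tension between self-interest and the city’s overall interest.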

These issues have already led to real-world changes in behavior as people attempt to cheat the algorithm to their benefit. In 2015, there were reports of people filing false alerts about traffic accidents and traffic jams to the app Waze in order to deliberately re-route traffic elsewhere. Cases like this highlight the ethical issues involved. An algorithm can systematically change behavior, and, as with easing congestion, it can attempt to achieve better overall outcomes for a group without everyone having to deliberately coordinate. However, anyone who becomes aware of the system of rules and how they operate will have the opportunity to try to leverage those rules to their advantage, just like the YouTube algorithm expert who knows how to make your next video go viral.

This in turn raises issues about transparency and trust. The fact that algorithms are known to be capable of bias and discrimination weakens the trust people may have in an algorithm. To resolve this, the natural response is to make algorithms more transparent. If the algorithm is transparent, then everyone can understand how it works, what it is looking for, and why certain things get recommended. It also prevents those who would otherwise reverse engineer the algorithm from leveraging insider knowledge for their own benefit. However, as Andrew Burt of the Harvard Business Review notes, this introduces a paradox.

The more transparent you make the algorithm, the greater the chances that it can be manipulated and the larger the security risks that you incur.

This trade-off between security, accountability, and manipulation will only become more important as algorithms are used more widely and affect more of people’s behavior. If there is going to be public trust, some outline of an algorithm’s specific purposes and intentions – as they pertain to its potential large-scale effect on human behavior – should be a matter of record. Particularly when we look to cases like climate change or even the pandemic, we see the benefit of coordinated action, and there is clearly a growing need to address whether algorithms should be designed to support these collective efforts. There also needs to be greater focus on how proxies are selected when measuring something, and on whether those approximations continue to make sense once it is known that there are deliberate efforts to manipulate them and turn them to an individual’s advantage.

Unions and Worker Agency

photograph of workers standing together, arms crossed

The past few years have seen a resurgence of organized labor in the United States, with especially intense activity in just the past few months. This includes high profile union drives at Starbucks, Amazon, the media conglomerate Condé Nast, and even MIT.

Parallel to this resurgence is the so-called “Great Resignation.” As the frenetic early days of the pandemic receded into the distance, workers began quitting at elevated rates. According to the Pew Research Center, the three main reasons for quitting were low pay, a lack of opportunity for advancement, and feeling disrespected. Former U.S. Secretary of Labor Robert Reich even analogized it to a general strike, in which workers across multiple industries stop work simultaneously.

Undoubtedly, the core cause of both the Great Resignation and growing organized labor is the same – dissatisfaction with working conditions – but the two are also importantly different. The aim of quitting is to leave the workplace; the aim of unions and strikes is to change it. They do this by trying to shift the balance of power in the workplace and give more voice and agency to workers.

Workplaces are often highly hierarchical with orders and direction coming down from the top, controlling everything from mouse clicks to uniforms. This has even led some philosophers, like the noted political philosopher Elizabeth Anderson, to refer to workplaces as dictatorships. She contends that the workplace is a blind spot in the American love for democracy, with the American public confusing free markets with free workers, despite the often autocratic nature of the workplace. Managers may hold almost all the power in the workplace, even in cases where the actual working conditions themselves are good.

Advocates of greater workplace democracy emphasize “non-domination,” or that at the very least workers should be free from arbitrary exercises of managerial power in the workplace. While legal workplace regulations provide some checks on managerial power, the fact remains that not everything can or should be governmentally regulated. Here, worker organizations like unions can step in. This is especially important in cases where, for whatever reasons, workers cannot easily quit.

Conversations about unionization generally focus on wages and benefits. Unions themselves refer to the advantage of unionization as the “union difference,” and emphasize the increases in pay, healthcare, sick leave, and other benefits compared to non-unionized workplaces. But what causes this difference? Through allowing workers to bargain a contract with management, unions enable workers to be part of a typically management-side discussion about workplace priorities. Employer representatives and union representatives must sit at the same table and come to some kind of agreement about wages, benefits, and working conditions. That is, for good or for ill, unions at least partially democratize the workplace – although it is far from full workplace democracy, in which workers would democratically exercise managerial control.

Few would hold that, all things being equal, workers should not have more agency in the workplace. More likely, their concern is either that worker collectives like unions come at the cost of broader economic interests, or that unions specifically do not secure worker agency but in fact saddle workers with even more restrictions.

The overall economic effect of unions is contentious, but there is little evidence that they hobble otherwise productive industries. A 2019 survey of hundreds of studies on unionization found that while unionization did lower company profits, it did not negatively impact company productivity and decreased overall societal inequality.

More generally, two assumptions must be avoided. The first is that the interests of the workers are necessarily separate from the interests of the company. No doubt company interests do sometimes diverge from union interests, but at a minimum unionized workers still need the company to stay in business. This argument does not apply to public sector unions (government workers), but even there, unions can arguably lead to more invested workers and stronger recruitment.

The second assumption to avoid is that management interests are necessarily company interests. Just as workers may sometimes pursue their personal interests over broader company interest, so too can management. This concern is especially acute when investment groups, like hedge funds, buy a company. Their incentive is to turn a profit on their investment, whether that is best achieved by the long-term health of the company or by selling it for parts. Stock options were historically proposed as a strategy to tie the personal compensation of management to the broader performance of a company. This strategy is limited however, as what it does more precisely is tie management compensation to the value of stock, which can be manipulated in various ways, such as stock buybacks.

Beyond these economic considerations, a worker may also question whether their individual agency in the workplace is best represented by a union. Every organization is going to bring some strictures with it, and this can include union regulations and red tape. The core argument on behalf of unions as a tool for workplace agency is that due to asymmetries of power in the workplace, the best way for workers to have agency is collective agency. This is especially effective for goals that are shared widely among workers, such as better pay. Hypothetically, something like a fully democratic workplace (or having each individual worker well positioned to be part of company decision making) would be better for worker agency than unions. The question of whether these alternatives would work is more practical than ethical.

There can be other tensions between individual and collective agency. In America specifically, unions have been viewed as highly optional. The most potent union arrangement is a “closed shop,” in which a union and company agree to only hire union workers. Slightly less restrictive is a “union shop,” under which all new workers must join the union. Both are illegal in the United States under the 1947 Taft–Hartley Act, which restricted the power of unions in several ways. State-level “right to work” laws go even further, forbidding unions from negotiating contracts that automatically deduct union representation fees from employees. The argument is one of personal freedom – that if someone is not in the union they should not have to pay for it. The challenge is that the union still has to represent this individual, who benefits from the union they are not paying for. This invites broader questions about the value of individual freedoms, and how they must be calibrated with respect to the collective good.


The author is a member of Indiana Graduate Workers Coalition – United Electrical Workers, which is currently involved in a labor dispute at Indiana University Bloomington.

Incentivizing the Vaccine-Hesitant

photograph of covid vaccination ampoules

Since the beginning of the COVID-19 pandemic, vaccine hesitancy has remained a constant concern. Given expectations that a vaccine would be found, experts always anticipated the problem of convincing those who distrust vaccines to actually get inoculated. A great many articles from the major news outlets have aimed at addressing the problem, discussing vaccine hesitancy and, in particular, trying to determine the most promising strategy for changing minds. In The Atlantic, Olga Khazan surveys some of the methods that have been proposed by experts. Attempts to straightforwardly correct misinformation seem to have proven ineffective, as they can cause a backfire effect in which individuals cling to their pre-existing beliefs even more strongly. Others instead suggest that a dialectical approach might be more successful. In The Guardian, Will Hanmer-Lloyd argues that we should not blame or name-call vaccine-hesitant individuals or “post on social media about how ‘idiotic’ people who don’t take the vaccine are” because “it won’t help.” Similar to the “non-judgmental” approach that Hanmer-Lloyd recommends, Erica Weintraub Austin, Professor and Director of the Edward R. Murrow Center for Media & Health Promotion Research at Washington State University, and Porismita Borah, Associate Professor at Washington State University, propose in The Conversation talking with vaccine-hesitant people and avoiding “scare tactics.” Among the things that can help is providing “clear, consistent, relevant reasons” in favor of getting vaccinated while at the same time discussing what constitutes a trustworthy source of information in the first place.

In spite of all these good suggestions, to this day, Pew Research reports that only 60% of Americans would probably or definitely get a vaccine against COVID-19. Though confidence has been on the rise since September, this still leaves a concerning 40% unlikely to pursue vaccination. It is perhaps in light of these facts that a recent proposal is beginning to gain traction: incentivizing people by offering prizes. Ben Welsh of the LA Times reports that the rewards proposed include “Canary home security cameras, Google Nest entertainment systems, Aventon fixed-gear bicycles and gift cards for Airbnb and Lyft.”

But is it right to give out prizes to lure the initially unwilling to seek vaccination?

The answer depends on the moral system to which you subscribe. You might think that, given the seriousness of the current circumstances, it is especially crucial to get as many folks vaccinated as possible, and that the means of accomplishing this task are of secondary importance. This would be a consequentialist view, according to which the moral worth of an action depends on the outcomes it produces. One might feel the force of this line of argument even more when considering that the consequences of vaccine hesitancy carry dangers not only for the individuals refusing to get vaccinated but for the rest of us as well. Just recently, a Wisconsin pharmacist purposefully rendered unusable 57 vials of vaccine – enough to vaccinate up to 500 people – because of a belief they were unsafe. So, considering how significant the impact of vaccine distrust can be, it is understandable that one might employ even unusual methods – such as prizes – to convince those who remain reluctant to join the queue.

On the other hand, if you do not feel the force of this outcome-based argument, you might think that there is something to say about the idea that changing people’s behavior does not necessarily change people’s beliefs. In this sense, offering a prize might not do much to alleviate the distrust they feel towards vaccination or the government. Consider another example. Suppose you do not believe that exercising is good. Yet your best friend, who instead does believe in the positive aspects of exercising, convinces you to go running with her because the view from the hill where she runs is stunning. In that sense, you may eventually elect to go running, but you will not do it because you are now a believer in exercising. You will go running just so that you can admire the view from the hill, without having changed your beliefs about exercise.

What is the problem with not changing people’s beliefs? You might be tempted to think that there is no problem, if you believe that the end result is all that matters. But it is beliefs that drive our actions, and so as long as individuals still believe that vaccines are not to be trusted, giving out prizes will only be a marginal and temporary solution that fails to address the deeper, underlying issue. The worry is that someone who opts to get vaccinated upon receiving a gift card is not deciding to get vaccinated for the right kind of reason. This argument picks out a distinction famously known in philosophy between right and wrong kinds of reasons. The philosophical debate is complex but, in general, when it comes to believing something, only epistemic, evidence-based reasons count as the right kind. Should one, instead, come to act on the basis of reasons that have more to do with, say, wishes or desires, those would represent the wrong kinds of reasons.

So what is the solution here? Well, there is no solution, as is often the case when it comes to philosophical positions that are fundamentally at odds with one another. But here is the good news: looking at the ways in which real-life events connect with philosophical issues can help us figure out what we think. Examining issues in this way can prove useful in isolating the features that help us understand our own particular commitments and convictions. Thinking through these tensions for ourselves is what allows us to decide whether we think the proposal to encourage vaccination efforts by offering prizes is a legitimate one.

Corporate Responsibility and Human Rights: DNA Data Collection in Xinjiang

photograph of Uighur gathering

Since 2006, China has engaged in a large-scale campaign of collecting DNA samples, iris images, and blood types in the province of Xinjiang. In 2016, a program under the name “Physicals for All” was used to take samples from everyone between the ages of 12 and 65 in a region home to 11 million Uighurs. Since the beginning of the program, it has been unclear whether patients were at any point “informed of the authorities’ intention to collect, store, or use sensitive DNA data,” raising serious questions about consent and privacy. The authorities largely characterized the program as providing benefits for the relatively poor region, with a stated goal: “to improve the service delivery of health authorities, to screen and detect for major diseases, and to establish digital health records for all residents.” Coverage of the program was often accompanied by testimonies describing life-saving diagnoses it had made possible. Despite the program being officially voluntary, some participants described feeling pressured to undergo the medical checks, and The Guardian reported numerous stories in local newspapers that encouraged officials to convince people to participate.

Once a person decided to participate and medical information had been taken from them, the information was stored and linked to the individual’s national identification number. Questions about the coercive and secretive nature of the campaign naturally arise when a government collects a whole population’s biodata, including DNA, under the auspices of a free healthcare program. Moreover, this is a gross violation of human rights norms, which require the free and informed consent of patients prior to medical interventions. The case is especially troublesome as it pertains to the Uighurs, a Muslim minority that has faced pressure from China since the early 20th century, when they briefly declared independence. China is holding around a million Uighurs in “massive internment camps,” which it refers to as “re-education camps” (see Meredith McFadden’s “Uighur Re-education and Freedom of Conscience” for discussion). According to The New York Times, several human rights groups and Uighurs have pointed out that Chinese DNA collection may be used “to chase down any Uighurs who resist conforming to the campaign.”

To ensure the success of this campaign, police in Xinjiang bought DNA sequencers from the US company Thermo Fisher Scientific. When asked to respond to the apparent misuse of its DNA sequencers, the company said that it is not responsible for the ways the technology it produces is being used, and that it expects all its customers to act in accordance with appropriate regulation. Human Rights Watch has been vocal in demanding responsibility from Thermo Fisher Scientific, claiming that the company has a responsibility to avoid facilitating human rights violations, an obligation to investigate misuse of its products, and, potentially, a duty to suspend future sales.

Should transnational actors, especially those providing technology such as Thermo Fisher Scientific, have a moral responsibility to cease sale of their products if they are being used for “immoral” purposes? One could claim that a company that operates in a democratic country, and is therefore required to follow certain ethical guidelines, should act to enforce those same guidelines among its clientele. Otherwise it is not actually abiding by our agreed-upon rules. Others may ground the company’s moral responsibility in the obligations that companies have to society. These principles are often outlined in companies’ handbooks and used to hold them accountable, and they often stem from convictions about intrinsic moral worth or the duty to do no harm.

On the other hand, others may claim that a company is not responsible for the use to which others put its goods. On this view, a company’s primary duty is to its shareholders; it is a profit-driven actor with an obligation to pursue what is most useful to itself, not to the broader community. Such companies operate in a free-market economy that, the argument goes, ought not be constrained – and monitoring every customer may not even be feasible. As Thermo Fisher Scientific notes, “given the global nature of [their] operations, it is not possible for [them] to monitor the use or application of all products [they’ve] manufactured.” It may be that a company should only be expected to abide by the rules of the country it operates in, with the expectation that all customers “act in accordance with appropriate regulations and industry-standard best practices.”