
Can Machines Be Morally Responsible?


As artificial intelligence becomes more advanced, we find ourselves relying more and more on the decision-making of neural nets and other complex AI systems. If the machine can think and decide in ways that cannot be easily traced back to the decision of one or multiple programmers, who do we hold responsible if, for instance, the AI decision-making reflects the biases and prejudices that we have as human beings? What if someone is hurt by the machine’s discrimination?
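To make the worry concrete, here is a minimal sketch (in Python, with entirely made-up data and a toy model, not any real hiring system) of how a model trained on biased historical decisions can reproduce that bias even though no programmer ever wrote a discriminatory rule:

```python
# A toy sketch: entirely synthetic "historical hiring" data and a hand-rolled
# logistic regression. Nothing here models a real system; it only illustrates
# how bias in training data can surface in a model's decisions.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

skill = rng.uniform(0, 1, n)         # a legitimate qualification signal
group = rng.integers(0, 2, n)        # a protected attribute (0 or 1)

# Biased historical labels: equally skilled members of group 1 were hired less often.
p_hire = 1 / (1 + np.exp(-(4 * skill - 2 - 1.5 * group)))
hired = (rng.random(n) < p_hire).astype(float)

# Fit a plain logistic regression by gradient descent on [skill, group, intercept].
X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - hired) / n

print("learned weights [skill, group, intercept]:", np.round(w, 2))
# The weight on `group` comes out negative: the model has absorbed the historical
# discrimination, even though no one explicitly programmed it to discriminate.
```

Here the discrimination is an artifact of the training data rather than of any single traceable design decision, which is precisely what makes assigning responsibility difficult.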

To answer this question, we need to know what makes someone or something responsible. The machine certainly causes the processing it performs and the decisions it makes, but is the AI system a morally responsible agent?

Could artificial intelligence have the basic abilities required to be an appropriate target of blame?

Some philosophers think that the ability that is core to moral responsibility is control or choice. While sometimes this ability is spelled out in terms of the freedom to do otherwise, let’s set aside questions of whether the AI system is determined or undetermined. There are some AI systems that do seem to be determined by fixed laws of nature, but there are others that use quantum computing and are indeterminate, i.e., they won’t produce the same answers even if given the same inputs under the same conditions. Whether you think that determinism or indeterminism is required for responsibility, there will be at least some AI systems that will fit that requirement. Assume for what follows that the AI system in question is determined or undetermined, according to your philosophical preferences.

Can some AI systems exercise control or engage in decision-making? Even though AI decision-making processes do not, at present, directly mirror the structure of decision-making in human brains, AI systems are still able to take inputs and produce a judgment based on those inputs. Furthermore, some AI decision-making algorithms outcompete human thought on the same problems. It seems that if we were able to get a complex enough artificial intelligence that could make its own determinations that did not reduce to its initial human-made inputs and parameters, we might have a plausible autonomous agent who is exercising control in decision-making.

The other primary capacity that philosophers take to be required for responsibility is the ability to recognize reasons. If someone couldn’t understand what moral principles required or the reasons they expressed, then it would be unfair to hold them responsible. It seems that sophisticated AI can at least assign weights to different reasons and understand the relations between them (including whether certain reasons override others). In addition, AI that are trained on images of a certain medical condition can come to recognize the common features that would identify someone as having that condition. So, AI can come to identify reasons that were not explicitly plugged into them in the first place.
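As a purely illustrative toy (not a description of how any actual AI system is built), one could imagine reasons represented as weighted considerations, with some reasons marked as overriding others:

```python
# A purely illustrative toy: reasons as weighted considerations, some of which
# can override the ordinary balance of weights. Not a model of any real system.
from dataclasses import dataclass

@dataclass
class Reason:
    description: str
    weight: float             # counts for (+) or against (-) the action
    overriding: bool = False  # if True, it silences the ordinary weighing

def decide(reasons):
    """Return True if the balance of reasons favors acting."""
    overriders = [r for r in reasons if r.overriding]
    if overriders:
        # An overriding reason settles the question regardless of ordinary weights.
        return sum(r.weight for r in overriders) > 0
    return sum(r.weight for r in reasons) > 0

reasons = [
    Reason("the treatment relieves pain", +2.0),
    Reason("the treatment is expensive", -1.0),
    Reason("the patient has refused consent", -5.0, overriding=True),
]
print(decide(reasons))  # False: the consent-based reason overrides the cost-benefit tally
```

Whether this kind of thin, formal structure amounts to genuinely recognizing reasons is, of course, exactly the philosophical question at issue.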

What about the recognition of moral reasons? Shouldn’t AI need to have a gut feeling or emotional reaction to get the right moral answer?

While some philosophers think that moral laws are given by reason alone, others think that feelings like empathy or compassion are necessary to be moral agents. Some worry that without the right affective states, the agent will wind up being a sociopath or psychopath, and these conditions seem to inhibit responsibility. Others think that even psychopaths can be responsible, so long as they can understand moral claims. At the moment, it seems that AI cannot have the same emotional reactions that we do, though there is work to develop AI that can.

Do AI need to be conscious to be responsible? Insofar as we allow that humans can recognize reasons unconsciously and that they can be held responsible for those judgments, it doesn’t seem that consciousness is required for reasons-recognition. For example, I may not have the conscious judgment that a member of a given race is less hard-working, but that implicit bias may still affect my hiring practices. If we think it’s appropriate to hold me responsible for that bias, then it seems that consciousness isn’t required for responsibility. It is a standing question as to whether some AI might develop consciousness, but either way, it seems plausible that an AI system could be responsible at least with regard to the capacity of reasons-recognition. Consciousness may be required for choice on some models, though other philosophers allow that we can be responsible for automatic, unconscious, yet intentional actions.

What seems true is that there may, at some point, be an artificial intelligence that meets all of the criteria for moral responsibility, at least as far as we can practically tell. When that happens, it appears that we should hold the artificial intelligence system morally responsible, so long as there is no good reason to discount responsibility — the mere fact that the putative moral agent is artificial wouldn’t undermine responsibility. Instead, a good reason might look like evidence that the AI can’t actually understand what morality requires of it, or that it can’t make choices in the way that responsibility requires. Of course, we would need to figure out what it looks like to hold an AI system responsible.

Could we punish the AI? Would it understand blame and feel guilt? What about praise or rewards? These are difficult questions that will depend on what capacities the AI has.

Until that point, it’s hard to know who to blame and how much to blame them. What do we do if an AI that doesn’t meet the criteria for responsibility has a pattern of discriminatory decision-making? Return to our initial case. Assume that the AI’s decision-making can’t be reduced to the parameters set by its multiple creators, who themselves appear without fault. Additionally, the humans who have relied on the AI have affirmed the AI’s judgments without recognizing the patterns of discrimination. Because of these AI-assisted decisions, several people have been harmed. Who do we hold responsible?

One option would be to attach a liability fund to the AI, such that in the event of discrimination, those affected can be compensated. There is some question here as to who would pay into the fund, whether that be the creators or the users or both. Another option would be to place the responsibility on the person relying on the AI to aid in their decision-making. The idea here would be that the buck stops with the human decision-maker, who needs to be aware of possible biases and check them. A final option would be to place the responsibility on the AI creators who, perhaps without fault, created the discriminatory AI but took on the burden of that potential consequence by deciding to enter the AI business in the first place. They might be required to pay a fine or take measures to retrain the AI to avoid such discrimination.

The right answer, for now, is probably some combination of the three that can recognize the shared decision-making happening between multiple agents and machines. Even if AI systems become responsible agents someday, shared responsibility will likely remain.

Liability and Luck


In the unlikely event that you have not yet experienced your daily dose of despair concerning the fate of humanity, then I’d highly encourage you to read Elizabeth Weil’s ProPublica piece “They Know How to Prevent Megafires. Why Won’t Anybody Listen?” The article makes two basic points. 1) Extensive controlled burns would be an effective precautionary strategy that would prevent recurring megafires. 2) There are political and financial incentives which trap us in reactionary rather than precautionary fire strategies.

There are clearly lots of perverse incentives at play, but one part of the article was especially interesting:

“How did we get here? Culture, greed, liability laws and good intentions gone awry. There are just so many reasons not to pick up the drip torch and start a prescribed burn even though it’s the safe, smart thing to do. . . . Burn bosses in California can more easily be held liable than their peers in some other states if the wind comes up and their burn goes awry. At the same time, California burn bosses typically suffer no consequences for deciding not to light. No promotion will be missed, no red flags rise. ‘There’s always extra political risk to a fire going bad,’ Beasley said. ‘So whenever anything comes up, people say, OK, that’s it. We’re gonna put all the fires out.'”

It is risky to engage in controlled burns. Things can go wrong, and when they do go wrong it could be pretty bad: someone could lose their home, maybe even their life. Of course, it is far riskier, in one sense, not to engage in controlled burns. So why, then, are our incentives set up the way they are?

At least two different explanations are likely at play.

Explanation 1: Action vs Inaction. First, in general, we are more responsible for actions than for inactions. The priest who ‘passed by on the other side’ of a man left for dead did something terrible, but did not do something as terrible as the thieves who beat the man up in the first place. As a society we jail murderers; we don’t jail the charitably apathetic, even if the apathetic are failing to save lives they could save.

And indeed, this point does have an appropriate corollary when talking about fire suppression. I am not responsible for houses burning in California — this is true even though last spring I could have bought a plane ticket, flown to California, and started burning stuff. Had I done so, things would likely have gone terribly wrong, and in that case I really would have been responsible for whatever property I had destroyed. This seems appropriate; it could be catastrophic if my incentives were structured such that I was punished for not starting vigilante fires.

Elizabeth Anscombe gives us a similar example. If the on-duty pilot and I are both asleep in our cabins, then we are doing the very same thing when our ship hits an iceberg. Yet it was the pilot, and not I, who sank the ship. Indeed, had I, a random passenger, tried to navigate the ship, we would absolutely have held me responsible when something went wrong.

So, what is the principle here? Is it that amateurs are specially responsible for actions? No, because we can also identify cases where we indemnify amateurs for their actions. Perhaps the best example here is Good Samaritan laws. These laws protect untrained people, like me, if we make a mistake when trying to render emergency first aid.

What is really going on is that we don’t want passengers trying to navigate ships. Nor do we want aspiring philosophers attempting unsupervised controlled burns in California. But we do want pilots to navigate ships, and we do want burn bosses attempting controlled burns. As such, we should construct incentives which encourage that, and protect people from culpability even if things occasionally go wrong.

Explanation 2: Causal Links. Second, we trace responsibility through causality. Because you caused a house to burn down, you are, at least partially, responsible for that damage. The problem is, it is almost always easier to trace causality to actions than to inactions. We can identify exactly which active burning causes damage. We can easily say, “the fire you started on February 14th destroyed these two houses.” It’s much harder to say “the not burning that you didn’t do on February 14th was what allowed the fire to get out of hand.”

And indeed, I think we probably can’t hold people responsible for any particular failure to burn. We can hold people responsible for how much controlled burning they do in general, but we can’t trace causal paths to hold them responsible for any particular bad result of inaction. Indeed, it would be unfair to do so; no burn boss can foresee when a particular failure to burn will destroy a house (in the way they can sometimes foresee when burning in a particular area might destroy a house). This creates a problem, though. Because we can’t hold people fully responsible for their inaction, we end up holding people disproportionately responsible for actions, thus perversely incentivizing inaction.

This also parallels our interpersonal lives. For example, we generally want people willing to think for themselves. But we are also far more likely to condemn people for reaching terrible views they came up with themselves than for failing to recognize what is wrong with the conventional view. This can create perverse incentives, however. It might really be true that we are justly responsible for coming to terrible conclusions, but because it is so hard to hold people responsible for the majority view it might be important to forgive even egregious mistakes to keep incentives favoring original thought.

So here is the general point. Assessing responsibility is far more complicated than just establishing whether someone played a causal role. Sometimes holding people responsible for things they really should not have done can perversely disincentivize people from taking risks we want them to be willing to take. The fires in California give one clear example of this, but the point generalizes to our lives as well.

Against Abstinence-Based COVID-19 Policies


There are at least two things that are true about abstinence from sexual activity:

  1. If one wishes to avoid pregnancy and STD-transmission, abstinence is the most effective choice, and
  2. Abstinence is insufficient as a policy category if policy-makers wish to effectively promote pregnancy-avoidance and to prevent STD-transmission within a population.

I take it that (1) is straightforward: if someone wishes to avoid the risks of an activity (including sex), then abstention from that activity is the best way to do so. By (2), I simply mean that prescribing abstinence from sexual activity (and championing its effectiveness) is often not enough to convince people to actually choose to avoid sex. For example, the data on the relative effectiveness of various sex-education programs is consistent and clear: those programs that prioritize (primarily or exclusively) abstinence-only lessons about sex are regularly the least effective programs for actually reducing teen pregnancies and the like. Instead, pragmatic approaches to sex education that comprehensively discuss abstinence alongside topics like contraceptive-use are demonstrably more effective at limiting many of the most negative potential outcomes of sexual activity. Of course, some might argue in response that, even if they are less effective, abstinence-only programs are nevertheless preferable on moral grounds, given that they emphasize moral decision-making for their students.

It is an open question whether or not policy-makers should try to impose their own moral beliefs onto the people affected by their policies, just as it is debatable whether good policy-making could somehow produce good people, but the importance of evidence-based policy-making is inarguable. And the evidence strongly suggests that abstinence-based sex education does not accomplish the goals typically laid out by sex education programs. Regarding such courses, Laura Lindberg — co-author of a 2017 report in the Journal of Adolescent Health on the impact of “Abstinence-Only-Until-Marriage” (AOUM) sex ed programs in the US — argues that such an approach is “not just unrealistic…[but]…violates medical ethics and harms young people.”

In this article, I’m interested less in questions of sex education than I am in questions of responsibility for the outcomes of ineffective public policies. I think it’s uncontroversial to say that, in many cases, the people most responsible for creating a pregnancy that results from sexual activity are the sexual partners themselves. However, it also seems right to think that authority figures who knowingly enact policies that are highly unlikely to effectively prevent some undesirable outcome carry at least some responsibility for that resulting outcome (if it’s true that the outcome would probably have been prevented had the officials implemented a different policy). I take it that this concern is ultimately what fuels both Lindberg’s criticism of AOUM programs and the widespread support for comprehensive sex-education methods.

Consider now the contemporary situation facing colleges and universities in the United States: despite the persistent spread of the coronavirus pandemic over the previous several months, many institutions of higher education have elected to resume face-to-face instruction in at least some capacity this fall. Across the country, university administrators have developed intricate policies to ensure the safety and security of their campus communities that could, in theory, prevent a need to eventually shift entirely to remote instructional methods. From mask mandates to on-campus testing and temperature checks to limited class sizes to hybrid course delivery models and more, colleges have demonstrated no shortage of creativity in crafting policies to preserve some semblance of normalcy this semester.

But these policies are failing — and we should not be surprised that this is so.

After only a week or two of courses resuming, many campuses (and the communities surrounding them) are already seeing spikes of COVID-19 cases, and several universities have already been forced to alter their previous operating plans in response. After one week of classes, the University of North Carolina at Chapel Hill abruptly decided to shift to fully-remote instruction for the remainder of the semester, a decision mirrored by Michigan State University, and (at least temporarily, as of this writing) Notre Dame and Temple University. Others, like the University of Iowa, the University of South Carolina, and the Ohio State University, have simply pushed ahead with their initial plans, regardless of the rise in positive cases, but the prospects of sustaining such an approach look bleak. Indeed, as the semester continues to progress, it seems clear that many more colleges will be disrupted by a mid-semester shift, regardless of the policies that they had previously developed to prevent one.

This is, of course, unsurprising, given the realities of life on a college campus. Dormitories, dining halls, and Greek life houses are designed to encourage social gatherings and interactions of precisely the sort that coronavirus-prevention recommendations forbid. Furthermore, the expectation of many college students (fueled explicitly by official university marketing techniques) is that such social functions are a key element of the “college experience.” (And, of course, this is aggravated all the more by the general fearlessness commonly evidenced by 18-25 year-olds, which provokes them into generally riskier behavior than other age groups.) Regardless of how many signs are put up in classrooms reminding people to wear masks and no matter the number of patronizing emails sent to chastise students (or local businesses) into “acting responsibly,” it is, at best, naive of university administrators to expect their student bodies to suddenly enter a pandemic-preventing mindset (at least at the compliance rates that would be necessary to actually protect the community as a whole).

Basically, on the whole, colleges have pursued COVID-19-prevention policies based on the irrational hope that their students would exercise precisely the sort of abstinence that college administrators know better than to expect (and, for years leading up to this spring, actively discouraged). As with abstinence-based sex education, two things are true here also:

  1. If one wishes to avoid spreading the coronavirus, constantly wearing masks, washing hands, and avoiding social gatherings are crucial behavioral choices, and
  2. Recommending (and even requiring upon pain of punishment) the behaviors described in (1) is insufficient as a policy category if university administrators wish to effectively prevent the spread of the coronavirus on their campuses.

We are already seeing the unfortunate truth of (2) grow more salient by the day.

And, as with sex education, on one level we can rightfully blame college students (and their choices to attend parties or to not wear masks) for these outbreaks on college campuses. But the administrators and other officials who insisted on opening those campuses in the first place cannot sensibly avoid responsibility for those choices or their consequences either. Just as with abstinence-only sex education programs, it seems right to hold officials responsible for policies whose successful implementation is wildly unlikely, no matter how effective those fanciful policies might be if people were to just follow the rules.

This seems especially true in this case given the (in one sense) higher stakes of the COVID-19 pandemic. Because the coronavirus is transmitted far more quickly and easily than STDs or pregnancies, it is even more crucial to create prevention strategies that are more likely to be successful; in a related way, it also makes tracking responsibility for the spread of the virus far more complicated. At least with a pregnancy, one can point to the people who chose to have sex as shouldering much of the responsibility for the pregnancy itself; with COVID-19, a particular college student could follow every university policy perfectly and, nevertheless, contract the virus by simply coming into contact with a classmate who has not. In such a case, it seems like the responsible student can rightfully blame both her irresponsible classmate and the institution which created the conditions of her exposure by insisting that their campus open for business while knowingly opting for unrealistic policies.

Put differently: imagine what sex education might look like if you could “catch” a pregnancy literally just by walking too close to other people. In such a world, simply preaching “abstinence!” would be even less defensible than it already is; nevertheless, that approach is not far from the current state of many COVID-19-prevention policies on college campuses. The only thing this kind of rhetoric ultimately protects is the institution’s legal liability (and even that is up for debate).

In early July, the University of Southern California announced that it would offer no in-person classes for its fall semester, electing instead for entirely remote course-delivery options. At the time, some responded to this announcement with ridicule, suggesting that it was a costly overreaction. Nevertheless, USC’s choice to ensure that its students, staff, and faculty be protected by barriers of distance has meant not only that its semester has been able to proceed as planned, but that the university has not been linked to the same level of case spikes as other institutions (though, even with such a move, outbreaks are bubbling).

As with so much about the novel coronavirus, it remains to be seen what the full extent of its spread will look like. But one thing is clear already: treating irresponsible college students as scapegoats for poorly-conceived policies that justified the risky move of opening (or mostly-opening) campuses is transparently wrong. It oversimplifies the complicated relationship of policy-makers and constituents, even as it misrepresents the nature of moral responsibility for public action, particularly on the part of those in charge. The adults choosing to attend college parties are indeed to blame for doing so, but those parties wouldn’t be happening at all if other adults had made different choices about what this semester was going to look like in the first place.

In short, if college administrators really expect abstinence to be an effective tool to combat COVID-19, then they should be the ones to use it by canceling events, closing campuses, and wrapping up this semester (and, potentially, the next) online.

Universities and the Burdens of Risk


To bring students back to the university is to knowingly expose them to the risk of a dangerous disease when such exposure is avoidable. This is morally objectionable on a variety of fronts. The risk of contracting COVID-19 and the seriousness of its potential health outcomes make it very different from the realities we typically accept by engaging in other everyday activities. It is clear that COVID-19 poses a higher risk of death than other coronaviruses. A variety of underlying conditions can lead to deadly outcomes, and we do not have a comprehensive understanding of the conditions that may lead to the virus’ lethality. Even when the virus does not cause death, the respiratory impact of contracting it has put a significant burden on patients’ long-term health, and can lead to the need for hospitalization and intubation for breathing support. The long-term effects of the illness even for those lucky enough to avoid these outcomes are still unknown, but appear to persist past initial recovery and seem to include lung damage and potential stroke and brain complications.

One of the most concerning things, given these serious outcomes of the virus, is how contagious it is. Because of this, there have been efforts to distance members of societies affected by the virus across the globe (with the US notoriously falling behind).

Despite the serious risks involved in contracting the virus, multiple segments of society need to continue to interact with one another and the public in order to keep society safe and healthy. There are, of course, risks for pharmacists, doctors, grocery store workers, and the essential workers who produce and distribute the necessary products that keep a society running.

When there are necessary risks, society has a responsibility to support the people who expose themselves to those risks on behalf of the members of society who require their services to continue in health and safety. When someone takes on a burden in order to keep you safe and healthy, we typically think either that a moral obligation is formed, or, more minimally, that it would be appropriate to be grateful, or, as a compromise, that you are obligated not to put those people in a scenario where they must accrue further risks in order to maintain your safety.

We can consider non-pandemic scenarios that support these intuitions. An extreme case would be if you chose to skydive (knowingly taking on a risk) with a tandem guide. The guide is jumping with you, exposing themselves to risk to keep you safe. As a beginner, you rely on the tandem partner for your safety. It would be morally wrong of you to act so as to put the tandem partner at further risk.

In circumstances where others are placing themselves at risk for your benefit and you knowingly accept that relationship, it is wrong to exacerbate that exchange of burdens (their risk) and benefits (the service they are offering at their sacrifice).

This minimization of risk exposure supports the narrowing of business operations and activities in our society until we can mitigate the risk to one another that gathering together would pose. By opening your doors for business, you are posing a risk to your employees, and by frequenting the business, you are posing a risk to the fellow patrons and employees of the business. With risks like those associated with COVID-19, this threat is significant enough that such behavior, when it is avoidable and unnecessary, is morally problematic.

When a group of people comes together for activities like taking a cruise, or attending a university, the moral assessment of risk is different than for these essential operations. Universities expose students, faculty, and staff to a high risk of contracting the disease because, like cruise ships, the number of personnel required to keep food, board, courses, and administration functioning is immense, and it all occurs in relatively small areas, every day. These are specialized activities that are voluntary, and so they significantly differ from the necessary operations that provide food and services to a society to keep people safe and healthy.

Universities have acknowledged the liability issues in the Fall, perhaps most obviously by seeking legal shields or waivers from students returning to campus. However, at the end of May, according to a survey conducted by the Chronicle of Higher Education, over 2/3 of universities planned to bring students back to campus for the upcoming term. This strategy attempts to redirect the institutional burdens of risk assessment and decision-making back onto individuals.

This parallels the situation in which individual businesses are placed when there is a lack of governmental or higher-level legislation for managing risk. Without a policy dictating when it is permissible for non-essential services to enter back into the risk-exchange of societal functioning, individual businesses are left to weigh the risk to their employees, their impact on societal spread, and so on. Government oversight makes the decision on the basis of the overall risk that society faces, which is the level at which the risk of disease exists. When individuals must determine for themselves what risks they are willing to bear against other priorities, their choices become coerced: the cost of closing due to a lack of government assistance, the pressure to open when other businesses are doing so rather than lose ground in the market, and so on. By changing the systemic problem of the risk to society into individual problems of how to navigate that risk based on individual priorities, privileges, and disadvantages, we face structural injustice.

Universities face this very problem in determining the just distribution of systemic risk. Should they pursue universal policies to protect everyone regardless of privileges, priorities, or disadvantages, or should they leave individuals to navigate these decisions themselves? Giving individuals the opportunity to choose a remote-learning track does not mitigate the moral burden of universities offering face-to-face (on-campus) learning. In offering this choice, universities have simply transformed an institutional obstacle into a problem for individuals to navigate on their own. But this choice offered to individuals cannot be read as an assumption of risk; the options are not commensurable. University systems were designed for those able and willing to opt for on-campus, f2f learning, which the university signals to be optimal.

The instructors who have opted for f2f teaching have created a difference in course delivery that burdens, in their selection of courses, those students who would ideally choose not to return to campus. The disparity in support services that are best delivered on campus would also create distance between the students who return and those who cannot return, or who would choose to avoid the risk of returning to campuses that admit the risk to which they are exposing everyone present.

A statement from the American Anthropological Association emphasized how default f2f policies undermine equitable access to education for minority and underserved populations:

“Given the disproportionate representation of COVID-19 infection and death in Black and brown communities, university policies and practices that emphasize in-person work and teaching run the risk of compounding the impact of racial inequity. These policies also risk endangering already-marginalized members of university communities, including staff and contingent faculty, who are less likely to have the option to take time away from work. As a matter of equity and ethics, while we acknowledge the financial challenges colleges and universities face because of the pandemic, we encourage university administrators to keep the health and safety of marginalized people at the forefront of their decisions.”

Finally, there is the question of liability on the part of universities for allowing students back on their campuses. As noted above, some universities are seeking “liability shields” for the health risks facing their students, staff, and instructors this Fall. Despite taking precautions against the contagious virus, there are no foolproof measures that can be taken against contracting this illness, especially at a campus with students living, eating, and studying in such close proximity. It is difficult to imagine such a group acting in ways outside of the classroom that would significantly reduce the spread of the virus when research has shown that, among the young, this disease has not been taken very seriously since its very onset.

But these failings do not absolve the universities of liability for what happens on their campus. What students do in their lives has a different legal status than what they do in sanctioned activities and conditions condoned by an institution. Further, by acknowledging the likelihood of risky behavior on the part of students, a university also acknowledges that it is putting staff and instructors at greater risk than if they did not return to campus.

There is a legal and moral responsibility to provide a working environment that is safe to employees. The risk of contracting this virus is significant, due to its rate of contagion and health outcomes. With this risk of contracting a serious illness, and the coercive environment created by the justice issues raised above, universities do not satisfy this condition of safe work environments by having students, staff, and instructors return to campus. At a time when we have a moral obligation to behave in ways to mitigate the spread of the virus, or at the very least not exacerbate its spread, 2/3 of universities are taking steps to actively put students, staff, and instructors in positions that make them more vulnerable to contracting and spreading this illness.

Corporate Responsibility and Human Rights: DNA Data Collection in Xinjiang


Since 2006 China has engaged in a large-scale campaign of collecting DNA samples, iris images, and blood types in the province of Xinjiang. In 2016, a program under the name “Physicals for All” was used to take samples from everyone between the ages of 12 and 65 in a region home to 11 million Uighurs. Since the beginning of the program, it has been unclear whether the patients were at any point “informed of the authorities’ intention to collect, store, or use sensitive DNA data,” raising serious questions about the consent and privacy of the patients. The authorities largely characterized the program as providing benefits for the relatively economically poor region, with a stated goal: “to improve the service delivery of health authorities, to screen and detect for major diseases, and to establish digital health records for all residents.” Often accompanying program coverage were testimonies describing life-saving diagnostics due to this program. Despite being officially voluntary, some program participants described feeling pressured to undergo the medical checks. The Guardian reported numerous stories in local newspapers that encouraged officials to convince people to participate.

Once a person decided to participate and medical information had been taken from them, the information was stored and linked to the individual’s national identification number. Certainly, questions concerning the coercive and secretive nature of the campaign arise as the government collects a whole population’s biodata, including DNA, under the auspices of a free healthcare program. In addition, this is a gross violation of human rights, which require the free and informed consent of patients prior to medical interventions. The case is especially troublesome as it pertains to the Uighurs, a Muslim minority that has been facing pressure from China since the early 20th century, when they briefly declared independence. China is holding around a million Uighurs in “massive internment camps,” which China refers to as “re-education camps” (see Meredith McFadden’s “Uighur Re-education and Freedom of Conscience” for discussion). According to The New York Times, several human rights groups and Uighurs pointed to the fact that Chinese DNA collection may be used “to chase down any Uighurs who resist conforming to the campaign.”

To ensure the success of this campaign, police in Xinjiang bought DNA sequencers from the US company Thermo Fisher Scientific. When asked to respond to the apparent misuse of their DNA sequencers, the company said that they are not responsible for the ways the technology they produce is being used, and that they expect all their customers to act in accordance with appropriate regulation. Human Rights Watch has been vocal in demanding responsibility from Thermo Fisher Scientific, claiming that the company has a responsibility to avoid facilitating human rights violations, and that it has an obligation to investigate misuse of its products and potentially suspend future sales.

Should transnational actors, especially those providing technology, such as Thermo Fisher Scientific, have a moral responsibility to cease sale of their product if it is being used for “immoral” purposes? One could claim that a company that operates in a democratic country, and is therefore required to follow certain ethical guidelines, should act to enforce those same guidelines among its clientele. Otherwise it is not actually abiding by our agreed-upon rules. Other positions may ground the company’s moral responsibility in the obligations that companies have to society. These principles are often outlined in company handbooks and used to hold companies accountable, and they often stem from convictions about intrinsic moral worth or the duty to do no harm.

On the other hand, others may claim that a company is not responsible for the use to which others put its goods. Such a company’s primary duty is to its shareholders; it is a profit-driven actor with an obligation to pursue what is most useful to itself, not to the broader community, and it operates in a free-market economy that ought not to be constrained, if only as a matter of feasibility. As Thermo Fisher Scientific notes, “given the global nature of [their] operations, it is not possible for [them] to monitor the use or application of all products [they’ve] manufactured.” It may be that a company should only be expected to abide by the rules of the country it operates in, with the expectation that all customers “act in accordance with appropriate regulations and industry-standard best practices.”

Establishing Liability in Artificial Intelligence

Entrepreneur Li Kin-kan is suing over “investment losses triggered by autonomous machines.” Raffaele Costa convinced Li to let K1, a machine learning algorithm, manage $2.5 billion—$250 million of his own cash and the rest leverage from Citigroup Inc. The AI lost a significant amount of money in a decision that, Li claims, it would not have made had it been as sophisticated as he was led to believe. Because of K1’s autonomous decision-making structure, locating appropriate liability is a provocative question: is the money-losing decision the fault of K1, its designers, Li, or, as Li alleges, the salesman who made claims about K1’s potential?

Developed by Austria-based AI company 42.cx, the supercomputer named K1 would “comb through online sources like real-time news and social media to gauge investor sentiment and make predictions on U.S. stock futures. It would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned.”
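Public details about K1’s internals are scarce, but a heavily simplified, hypothetical sketch of the kind of pipeline described in that quote might look like the following (the function names, word lists, and thresholds are invented for illustration; none of them come from 42.cx or K1):

```python
# A heavily simplified, hypothetical sketch of a sentiment-to-signal loop.
# All names, word lists, and thresholds are invented for illustration.
def gauge_sentiment(headlines):
    """Crude lexicon-based sentiment score in [-1, 1], a stand-in for a learned model."""
    positive = {"rally", "beat", "growth", "optimism"}
    negative = {"selloff", "miss", "recession", "fear"}
    score = 0
    for h in headlines:
        words = set(h.lower().split())
        score += len(words & positive) - len(words & negative)
    return max(-1.0, min(1.0, score / max(len(headlines), 1)))

def decide_position(sentiment, threshold=0.2):
    """Map a sentiment score to a target exposure in stock futures."""
    if sentiment > threshold:
        return "long"
    if sentiment < -threshold:
        return "short"
    return "flat"

headlines = ["Stocks rally on growth optimism",
             "Analysts fear recession after earnings miss"]
signal = decide_position(gauge_sentiment(headlines))
print(signal)  # a broker-facing execution layer would then act on this signal
```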

Our current laws are designed to assign responsibility on the basis of intention or the ability to predict an injury. Algorithms have neither, but they are being put to more and more tasks that can produce legal injuries in novel ways. In 2014, the Los Angeles Times published an article that carried the byline: “this post was created by an algorithm written by the author.” The author of the algorithm, Ken Schwencke, allowed the code to produce a story covering an earthquake, not an uncommon event around LA, so tasking an algorithm with producing the news was a time-saving strategy. However, journalism by code can lead to complicated libel suits, as legal theorists discussed when Stephen Colbert used an algorithm to match Fox News personalities with movie reviews from Rotten Tomatoes. Though the claims produced were satire, there could have been a case for libel or defamation, though without a human agent as the direct producer of the claim: “The law would then face a choice between holding someone accountable for a result she did not specifically intend, or permitting without recourse what most any observer would take for defamatory or libelous speech.”
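For a sense of how such templated, data-driven reporting can work, here is a minimal illustrative sketch (the field names and values are hypothetical, not the LA Times’s actual code):

```python
# A minimal sketch of templated, data-driven story generation. The field names
# and values are hypothetical; this is not the LA Times's actual code.
def earthquake_story(quake):
    return (
        f"A magnitude {quake['magnitude']} earthquake was reported "
        f"{quake['distance_km']} km from {quake['place']} on {quake['date']}, "
        f"at a depth of {quake['depth_km']} km. "
        "This post was created by an algorithm written by the author."
    )

quake = {"magnitude": 3.2, "place": "Example City, California",
         "distance_km": 10, "date": "February 14", "depth_km": 8}
print(earthquake_story(quake))
```

Even this simple pipeline publishes sentences no human wrote or reviewed for the particular event, and more sophisticated generators only widen that gap.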

Smart cars are being developed that can cause physical harm and injury based on the decisions of their machine learning algorithms. Further, artificial speech apps are behaving in unanticipated ways: “A Chinese app developer pulled its instant messaging “chatbots”—designed to mimic human conversation—after the bots unexpectedly started criticizing communism. Facebook chatbots began developing a whole new language to communicate with each other—one their creators could not understand.”

Consider: machine-learning algorithms accomplish tasks in ways that cannot be anticipated in advance (indeed, that’s why they are implemented – to do creative, not purely scripted work); and thus they increasingly blur the line between person and instrument, for the designer did not explicitly program how the task will be performed.

When someone directly causes injury, for instance by causing bodily harm with their own body, it is easy to isolate them as the cause. If someone stomps on your foot, this causes a harm. According to the law, they can then be held liable if they have the appropriate mens rea, or guilty mind: for instance, if they intended to cause the injury, knowingly caused it, recklessly caused it, or negligently caused it.

This structure for liability seems to work just as well if the person in question used a tool or instrument. If someone uses a sledgehammer to break your foot, they are still isolated as the cause (as the person moving the sledgehammer around), and can be held liable depending on what their mental state was regarding the sledgehammer-hitting-your-foot (perhaps it was a non-culpable accident). Even if they use a complicated Rube Goldberg machine to break your foot, the same structure seems to work just fine. If someone uses a foot-breaking Rube Goldberg machine to break your foot, they’ve caused you an injury, and depending on their particular mens rea will be liable for some particular legal violation.

Machine learning algorithms put pressure on this framework, however, because when they are used it is not to produce a specific result in the way the Rube Goldberg foot-breaking machine does. The Rube Goldberg foot-breaking machine, though complex, is transparent and has an outcome that is “designed in”: it will smash feet. With machine learning algorithms, there is a break between the designer or user and the product. The outcome is not specifically intended in the way smashing feet is intended by a user of the Rube Goldberg machine. Indeed, it is not even known by the user of the algorithm.

The behavior or choice in cases of machine learning algorithms originates in the artificial intelligence in a way that foot smashing doesn’t originate in the Rube Goldberg machine. Consider: we wouldn’t hold the Rube Goldberg machine liable for a broken foot, but would rather look to the operator or designer. However, in cases of machine learning, the user or designer didn’t come up with the output of the algorithm.

When DeepMind’s AlphaGo won at Go, it made choices that surprised all of the computer scientists involved. AI systems make complex decisions and take actions completely unforeseen by their creators, so when their decisions result in injury, where do we look to apportion blame? It is still the case that you cannot sue algorithms or AI (and, further, what remuneration or punishment would even look like is difficult to imagine).

One model for AI liability interprets machine learning functions in terms of existing product liability frameworks that put burdens of appropriate operation on the producers. The assumptions here are that any harm resulting from a product is due to a faulty product and that the company is liable regardless of mens rea (see, for instance, Escola v. Coca-Cola Bottling Co.). In this framework, the companies that produce the algorithms would be liable for harms that result from smart cars or financial decisions.

Were this framework adopted, Li could be suing 42.cx, the AI company that produced and sold K1; but as it stands, a suit over the promises involved in the sale is what conforms to our current legal standards. The interpretive question at stake is whether, given the description in the terms of sale, K1 could have been predicted to make the decision that resulted in the losses.