
Can Machines Be Morally Responsible?

[Photograph of a robot in front of a chalkboard littered with question marks]

As artificial intelligence becomes more advanced, we find ourselves relying more and more on the decision-making of neural nets and other complex AI systems. If the machine can think and decide in ways that cannot be easily traced back to the decision of one or multiple programmers, who do we hold responsible if, for instance, the AI decision-making reflects the biases and prejudices that we have as human beings? What if someone is hurt by the machine’s discrimination?

To answer this question, we need to know what makes someone or something responsible. The machine certainly causes the processing it performs and the decisions it makes, but is the AI system a morally responsible agent?

Could artificial intelligence have the basic abilities required to be an appropriate target of blame?

Some philosophers think that the ability at the core of moral responsibility is control or choice. This ability is sometimes spelled out in terms of the freedom to do otherwise, but let's set aside the question of whether an AI system is determined or undetermined. Some AI systems do seem to be determined by fixed laws of nature, while others rely on quantum computing and are indeterminate, i.e., they won't produce the same outputs even when given the same inputs under the same conditions. Whether you think responsibility requires determinism or indeterminism, at least some AI systems will meet that requirement. Assume in what follows that the AI system in question is determined or undetermined, according to your philosophical preference.

Can some AI systems exercise control or engage in decision-making? Even though AI decision-making processes do not currently mirror the structure of decision-making in human brains, AI systems can still take inputs and produce judgments based on them. Moreover, some AI algorithms already outperform human reasoning on the same problems. If we built an artificial intelligence complex enough to make its own determinations, ones that do not reduce to its initial human-made inputs and parameters, we would have a plausible candidate for an autonomous agent exercising control in decision-making.

The other primary capacity that philosophers take to be required for responsibility is the ability to recognize reasons. If someone couldn't understand what moral principles required, or the reasons they expressed, it would be unfair to hold them responsible. Sophisticated AI can at least assign weights to different reasons and track the relations between them, including whether certain reasons override others. In addition, an AI trained on images of a certain medical condition can come to recognize the common features that identify someone as having that condition. So AI can come to identify reasons that were never explicitly programmed into it.

What about the recognition of moral reasons? Shouldn’t AI need to have a gut feeling or emotional reaction to get the right moral answer?

While some philosophers think that moral laws are given by reason alone, others think that feelings like empathy or compassion are necessary for moral agency. Some worry that without the right affective states, an agent will wind up a sociopath or psychopath, and these conditions seem to inhibit responsibility. Others think that even psychopaths can be responsible, so long as they can understand moral claims. At the moment, it seems that AI cannot have the same emotional reactions we do, though work is underway to develop AI that can.

Do AI need to be conscious to be responsible? Insofar as we allow that humans can recognize reasons unconsciously and can be held responsible for the resulting judgments, consciousness doesn't seem required for reasons-recognition. For example, I may not consciously judge that members of a given race are less hard-working, but that implicit bias may still affect my hiring practices. If we think it's appropriate to hold me responsible for that bias, then consciousness isn't required for responsibility. It remains an open question whether some AI might develop consciousness, but either way, it seems plausible that an AI system could be responsible at least with regard to the capacity for reasons-recognition. Consciousness may be required for choice on some models, though other philosophers allow that we can be responsible for automatic, unconscious, yet intentional actions.

It seems possible, then, that an artificial intelligence will at some point meet all of the criteria for moral responsibility, at least as far as we can practically tell. When that happens, we should hold the AI system morally responsible, so long as there is no good reason to discount its responsibility; the mere fact that the putative moral agent is artificial wouldn't undermine it. A good reason might instead look like evidence that the AI can't actually understand what morality requires of it, or that it can't make choices in the way responsibility requires. Of course, we would also need to figure out what holding an AI system responsible looks like.

Could we punish the AI? Would it understand blame and feel guilt? What about praise or rewards? These are difficult questions that will depend on what capacities the AI has.

Until then, it's hard to know who to blame and how much to blame them. What do we do if an AI that doesn't meet the criteria for responsibility shows a pattern of discriminatory decision-making? Return to our initial case. Assume that the AI's decision-making can't be reduced to the parameters set by its multiple creators, who themselves appear to be without fault. Additionally, the humans who have relied on the AI have affirmed its judgments without recognizing the patterns of discrimination. Because of these AI-assisted decisions, several people have been harmed. Who do we hold responsible?

One option would be to attach a liability fund to the AI, so that in the event of discrimination, those affected can be compensated. There is some question as to who would pay into the fund: the creators, the users, or both. Another option would be to place the responsibility on the person relying on the AI to aid their decision-making. The idea here is that the buck stops with the human decision-maker, who needs to be aware of possible biases and check them. A final option would be to place the responsibility on the AI's creators, who, perhaps without fault, created the discriminatory AI but took on the burden of that potential consequence by entering the AI business in the first place. They might be required to pay a fine or to take measures to retrain the AI to avoid the discrimination.

The right answer, for now, is probably some combination of the three that can recognize the shared decision-making happening between multiple agents and machines. Even if AI systems become responsible agents someday, shared responsibility will likely remain.

Uninformed Public Is Danger to Democracy

The economy continues to struggle, the educational system underperforms and tensions exist at just about every point on the international landscape. And there is a national presidential selection process underway. It seems, in such an environment, that citizens would feel compelled to get themselves fully up to date on news that matters. It also would stand to reason that the nation’s news media would feel an obligation to focus on news of substance.

Instead, too many citizens are woefully uninformed of the day's significant events. A pandering media, primarily television, is content to push a lowest-common-denominator news agenda, featuring Beyoncé's “Lemonade” release and extensive tributes to Prince.

Constitutional framer James Madison once famously wrote, “Knowledge will forever govern ignorance. And a people who mean to be their own governors must arm themselves with the power which knowledge gives.” Citizens who are unable or unwilling to arm themselves with civic knowledge diminish the nation’s ability to self-govern.

Technological advances have made it easier than ever for citizens to stay informed. The days of waiting for the evening television news to come on or the newspaper to get tossed on your doorstep are long gone. News is available constantly and from multiple sources.

A growing number of citizens, particularly millennials, now rely on social media for “news.” While that might seem like a convenient and timely way to stay informed, those people aren’t necessarily aware of anything more than what their friends had for lunch. Data from the Pew Research Center indicates that about two-thirds of Twitter and Facebook users say they get news from those social media sites. The two “news” categories of most interest among social media consumers, however, are sports and entertainment updates.

Sadly, only about a third of social media users follow an actual news organization or recognized journalist. Thus, the information these people get is likely to be only what friends have posted. Pew further reports that during this election season, only 18 percent of social media users have posted election information on a site. So, less than a fifth of the social media population is helping to set the political agenda for the remaining 82 percent.

The lack of news literacy is consistent with an overall lack of civic literacy in our culture. A Newseum Institute survey last year found that a third of Americans failed to name a single right guaranteed in the First Amendment. Forty-three percent could not name freedom of speech as one of those rights.

A study released earlier this year by the American Council of Trustees and Alumni had more frightening results. In a national multiple-choice survey of college graduates, just 28 percent of respondents could name James Madison as the father of the Constitution; with four choices per question, that's barely better than random chance. Almost half didn't know the term lengths of U.S. senators and representatives. And almost 10 percent identified Judith Sheindlin (Judge Judy) as a Supreme Court justice.

The blame for an under-informed citizenry can be shared widely. The curriculum creep into trendy subjects has infected too many high schools and colleges, diminishing the study of public affairs, civics, history and news literacy.

The television news industry has softened its news agenda to the point where serious news consumers find little substance. Television’s coverage of this presidential election cycle could prompt even the most determined news hounds to tune out. The Media Research Center tracked how the big three broadcast networks covered the Trump campaign in the early evening newscasts of March. The coverage overwhelmingly focused on protests at Trump campaign events, assault charges against a Trump campaign staffer and Trump’s attacks on Heidi Cruz. Missing from the coverage were Trump’s economic plans, national security vision or anything else with a policy dimension.

When the Constitutional Convention wrapped up in 1787, Benjamin Franklin emerged from the closed-door proceedings and was asked what kind of government had been formed. He replied, “A republic, if you can keep it.” Those citizens who, for whatever reasons, are determined to remain uninformed, make it harder to keep that republic intact. Our nation, suffering now from political confusion and ugly protests, sorely needs a renewed commitment to civic knowledge.