Establishing Liability in Artificial Intelligence

Entrepreneur Li Kin-kan is suing over “investment losses triggered by autonomous machines.” Salesman Raffaele Costa convinced Li to let K1, a machine learning algorithm, manage $2.5 billion: $250 million of Li’s own cash and the rest leverage from Citigroup Inc. The AI lost a significant amount of money in a decision that, Li claims, it would not have made had it been as sophisticated as he was led to believe. Because of K1’s autonomous decision-making structure, locating appropriate liability is a provocative question: is the money-losing decision the fault of K1, its designers, Li, or, as Li alleges, the salesman who made claims about K1’s potential?

Developed by an Austria-based AI company, the supercomputer K1 would “comb through online sources like real-time news and social media to gauge investor sentiment and make predictions on U.S. stock futures. It would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned.”
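
K1’s internals are proprietary, but the description suggests a familiar pipeline: scrape text, score sentiment, map the score to a trade, and adapt over time. Here is a minimal sketch of that loop in Python; every function name, keyword list, and threshold is invented for illustration, not drawn from K1 itself.

```python
# Hypothetical sketch of the pipeline described above. K1's actual
# design is proprietary; every name, keyword, and threshold here is
# invented for illustration.

def gauge_sentiment(sources):
    """Score scraped news/social-media text from -1 (bearish) to +1 (bullish)."""
    bullish = ("rally", "surge", "beat")
    bearish = ("selloff", "recession", "miss")
    score = 0
    for text in sources:
        text = text.lower()
        score += sum(text.count(w) for w in bullish)
        score -= sum(text.count(w) for w in bearish)
    return max(-1.0, min(1.0, score / max(len(sources), 1)))

def trading_step(sources, state):
    """One cycle: gauge sentiment, pick a broker instruction, adapt."""
    sentiment = gauge_sentiment(sources)
    order = None
    if sentiment > state["threshold"]:
        order = "BUY_US_STOCK_FUTURES"
    elif sentiment < -state["threshold"]:
        order = "SELL_US_STOCK_FUTURES"
    # "Adjusting its strategy over time": after a losing cycle, raise the
    # threshold so stronger sentiment is required before trading again.
    if state["last_pnl"] < 0:
        state["threshold"] = min(0.9, state["threshold"] * 1.1)
    return order

state = {"threshold": 0.3, "last_pnl": -1.0}
print(trading_step(["Markets rally as earnings beat forecasts"], state))
```

Even in this toy version, the parameter that decides whether to trade drifts with experience, so the rule applied on the next cycle is not one any programmer explicitly wrote.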

Our current laws are designed to assign responsibility on the basis of intention or the ability to predict an injury. Algorithms do neither, yet they are being put to more and more tasks that can produce legal injuries in novel ways. In 2014, the Los Angeles Times published an article that carried the byline: “this post was created by an algorithm written by the author.” The author of the algorithm, Ken Schwencke, allowed the code to produce a story covering an earthquake (not an uncommon event around Los Angeles), so tasking an algorithm with producing the news was a time-saving strategy. However, journalism by code can lead to complicated libel suits, as legal theorists discussed when Stephen Colbert used an algorithm to match Fox News personalities with movie reviews from Rotten Tomatoes. Though the claims produced were satire, there could have been a case for libel or defamation, though without a human agent as the direct producer of the claim: “The law would then face a choice between holding someone accountable for a result she did not specifically intend, or permitting without recourse what most any observer would take for defamatory or libelous speech.”
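
Schwencke’s program (widely reported on as “Quakebot”) filled a story template with structured earthquake alert data. Its actual source is not public; the following Python sketch only illustrates the technique, with hypothetical field names and wording.

```python
# Minimal sketch of template-driven "robot journalism" in the style of
# Schwencke's earthquake bot. The real code is not public; the template
# wording and field names here are illustrative.

TEMPLATE = (
    "A magnitude {mag} earthquake was reported {miles} miles from "
    "{place} on {time}, according to the U.S. Geological Survey. "
    "This post was created by an algorithm written by the author."
)

def draft_story(quake):
    """Fill the template from a structured data feed; no human drafts the prose."""
    return TEMPLATE.format(**quake)

print(draft_story({
    "mag": 4.4,
    "miles": 6,
    "place": "Westwood, California",
    "time": "Monday morning",
}))
```

The liability puzzle follows directly: if a faulty feed or template produced a false and damaging sentence, no human would have specifically intended, or even reviewed, that sentence before publication.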

Smart cars are being developed whose machine learning algorithms can make decisions that cause physical harm and injury. Further, artificial speech apps are behaving in unanticipated ways: “A Chinese app developer pulled its instant messaging ‘chatbots’—designed to mimic human conversation—after the bots unexpectedly started criticizing communism. Facebook chatbots began developing a whole new language to communicate with each other—one their creators could not understand.”

Consider: machine learning algorithms accomplish tasks in ways that cannot be anticipated in advance; indeed, that is why they are deployed, to do creative rather than purely scripted work. They thus increasingly blur the line between person and instrument, because the designer did not explicitly program how the task would be performed.

When someone directly causes an injury, say by stomping on your foot, it is easy to isolate them as the cause of the harm. According to the law, they can then be held liable if they have the appropriate mens rea, or guilty mind: if they intentionally caused the injury, knowingly caused it, recklessly caused it, or negligently caused it.

This structure for liability works just as well if the person in question used a tool or instrument. If someone uses a sledgehammer to break your foot, they are still isolated as the cause (as the person swinging the sledgehammer), and can be held liable depending on their mental state regarding the sledgehammer hitting your foot (perhaps it was a non-culpable accident). Even if they use a complicated Rube Goldberg machine to break your foot, the same structure works fine: they have caused you an injury, and depending on their particular mens rea will be liable for some particular legal violation.

Machine learning algorithms put pressure on this framework, however, because they are not used to produce a specific result the way the Rube Goldberg foot-breaking machine is. That machine, though complex, is transparent and has an outcome that is “designed in”: it will smash feet. With machine learning algorithms, there is a break between the designer or user and the product. The outcome is not specifically intended the way foot-smashing is intended by the machine’s user; indeed, it is not even known in advance by the user of the algorithm.

The behavior or choice in cases of machine learning originates in the artificial intelligence in a way that foot smashing doesn’t originate in the Rube Goldberg machine. Consider: we wouldn’t hold the Rube Goldberg machine liable for a broken foot; we would look to its operator or designer. In cases of machine learning, however, neither the user nor the designer came up with the output of the algorithm.
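
The contrast can be made concrete in code. In the first function below, every outcome is authored line by line; in the second, a toy perceptron (with invented training data and a standard update rule) induces its decision rule from examples, so the rule it later applies was never written by anyone.

```python
# Illustrative contrast between a "designed-in" outcome and a learned one.
# The training examples and learning rate are invented for this sketch.

def rube_goldberg(triggered):
    # Transparent: the designer authored every branch, so the outcome
    # ("smash foot") is specifically intended.
    return "smash foot" if triggered else "do nothing"

def train_perceptron(examples, epochs=25, lr=0.1):
    """Induce a linear decision rule from labeled examples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # standard perceptron update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

examples = [((0.1, 0.9), 0), ((0.9, 0.2), 1), ((0.2, 0.8), 0), ((0.8, 0.1), 1)]
w, b = train_perceptron(examples)
# The rule now encoded in (w, b) was induced from data, not authored; its
# behavior on unseen inputs was never specifically intended by anyone.
print(w, b)
```

The designer chose the training procedure but not the resulting weights, and that gap is exactly where the intention-based analysis loses its grip.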

When DeepMind’s AlphaGo won at Go, it made moves that surprised all of the computer scientists involved. AI systems make complex decisions and take actions completely unforeseen by their creators, so when those decisions result in injury, where do we look to apportion blame? It is still the case that you cannot sue an algorithm or an AI (and, further, what remuneration or punishment would look like is difficult to imagine).

One model for AI liability interprets machine learning systems in terms of existing product liability frameworks, which place the burden of appropriate operation on producers. The assumptions here are that any harm resulting from a product is due to a product defect, and that the company is liable regardless of mens rea (see, for instance, Escola v. Coca-Cola Bottling Co.). In this framework, the companies that produce the algorithms would be liable for harms that result from smart cars or financial decisions.

Were this framework adopted, Li could be suing the AI company that produced or sold K1. As it stands, however, the promises involved in the sale are judged by our current legal standards, and the interpretive question is whether K1 could have been predicted to make the decision that resulted in losses, given the description in the terms of sale.