Profiting from Pandemic

During the last week of March, it was widely reported that members of Congress used information from their privileged briefings on COVID-19 to adjust their holdings in the stock market before that information was made public. Politicians including Georgia Senator Kelly Loeffler, North Carolina Senator Richard Burr, Oklahoma Senator Jim Inhofe, and California Senator Dianne Feinstein all sold suspicious amounts of their holdings around the time of briefings about the oncoming epidemic. If these allegations turn out to be true, the conduct would be illegal: profiting from actions taken on the basis of non-public information is against the law for members of Congress. However, it is legal for members of Congress to hold stocks, and buying and selling securities, or benefiting financially from holdings, while serving in Congress is legally permissible. This makes evaluating the activity of members of Congress difficult, as the legality of their behavior depends on the grounds for their activity.

That we need to determine the mental state of the actor in order to establish the legality of the behavior is not unique to these circumstances. Indeed, it is common in the law for behavior to be considered criminal only if someone performs an action intentionally, knowingly, recklessly, or negligently – all states of mind. Courts and lawyers are adept at creating standards for testing what would qualify as the relevant mental state (or mens rea) for particular crimes, and investigations are underway.

In these circumstances, the possibility that members of Congress may have financially benefited from privileged information is troubling for further reasons. The particular briefings these public servants received concerned an oncoming epidemic that would have a dramatic impact not only on the economy but on public health and safety. Their estimates of that impact would be what prompted the alleged adjustments to their investments, which means they would have been informed about, and concerned by, the epidemic weeks or months before taking any action to mitigate the oncoming national crisis.

The lack of action seems straightforwardly unethical, especially in light of the continued lack of support and action on the part of the federal government as the national crisis escalates and shows every sign of continuing to escalate. The federal government has not intervened sufficiently: after passing a one-time $2 trillion stimulus package, the Senate is no longer in session.

Regarding their use of the information for personal gain: is it reasonable to expect those who could greatly benefit from privileged information to refrain from acting on it? What if they were reasonably certain they wouldn’t get caught? People with privilege and power frequently don’t get caught, and when they do, the penalties for their malfeasance can be far less onerous than the benefits they gained by skirting the moral and legal demands that constrain the rest of us. Some views of human nature are explicitly predicated on the assumption that we are self-interested, so the “rational” action in such cases would be to profit from the information. This line of reasoning supports banning those who hold such privileged information from using it to advantage themselves over those who lack access to it. Some of the members of Congress currently accused of insider trading do, in fact, support such bans.

Establishing Liability in Artificial Intelligence

Entrepreneur Li Kin-kan is suing over “investment losses triggered by autonomous machines.” Raffaele Costa convinced Li to let K1, a machine learning algorithm, manage $2.5 billion ($250 million of his own cash and the rest leverage from Citigroup Inc.). The AI lost a significant amount of money in a decision that, according to the suit, it would not have made had it been as sophisticated as Li was led to believe. Because of K1’s autonomous decision-making structure, locating the appropriate liability raises a provocative question: is the money-losing decision the fault of K1, its designers, Li, or, as Li alleges, the salesman who made claims about K1’s potential?

Developed by Austria-based AI company 42.cx, the supercomputer named K1 would “comb through online sources like real-time news and social media to gauge investor sentiment and make predictions on U.S. stock futures. It would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned.”
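That description amounts to a pipeline: ingest text, gauge sentiment, emit trade instructions, and adjust strategy over time. A minimal sketch of that shape might look like the following; the word lists, threshold rule, and “learning” step are illustrative assumptions, not details of 42.cx’s actual system.

```python
# Hypothetical sketch of a sentiment-to-trade pipeline like the one described
# above. The lexicons, threshold rule, and update step are assumptions only.

POSITIVE = {"rally", "growth", "beat", "optimism", "surge"}
NEGATIVE = {"selloff", "recession", "miss", "fear", "crash"}

def sentiment_score(texts):
    """Crude lexicon-based sentiment: +1 per positive word, -1 per negative."""
    score = 0
    for text in texts:
        words = text.lower().split()
        score += sum(w in POSITIVE for w in words)
        score -= sum(w in NEGATIVE for w in words)
    return score / max(len(texts), 1)

def trade_signal(score, threshold):
    """Map aggregate sentiment to an instruction a broker could execute."""
    if score > threshold:
        return "BUY"
    if score < -threshold:
        return "SELL"
    return "HOLD"

def update_threshold(threshold, signal, realized_return, step=0.1):
    """Toy version of 'adjusting its strategy over time': widen the threshold
    after a losing trade, narrow it after a winning one."""
    if signal == "HOLD":
        return threshold
    won = (signal == "BUY") == (realized_return > 0)
    return max(0.1, threshold - step if won else threshold + step)

if __name__ == "__main__":
    headlines = ["Stocks rally on growth optimism", "Analysts fear a selloff"]
    score = sentiment_score(headlines)           # 0.5 for these two headlines
    signal = trade_signal(score, threshold=0.3)  # "BUY"
    print(signal, update_threshold(0.3, signal, realized_return=-0.02))
```

Even in this toy form, the point of contention is visible: the parameter that decides future trades is changed by the system itself, not set directly by whoever deployed it.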

Our current laws are designed to assign responsibility on the basis of intention or the ability to predict an injury. Algorithms have neither, yet they are being put to more and more tasks that can produce legal injuries in novel ways. In 2014, the Los Angeles Times published an article that carried the byline: “this post was created by an algorithm written by the author.” The author of the algorithm, Ken Schwencke, allowed the code to produce a story covering an earthquake, not an uncommon event around LA, so tasking an algorithm with producing the news was a time-saving strategy. However, journalism by code can lead to complicated libel suits, as legal theorists discussed when Stephen Colbert used an algorithm to match Fox News personalities with movie reviews from Rotten Tomatoes. Though the claims produced were satire, there could have been a case for libel or defamation, though without a human agent as the direct producer of the claim: “The law would then face a choice between holding someone accountable for a result she did not specifically intend, or permitting without recourse what most any observer would take for defamatory or libelous speech.”
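For a sense of how little “intention” such code needs, here is a hedged sketch of template-driven earthquake reporting in the spirit of the LA Times example; the field names and phrasing are assumptions for illustration, not Schwencke’s actual code.

```python
# Illustrative only: fill a fixed story template from structured feed data
# (e.g., a seismic bulletin). Field names and wording are assumptions.

def quake_story(magnitude, place, time_utc):
    return (
        f"A magnitude {magnitude} earthquake was reported {place} at {time_utc} UTC, "
        "according to preliminary data. This post was created by an algorithm "
        "written by the author."
    )

print(quake_story(3.2, "five miles from Westwood, California", "2014-03-17 13:25"))
```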

Smart cars now under development can cause physical harm and injury based on the decisions of their machine learning algorithms. Further, artificial speech apps are behaving in unanticipated ways: “A Chinese app developer pulled its instant messaging “chatbots”—designed to mimic human conversation—after the bots unexpectedly started criticizing communism. Facebook chatbots began developing a whole new language to communicate with each other—one their creators could not understand.”

Consider: machine learning algorithms accomplish tasks in ways that cannot be anticipated in advance (indeed, that’s why they are implemented – to do creative, not purely scripted, work), and thus they increasingly blur the line between person and instrument, because the designer did not explicitly program how the task would be performed.

When someone directly causes an injury, for instance by inflicting bodily harm with their own body, it is easy to isolate them as the cause. If someone stomps on your foot, they have caused you a harm. According to the law, they can then be held liable if they have the appropriate mens rea, or guilty mind: for instance, if they intended to cause that injury, or caused it knowingly, recklessly, or negligently.

This structure for liability seems to work just as well if the person in question used a tool or instrument. If someone uses a sledgehammer to break your foot, they are still isolated as the cause (as the person swinging the sledgehammer), and can be held liable depending on their mental state regarding the sledgehammer hitting your foot (perhaps it was a non-culpable accident). Even if they use a complicated Rube Goldberg machine to break your foot, the same structure works just fine: they have caused you an injury, and depending on their particular mens rea they will be liable for some particular legal violation.

Machine learning algorithms put pressure on this framework, however, because they are not used to produce a specific, predetermined result in the way the Rube Goldberg foot-breaking machine is. The Rube Goldberg machine, though complex, is transparent and has an outcome that is “designed in”: it will smash feet. With machine learning algorithms, there is a break between the designer or user and the product. The outcome is not specifically intended in the way smashing feet is intended by a user of the Rube Goldberg machine; indeed, it is not even known in advance by the user of the algorithm.
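The contrast can be made concrete in a few lines. In the sketch below, which assumes a toy dataset and scikit-learn’s LogisticRegression purely for illustration, the first function’s outcome is written directly by the designer, while the model’s output depends on fitted weights no one typed in.

```python
# Illustrative contrast between a "designed in" outcome and a learned one.
# The dataset, model choice, and prediction point are toy assumptions.

from sklearn.linear_model import LogisticRegression

def rube_goldberg(foot):
    # The outcome is fully specified by the designer: every input gets broken.
    return "broken " + foot

X = [[0.1], [0.4], [0.6], [0.9]]   # toy features
y = [0, 0, 1, 1]                   # toy labels
model = LogisticRegression().fit(X, y)

print(rube_goldberg("foot"))    # result the designer intended directly
print(model.predict([[0.5]]))   # result determined by learned weights that
                                # appear nowhere in this file
```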

In cases of machine learning algorithms, the behavior or choice originates in the artificial intelligence in a way that foot smashing does not originate in the Rube Goldberg machine. Consider: we wouldn’t hold the Rube Goldberg machine liable for a broken foot; we would look instead to the operator or designer. In cases of machine learning, however, the user or designer did not come up with the output of the algorithm.

When DeepMind’s AlphaGo won at Go, it made choices that surprised all of the computer scientists involved. AI systems make complex decisions and take actions completely unforeseen by their creators, so when those decisions result in injury, where do we look to apportion blame? It is still the case that you cannot sue an algorithm or an AI (and, further, it is difficult to imagine what remuneration or punishment would even look like).

One model for AI liability interprets machine learning systems in terms of existing product liability frameworks, which place the burden of appropriate operation on producers. The assumption is that any harm resulting from a product is due to a faulty product, and the company is liable regardless of mens rea (see, for instance, Escola v. Coca-Cola Bottling Co.). In this framework, the companies that produce the algorithms would be liable for harms that result from smart cars or financial decisions.

Were this framework adopted, Li could be suing 42.cx, the AI company that produced K1. As it stands, however, the case turns on the promises involved in the sale, which is what our current legal standards address. The interpretive question at stake is whether K1 could have been predicted to make the decision that resulted in the losses, given the description in the terms of sale.