
High Theory and Ethical AI

There’s been a push to create ethical AI through the development of moral principles embedded into AI engineering. But debate has recently broken out over the extent to which this crusade is warranted. Reports estimate that there are at least 70 sets of ethical AI principles proposed by governments, companies, and ethics organizations. For example, the EU adopted its Ethical Guidelines for Trustworthy AI, which prescribes adherence to four basic principles: respect for human autonomy, prevention of harm, fairness, and explicability.

But critics charge that these precepts are so broad and abstract as to be nearly useless. Without clear ways to translate principle into practice, they are nothing more than hollow virtue signaling. Who’s right?

Because of the novel ethical issues that AI creates, there aren’t pre-existing ethical norms to govern all use cases. To help develop ethics governance, many bodies have borrowed a “high theory” approach from bioethics, in which solving ethical problems involves the application of abstract (or “high”) ethical principles to specific problems. For example, utilitarianism and deontology are usually considered high-level theories, and a high theory approach to bioethics would involve determining how to apply those theories in specific cases. In contrast, a low theory approach is built from the ground up by looking at individual cases first instead of principles.

Complaints about the overreliance on principles in bioethics are well known. Stephen Toulmin’s “The Tyranny of Principles” notes how people can often agree on actions but still disagree about the principles behind them. Brent Mittelstadt has argued against high theory approaches in AI because of the logistical issues that separate tech ethics from bioethics. He notes, for example, that unlike medicine, which has always had the common aim of promoting the health of the patient, AI development has no common aim.

AI development is not a formal profession that entails certain fiduciary responsibilities and obligations. There is no notion of what a “good” AI developer is in the way that there is of a “good” doctor.

As Mittelstadt emphasizes, “the absence of a fiduciary relationship in AI means that users cannot trust that developers will act in their best interests when implementing ethical principles in practice.” He also argues that unlike medicine, where the effects of clinical decision-making are often immediate and observable, the impact of decisions in AI development may never be apparent to developers. AI systems are often opaque in the sense that no one person has a full understanding of the system’s design or function. Tracing decisions, their impacts, and the ethical responsibility for them becomes incredibly difficult. For similar reasons, the broad spectrum of actors involved in AI development, all coming from different technical and professional backgrounds, means that there is no common culture to ensure that abstract principles are collectively understood. An instruction to make AI “fair,” for example, is not specific enough to guide the actions of everyone contributing to development and end-use.

Consider the recent case of the AI rapper who was given a record deal only to have the deal dropped after a backlash over racial stereotypes, or the case of the AI-generated artwork that recently won an art contest over human artists, and think of all the developers involved in making those projects possible.

Is it likely they share a common understanding of a concept like prevention of harm, or a similar way of applying it? Might special principles apply to things like the creation of art?

Mittelstadt points out that high-level principles are uniquely applicable in medicine because there are proven methods in the field to translate principles into practice. All those professional societies, ethics review boards, licensing schemes, and codes of conduct help to do this work by comparing cases and identifying negligent behavior. Even then, high-level principles rarely explicitly factor into clinical decision-making. By comparison, the AI field has no similar shared institutions to allow for the translation of high-level principles into mid-level codes of conduct, and any such translation would have to factor in elements of the technology, the application, the context of use, and local norms. This is why problems persist even as new AI ethics advisory boards are created. While these organizations can prove useful, they also face immense challenges owing to the disconnect between developers and end users.

Despite these criticisms, there are those who argue that high-level ethical principles are crucial for developing ethical AI. Elizabeth Seger has argued that building the kinds of practices that Mittelstadt describes requires a kind of “starting point” that moral principles can provide. Those principles provide a road map and suggest particular avenues for further research.

They represent a first step towards developing the necessary practices and infrastructure, and cultivate a professional culture by establishing behavioral norms within the community.

High-level AI principles, Seger argues, provide a common vocabulary AI developers can use to discuss design challenges and weigh risks and harms. While AI developers already follow principles of optimization and efficiency, a cultural shift around new principles can augment the already existing professional culture. The resulting rules and regulations will have greater efficacy if they appeal to cultural norms and values held by the communities they are applied to. And if the professional culture is able to internalize these norms, then someone working in it will be more likely to respond to the letter and spirit of the policies in place.

It may also be the case that different kinds of ethical problems associated with AI will require different understandings of principles and different application of them during the various stages of development. As Abhishek Gupta of the Montreal AI Ethics Institute has noted, the sheer number of sets of principles and guidelines that attempt to break down or categorize subdomains of moral issues presents an immense challenge. He suggests categorizing principles according to the specific areas – privacy and security, reliability and safety, fairness and inclusiveness, and transparency and accountability – and working on developing concrete applications of those principles within each area.

With many claiming that adopting sets of ethics principles in AI is just “ethics washing,” and with AI development being so broad, perhaps the key to regulating AI is not to focus on what principles should be adopted, but on how the AI development field is organized. Whether we start with high theory or not, getting people from different backgrounds to speak a common ethics language is the first step, and one that may require changing the profession of AI development itself.

Is Now the Time for an Economics Code of Conduct?


One complication of the coronavirus crisis is that it requires that policy decisions weigh public health issues against economic concerns. Economic advisors should be conscious of their own uncertainty as well as the significant and long-term consequences for those acting on their advice. A recent problematic example is economic advisor Peter Navarro attempting to influence decision-making over the use of hydroxychloroquine as a “cure” by claiming that his background in statistics made him qualified to address public health matters. While I suspect few would agree with this kind of policy advising, economic advisors still have a vital role to play in conversations regarding the reopening of the economy. Now that the projected infection rates and fatalities of COVID-19 have been revised downward in many regions, concern has shifted to how and when the economy should be restarted. Economic advisors have given, and will continue to give, advice that could have significant public health consequences. This raises the following question: given that other professions that work for the public good must adhere to codes of professional ethics, is it time for economists to do the same?

First, we need to consider in general terms why this issue is so pertinent now. With mounting job losses and a prolonged period without production, some of the economic forecasts are grim. The risks are so great that the downturn could mirror the Great Depression. The hope is that once restrictions are rescinded, we will face a “V-shaped” recession in which a sudden downturn is followed by a sudden upswing. But the longer the restrictions are in place, the greater the risk that the economy will take longer to recover. Alternatively, there is the risk that if restrictions are lifted too soon, there will be a second wave of infections without a vaccine. This appears to pit economic concerns against public health concerns; however, the problem is complicated by the fact that a recurring public health crisis would be even more costly to the economy than the current downturn. According to economist Andrew Atkeson, if the epidemic continues to grow, the economy will grind to a halt anyway. Even if reopening the economy is warranted, doing so haphazardly will be problematic for both the economy and public health. Economic advising always involves ethical issues, but it is this current question that highlights the ethical significance that policy advice can have.

One might expect that economists, given their potential to bring about significant ethically salient consequences, would have an ethical code to turn to. Such codes are common in other professions relevant to the public good. For example, engineering students in Canada and in the United States graduate with a ceremony in which they recognize their ethical obligations to the discipline and to the public good, and they wear a ring as a symbol of their commitment to those obligations. Other fields (accountants, lawyers, journalists, and more) are bound by professional codes of conduct. In Western medicine, it is common for students to affirm the Hippocratic Oath. Many of these professional codes stress the importance of nonmaleficence, professional integrity, transparency, and accountability. Economists have no such oath that they are expected to affirm or swear by.

Of course, one may ask why any kind of professional code of ethics, particularly when it comes to policy advice, is necessary. According to a value-free ideal of science, the conduct of research and the application of research are two different things. In order to keep the study of economics as non-political and value-free as possible, economists must only consider the accuracy of their findings and report those findings accurately to policy makers; after that, the political and ethical concerns belong to policy makers alone. For example, in his 1956 paper “Valuation and Acceptance of Scientific Hypotheses,” Richard C. Jeffrey argues that scientists are only supposed to assign probabilities to hypotheses and then leave the acceptance of those hypotheses as a matter for the public to decide. On this view, economists should be insulated from policy making and concerns about the public good, as their only function is to analyze the data.

This argument became prominent in many different forms in the 20th century. Robert Nelson, an economist who formerly worked in the Office of Policy Analysis in the Office of the Secretary of the Interior for almost 20 years, recalls from his own working experience the force that this thinking had. Identifying the desire to clearly separate science from politics as a matter of progressive-era thinking, he notes that while this was the expectation, it was never a matter of practice. He explains:

“Economic policy analysts in government, as I was discovering, were not simply told to study the technical means of implementing a given policy and to report the scientific results back to their superiors. Rather, economic policy analysts often functioned themselves as strong advocates for particular policy positions.”

Part of the problem, as Nelson explains it, is that there is a gap between democratic institutions and the degree of expertise required to make complex choices. An expert-policy advisor cannot simply analyze the data and relay their findings because neither the public nor many of these decision makers have the expertise to know what to do with that information. This creates a practical obstacle to the value-free ideal.

In addition, the mere use of certain data or certain statistical indicators can have political salience. As Susan Offutt notes, measurements like unemployment can have political consequences. But so does a lack of agreement on how to measure poverty or a “green” GDP. Deciding what is measured and how is a matter for economists to determine. The analyses found in policy advising are already politically influential even if it is the policy makers who ultimately decide what to do with that information.

Economic advisors right now need to balance a number of concerns. Should the focus be on securing public health? Should the focus be on economic growth? Should personal liberty be a factor? Some of the arguments for establishing an ethical code for economists draw analogies with fields like medicine and environmental policymaking. For example, as in medicine, there is a distinction between experts and those who are the target of that expertise. This creates asymmetries in power, status, and knowledge. In the doctor-patient relationship, this asymmetry creates ethical responsibilities for the physician to do no harm to the patient. This means recognizing the degree of uncertainty before advising and recommending treatments, and not arbitrarily violating the patient’s expressed wishes.

In contrast, economist George DeMartino has argued that economists working for institutions like the IMF, the World Bank, and others have pursued policies on the basis of the optimal anticipated outcome rather than the risk of failure. He describes how, for decades, inhabitants of developing countries have been subject to policies based on this thinking and have suffered for it. He explains:

“The 1980s inaugurated an extraordinary, sustained period of avoidable human suffering in the South, a chief cause of which was the failed neoliberal experiment. I use the word ‘experiment’ purposefully, since it seemed clear then and certainly does now that this was an instance in which economists took advantage of an extraordinary, historically unprecedented opportunity to design and test-drive a shiny new economic model over the objections of what were essentially unwilling subjects across the South.”

Would it be ethical for a doctor to advise risky treatments and then to have them carried out against a patient’s wishes? No. So, why should economists be treated differently if they are capable of causing harm on a large scale? Even if medical codes of ethics are not suited to economics, the relevant differences between medicine and economics do not lead to the conclusion that ethics should be of no concern to the economist.

Returning to our current crisis, stop and think about the potential for death, poverty, unemployment, misery, and suffering that is riding on the decisions which are being influenced by policy advisors right now. Should these people be held accountable to an ethical code of conduct?

In his 2005 paper, DeMartino notes that despite the power and responsibilities that economic advisors can wield, there is no professional ethics body within the field of economics. Even today, prestigious economics programs at MIT and Princeton do not require training in economic ethics. At the end of his paper, DeMartino offers a prospective “Economist’s Oath” that makes reference to using one’s power for the community good, specifies that communities are not mere means to ends, and declares that economics is an imperfect science that carries risks and dangers. Much of what this means in practice would need to be clarified over time, but as a resource to turn to, it could be a promising start. Given that many of these dangers and risks are now present in the COVID-19 crisis, the time may have come when the public should not only expect economic advisors to follow an economics code of ethics, but demand it.

In Search of an AI Research Code of Conduct


The evolution of an entire industry devoted to artificial intelligence has created a need to develop ethical codes of conduct. Ethical concerns about privacy, transparency, and the political and social effects of AI abound. But a recent study from the University of Oxford suggests that borrowing from other fields like medical ethics to refine an AI code of conduct is problematic. Developing an AI ethics means being prepared to predict and address ethical problems and concerns that are entirely new, and this makes it a significant ethical project. How we should proceed in this field is itself a dilemma. Should we proceed with a top-down, principled approach or a bottom-up, experimental approach?

AI ethics can concern itself with everything from the development of intelligent robots to machine learning, predictive analytics, and the algorithms behind social media websites. This is why it is such an expansive area, with some focusing on the ethics of how we should treat artificial intelligence, others on how we can protect privacy, and still others on how the AI behind social media platforms, including AI capable of generating and distributing ‘fake news,’ can influence the political process. In response, many have focused on generating a particular set of principles to guide AI researchers, in many cases borrowing from codes governing other fields, like medical ethics.

The four core principles of medical ethics are respect for patient autonomy, beneficence, non-maleficence, and justice. Essentially these principles hold that one should act in the best interests of a patient while avoiding harm and ensuring fair distribution of medical services. But the recent Oxford study by Brent Mittelstadt argues that the analogical reasoning relating the medical field to the AI field is flawed. There are significant differences between medicine and AI research that make these principles unhelpful or irrelevant.

The field of medicine is more centrally focused on promoting health and has a long history of focusing on the fiduciary duties of those in the profession towards patients. AI research, by contrast, is less homogeneous, with researchers in both the public and private sectors working toward different goals and owing duties to different bodies. AI developers, for instance, do not commit to public service in the same way that a doctor does, as they may be responsible only to shareholders. As the study notes, “The fundamental aims of developers, users, and affected parties do not necessarily align.”

In her book Towards a Code of Ethics for Artificial Intelligence Paula Boddington highlights some of the challenges of establishing a code of ethics for the field. For instance, those working with AI are not required to receive accreditation from any professional body. In fact,

“some self-taught, technically competent person, or a few members of a small scale start up, could be sitting in their mother’s basement right now dreaming up all sorts of powerful AI…Combatting any ethical problems with such ‘wild’ AI is one of the major challenges.”

Additionally, there are mixed attitudes towards AI and its future potential. Boddington notes a divide in opinion: the West is more alarmist compared to nations like Japan and Korea, which are more likely to be open and accepting.

Given these challenges, some have questioned whether an abstract ethical code is the best response. High-level principles abstract enough to cover the entire field will be too vague to be action-guiding, and because of the field’s many subdomains and competing interests, oversight will be difficult. According to Edd Gent,

“AI systems are…created by large interdisciplinary teams in multiple stages of development and deployment, which makes tracking the ethical implications of an individual’s decisions almost impossible, hampering our ability to create standards to guide those choices.”

The situation is not that different from work done in the sciences. Philosopher of science Heather Douglas has argued, for instance, that while ethical codes and ethical review boards can be helpful, constant oversight is impractical, and only scientists can fully appreciate the potential implications of their work. The same could be true of AI researchers. A code of ethical principles will not replace ethical decision-making; in fact, such codes can be morally problematic. As Boddington argues, “The very idea of parceling ethics into a formal ‘code’ can be dangerous.” This is because many ethical problems are going to be new and unique, so ethical choice cannot be a matter of mere compliance. Following ethical codes can lead to complacency as one seeks to check certain boxes and avoid certain penalties without taking the time to critically examine what may be new and unprecedented ethical issues.

What this suggests is that any code of ethics can only be suggestive; it offers abstract principles that can guide AI researchers, but ultimately the researchers themselves will have to make individual ethical judgments. Thus, part of the moral project of developing an AI ethics is going to be the development of good moral judgment by those in the field. Philosopher John Dewey noted this relationship between principles and individual judgment, arguing:

“Principles exist as hypotheses with which to experiment…There is a long record of past experimentation in conduct, and there are cumulative verifications which give many principles a well earned prestige…But social situations alter; and it is also foolish not to observe how old principles actually work under new conditions, and not to modify them so that they will be more effectual instruments in judging new cases.”

This may mirror the thinking of Brent Mittelstadt, who argues for a bottom-up approach to AI ethics that focuses on sub-fields developing ethical principles in response to challenging novel cases. Boddington, for instance, notes the importance of equipping researchers and professionals with the ethical skills to make nuanced decisions in context; they must be able to make contextualized interpretations of rules and to judge when rules are no longer appropriate. Still, such an approach has its challenges: researchers must be aware of the ethical implications of their work, and there still needs to be some oversight.

Part of the solution to this is public input. We as a public need to make sure that corporations, researchers, and governments are aware of the public’s ethical concerns. Boddington recommends that such input include a diversity of opinion, thinking style, and experience. This means not only those who may be affected by AI, but also professional experts outside the AI field, like lawyers, economists, and social scientists, and even those who have no interest in the world of AI, in order to maintain an outside perspective.

Codes of ethics in AI research will continue to develop. The dilemma we face as a society is what such a code should mean, and particularly whether or not it will be institutionalized and enforced. If we adopt a bottom-up approach, then such codes will likely serve only as guidance, or multiple codes will need to be adopted for different areas. If a more principled, top-down approach is adopted, then there will be additional challenges in dealing with novel cases and with oversight. Either way, the public will have a role to play in ensuring that its concerns are heard.