
Black-Box Expertise and AI Discourse

By Kenneth Boyd
7 Aug 2023

It has recently been estimated that new generative AI technology could add up to $4.4 trillion to the global economy. This figure was reported by The New York Times, Bloomberg, Yahoo Finance, The Globe and Mail, and dozens of other news outlets and websites. It’s a big, impressive number that has been interpreted by some as even more reason to get excited about AI, and by others as one more item on a growing list of concerns.

The estimate itself came from a report recently released by consulting firm McKinsey & Company. As the authors of the report prognosticate, AI will make its most significant impact by taking over tasks currently performed by humans: some of these tasks are relatively simple, such as creating “personalized emails,” while others are more complex, such as “communicating with others about operational plans or activities.” Mileage may vary depending on the business, but overall those productivity savings can add up to huge contributions to the economy.

Speculation is one thing, but extraordinary claims require extraordinary evidence. Where one would expect to see a rigorous methodology in the McKinsey report, however, we are instead told that the authors referenced a “proprietary database” and “drew on the experience of more than 100 experts,” none of whom are mentioned. In other words, while it certainly seems plausible that generative AI could add a lot of value to the global economy, when it comes to specific numbers, we’re just being asked to take McKinsey’s word for it. McKinsey are perceived by many to be experts, after all.

It is often perfectly rational to take an expert’s word for something without examining their evidence in detail. Of course, whether McKinsey & Company really are experts when it comes to AI and financial predictions (or, really, anything else for that matter) is up for debate. Regardless, something is troubling about presenting one’s expert opinion in such a way that one could not investigate it even if one wanted to. Call this phenomenon black-box expertise.

Black-box expertise seems to be common and even welcomed in the discourse surrounding new developments in AI, perhaps due to an immense amount of hype and appetite for new information. The result is an arms race of increasingly hyperbolic articles, studies, and statements from legitimate (and purportedly legitimate) experts, ones that are often presented without much in the way of supporting evidence. A discourse that encourages black-box expertise is problematic, however, in that it can make the identification of experts more difficult, and perhaps lead to misplaced trust.

We can consider black-box expertise in a few forms. For instance, an expert may present a conclusion but not make available their methodology, either in whole or in part – this seems to be what’s happening in the McKinsey report. We can also think of cases in which experts might not make available the evidence they used in reaching a conclusion, or the reasoning they used to get there. Expressions of black-box expertise of these kinds have plagued other parts of the AI discourse recently, as well.

For instance, another expert opinion that has been frequently quoted comes from AI expert Paul Christiano, who, when asked about the existential risk posed by AI, claimed: “Overall, maybe we’re talking about a 50/50 chance of catastrophe shortly after we have systems at the human level.” It’s a potentially terrifying prospect, but Christiano is not forthcoming with his reasoning for landing on that number in particular. While his credentials would lead many to consider him a legitimate expert, the basis of his opinions on AI is completely opaque.

Why is black-box expertise a problem, though? One of the benefits of relying on expert opinion is that the experts have done the hard work in figuring things out so that we don’t have to. This is especially helpful when the matter at hand is complex, and when we don’t have the skills or knowledge to figure it out ourselves. It would be odd, for instance, to demand to see all of the evidence, or to scrutinize the methodology, of an expert who works in a field of which we are largely ignorant, since we wouldn’t really know what we were looking at or how to evaluate it. Lest we be skeptics about everything we’re not personally well-versed in, reliance on expertise necessarily requires some amount of trust. So why should it matter how transparent an expert is about the way they reached their opinion?

The first problem is one of identification. As we’ve seen, a fundamental challenge in evaluating whether someone is an expert from the point of view of a non-expert is that non-experts tend to be unable to fully evaluate claims made in that area of expertise. Instead, non-experts rely on different markers of expertise, such as one’s credentials, professional accomplishments, and engagement with others in their respective areas. Crucially, however, non-experts also tend to evaluate expertise on the basis of factors like one’s ability to respond to criticism, one’s provision of reasons for one’s beliefs, and one’s ability to explain one’s views to others. These factors are directly at odds with black-box expertise: an expert who does not make their methodology or reasoning apparent makes it difficult for non-experts to identify them as an expert.

A second and related problem with black-box expertise is that it becomes more difficult for others to identify epistemic trespassers: those who have specialized knowledge or expertise in one area but who make judgments in areas where they lack expertise. Epistemic trespassers are, arguably, rampant in AI discourse. Consider, for example, a recent and widely-reported interview with James Cameron, the director of the original Terminator series of movies. When asked whether he considered artificial intelligence to be an existential risk, he remarked, “I warned you guys in 1984, and you didn’t listen” (referring to the plot of the Terminator movies, in which the existential threat of AI was very tangible). Cameron’s comment makes for a fun headline (one which was featured in an exhausting number of publications), but he is by no measure an expert in artificial intelligence in the year 2023. He may be an accomplished filmmaker, but when it comes to contemporary discussions of AI, he is very much an epistemic trespasser.

Here, then, is a central problem with relying on black-box expertise in AI discourse: expert opinion presented without transparent evidence, methodology, or reasoning can be difficult to distinguish from opinions of non-experts and epistemic trespassers. This can make it difficult for non-experts to navigate an already complex and crowded discourse to identify who should be trusted, and whose word should be taken with a grain of salt.

Given the potential of AI and its tendency to produce headlines that tout it both as a possible savior of the economy and destroyer of the world, being able to identify experts is an important part of creating a discourse that is productive and not simply motivated by fear-mongering and hype. Black-box expertise, like that on display in the McKinsey report and many other commentaries from AI researchers, presents a significant barrier to creating that kind of discourse.

Ken Boyd holds a PhD in philosophy from the University of Toronto. His philosophical work concerns the ways that we can best make sure that we learn from one another, and what goes wrong when we don’t. You can read more about his work at kennethboyd.wordpress.com