Ethics in Culture · Higher Education

The Digital Humanities: Overhyped or Misunderstood?

By Eric Walker
27 Nov 2017

A recent series of articles appearing in The Chronicle of Higher Education has reopened the discussion about the nature of the digital humanities. Some scholars argue the digital humanities are a boon to humanistic inquiry and some argue they’re a detriment, but all sides seem to agree it’s worth understanding just what the scope and ambition of the digital humanities is and ought to be.

What are the digital humanities? This is part of what’s contested, of course, so any answer is bound to displease someone. But the project is, basically, to harness computer power for the purpose of analysis, and to do so as part of recognizably traditional humanities scholarship. Rather than digital humanities, then, perhaps a better name for this kind of endeavor is computer-aided criticism or computer-aided interpretation.

As a typical example, consider Andrew Piper’s investigation into whether the fictionality of a piece of writing is a matter of elusive, context-sensitive things, like how the author intends her words and how the audience takes them, or whether it’s a matter of the words themselves. Rather than drawing conclusions from a few paradigmatic texts, Piper employed a computer to look at about 28,000 of them, calling upon a context-blind computerized pattern-recognition system to see if it could tell, on the basis of diction alone, whether a piece was fiction or nonfiction.
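Piper’s actual models and corpus are not reproduced here, but the underlying idea of context-blind, diction-based classification can be sketched in a few lines. Everything below is an invented toy (the snippets, the overlap scoring); it merely stands in for a study that used roughly 28,000 texts and far more sophisticated statistics:

```python
from collections import Counter

def word_counts(text):
    """Bag-of-words: lowercase tokens with punctuation stripped."""
    return Counter(w.strip(".,!?;:\"'").lower() for w in text.split())

# Tiny hand-labeled corpus (invented snippets, stand-ins for real texts).
fiction = [
    "She whispered his name and the lantern guttered in the dark hallway",
    "He ran through the rain, heart pounding, certain someone was following",
]
nonfiction = [
    "The committee reviewed the annual budget and approved three proposals",
    "Results indicate a significant correlation between the two variables",
]

def profile(texts):
    """Aggregate word counts for one class of texts."""
    total = Counter()
    for t in texts:
        total += word_counts(t)
    return total

fic_profile = profile(fiction)
nonfic_profile = profile(nonfiction)

def classify(text):
    """Score a new text purely by overlap with each class's diction --
    no context, no authorial intent, just word frequencies."""
    words = word_counts(text)
    fic_score = sum(fic_profile[w] for w in words)
    nonfic_score = sum(nonfic_profile[w] for w in words)
    return "fiction" if fic_score >= nonfic_score else "nonfiction"

print(classify("The board approved the proposals and reviewed the budget"))
# prints "nonfiction"
```

The point of the sketch is only that the classifier sees words and nothing else: whatever verdict it reaches, it reaches on diction alone.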

Computer-aided projects like this have attracted a lot of attention over the last decade. They’ve also attracted a lot of funding and tenure-track job openings, in a time when both are scarce. They give off, and are often meant to give off, a whiff of science-like rigor, which enchants administrators, donors, and grantmakers. Naturally, questions have arisen about why, or whether, these projects deserve what they’re receiving.

Timothy Brennan threw down the most recent gauntlet, expressing his skepticism about the digital humanities in a polemical article, “The Digital-Humanities Bust.” His complaints are several, but two stand out. First, Brennan thinks that the questions whose answers are computable aren’t the interesting ones haunting humanities scholars. “[T]he technology demands that it be asked only what it can answer, changing the questions to conform to its own limitations.” The fact that the word ‘whale’ appears some number of times in Moby Dick, Brennan snarks, means only that the word ‘whale’ appears that many times in Moby Dick.

Second, according to Brennan, the digital humanities threaten to replace thinking with picturing. “The digital humanities makes a rookie mistake: it confuses more information for more knowledge.” As Brennan reports, the authors of a recent edited volume collecting papers about the digital humanities “summarize the intellectual activities they promote: ‘digitization, classification, description and metadata, organization, and navigation.’ An amazing list,” Brennan continues, “which leaves out [. . .] what is normally called thinking.” He laments that “[c]omputer circuits may be lightning fast, but they preclude random redirections of inquiry. By design, digital ‘reading’ obviates the natural intelligence of the brain making leaps [. . .] and rushing instinctively to where it means to go.”

At a certain level of granularity, Brennan’s points find some traction. But he’s zooming in when he should be zooming out. The questions that the digital humanities conscript computers to answer, constrained though they may be, surely gain their significance by being part of larger projects defined by larger questions. And, just as surely, these larger questions may recognizably belong to any of the various traditions of humanities scholarship. In this way, computer-aided criticism may be seen to work in concert with more traditional criticism.

Such, anyway, seems to be the upshot of a response to Brennan’s article, “‘Digital’ Is Not the Opposite of ‘Humanities’,” by Sarah E. Bond, Hoyt Long, and Ted Underwood. And while their response is an explicit rejoinder to Brennan’s first point — the uninteresting narrowness of computably answerable questions — it is also an implicit rejoinder to Brennan’s second point, that algorithmically begotten representations are supplanting thinking. For if computer-aided criticism becomes relevant by working alongside more traditional criticism, then the digital humanities can’t forgo the kind of imaginative thinking that Brennan advocates.

Consider a 2016 Atlantic article, in which champions of the digital humanities Richard Jean So and Andrew Piper outlined their project of measuring the impact MFA programs make in the literary world. They described the first part of their project this way: “We began by looking at writers’ diction: whether the words used by MFA writers are noticeably different from those of their non-MFA counterparts. Using a process known as machine learning, we first taught a computer to recognize the words that are unique to each of our groups and then asked it to guess whether a novel (that it hasn’t seen before) was written by someone with an MFA.”  
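So and Piper’s code and data are not reproduced here, but the “teach, then guess on an unseen novel” workflow they describe can be sketched with a toy naive Bayes classifier. The snippets and labels below are invented for illustration; only the train/held-out-guess structure mirrors the described method:

```python
import math
from collections import Counter

def tokens(text):
    return [w.strip(".,;:!?\"'").lower() for w in text.split()]

# Invented training snippets standing in for the two corpora
# (the real study used full novels).
mfa_texts = [
    "The light fell in thin slats across the kitchen table that morning",
    "She noticed the small cruelties first, the way grief rearranges a room",
]
non_mfa_texts = [
    "The detective slammed the door and drew his gun in one motion",
    "The empire's fleet burned against the horizon as the king watched",
]

def train(texts):
    """'Teach' the model one group's diction: word counts plus total size."""
    counts = Counter()
    for t in texts:
        counts.update(tokens(t))
    return counts, sum(counts.values())

mfa_counts, mfa_total = train(mfa_texts)
non_counts, non_total = train(non_mfa_texts)
vocab = set(mfa_counts) | set(non_counts)

def log_prob(text, counts, total):
    """Naive Bayes log-likelihood with add-one smoothing."""
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in tokens(text))

def guess(text):
    """Ask the model whether an unseen passage reads as 'MFA' diction."""
    mfa = log_prob(text, mfa_counts, mfa_total)
    non = log_prob(text, non_counts, non_total)
    return "MFA" if mfa > non else "non-MFA"

print(guess("The fleet burned and the king drew his gun"))
# prints "non-MFA"
```

Note where the human judgments enter: someone chose which texts count as “MFA” and which do not before the model saw a single word — a point the article returns to below.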

Notice what had to take place prior to the computational wizardry: a human being had to classify words according to their perceived uniqueness. Fed into the computer were human judgments, encoded. So the picture that emerges from such computation isn’t — or, at least, isn’t originally — so much a representation of the similarities and differences between two kinds of novel as a representation of human judgments about the similarities and differences between two kinds of novel. Importantly, whether the former similarities and differences find genuine representation in the resulting picture depends upon whether the judgments made prior to computation were sagacious and keen.

Thus, the feature of algorithmic mapping that promises and often delivers distinctive illumination — its massive, cyclopean computational power — can also become its weakness. For the picture produced by computation is no more and no less authoritative than the human judgments supplying the input. As Brennan notes, algorithms encode processes that exclude the possibility of thoughtful deviation. So any narrowness, incautiousness, or bias informing the judgments will be unthinkingly inherited by the algorithmically generated representation.  

For example, as they admit in their article, So and Piper “only gathered novels by non-MFA writers that were reviewed in The New York Times.” Being reviewed in the Times isn’t necessarily a misleading mark of literary excellence, but it’s certainly a narrow one. Such narrowness in judgment isn’t overcome simply by being subjected to computational analysis. And, as Brennan points out, it constrains the significance of the resulting picture.

With this constraint acknowledged, though, humanities scholars would have the opportunity to explore and debate the scope and limits of the picture’s significance and to suggest new directions for research. That is, they’d have the opportunity to exercise skills similar to, if not identical with, the ones they’ve honed in their academic training.

All of this just highlights the importance of expert judgment to the flourishing of the digital humanities. Brennan is mistaken: computer-aided interpretation, far from obviating the need for competent, astute, traditionally forged critical judgment, actually reinforces that need. Thinking doesn’t stop when scholars apply computational methods in their work; indeed, it must precede and follow such application.

Algorithmicity may not automatically increase objectivity, but it can be uniquely illuminating. Computer-generated representations can provide fresh perspectives on subject matters old and new because computers have the power to compile, integrate, and re-articulate knowledge that would, perhaps, otherwise remain ungovernably dispersed across time and space. But it’s up to humanities scholars to assess and convey the significance of these synoptic representations responsibly. Diagrams, tables, graphs, and charts do not speak for themselves.

What tends to rankle critics of a young, upstart practice is not what it does or doesn’t accomplish, but what its most enthusiastic exponents say it accomplishes. Well-meaning ardor can let what’s promised outrun what’s delivered. In a time when the humanities are fighting for their lives, though, we should forgive these exponents their hyperbole and their aspirations to scientific rigor.

But administrators, donors, and grantmakers should also recognize that computational analysis doesn’t inevitably confer rigor and objectivity on its results. A computer-generated representation is only as unbiased as the principled and communicable judgments and sensibilities of the practitioners who make it part of their research program. And it is this feature that the humanities, digital and otherwise, actually share with the sciences.

Eric Walker is a doctoral candidate in philosophy at the University of California, Riverside, writing a dissertation articulating the sense in which a formal symbolism serves as a medium for mathematical investigation. He teaches German idealism, romanticism, existentialism, phenomenology, ethics and the meaning in life, art and aesthetics, formal and informal reasoning, the history of analytic philosophy, and the history and philosophy of science, mathematics, logic, and technology.