
Honesty in Academia

photograph of Harvard's coat of arms

Honesty researcher Francesca Gino, a professor at Harvard Business School, has been accused of fabricating data in multiple published articles.

In one study, participants were given 20 math puzzles and awarded $1 for each one they solved. After grading their own worksheets, test subjects threw them out and reported their results on a separate form. Some participants were asked to sign at the bottom of that form to confirm their report was accurate, while others signed at the top. Gino’s hypothesis was that signing at the top would prime honest behavior, but she then allegedly tampered with the results to produce the intended effect. Gino is now on administrative leave while Harvard conducts a full investigation.

While it would obviously be ironic if Gino had been dishonest while researching honesty, there is a further reason that such dishonesty would be particularly galling, as dishonest research violates one of the cardinal virtues of the academic vocation.

Let me explain. Some readers might already be familiar with the traditional list of the cardinal virtues: Justice, Courage, Prudence, and Temperance. Honesty, of course, is nowhere on this list. So what do I mean when I call honesty a cardinal virtue?

Different vocations have their own characteristic virtues. It is not possible to be a good judge without being particularly just. Likewise, it is not possible to be a good soldier on the front lines without being particularly courageous. That is because each of these vocations emphasizes certain virtues. A soldier must have the virtue of courage to repeatedly thrust themselves into battle, and a judge must have the virtue of justice in order to consistently reach fair verdicts.

Are there any characteristic virtues of the academic vocation? Professors typically have two primary tasks: the generation and transmission of knowledge. In both tasks, truth takes center stage. And this focus on truth means that professors will do better at both by cultivating the intellectual virtues – virtues like open-mindedness, curiosity, and intellectual humility. For this reason, we can think of these intellectual virtues as cardinal virtues of the academic vocation.

But along with these intellectual virtues, honesty is also particularly important for the academic vocation. When students learn from their professors, they often simply take them at their word. Professors are the experts, after all. This makes students especially vulnerable, because if their professors deceive them, they cannot detect it.

This is true to an even greater extent with cutting-edge research. If professors are being dishonest, it could be that no intellectual discoveries are being made at all. In Gino’s case, for example, she may have concealed the fact that the study she performed did not actually support her findings. But without specialized training, few people can understand how new knowledge is generated in the first place, leaving them completely vulnerable to the possibility of academic dishonesty. Only other academics were able to spot the irregularities in Gino’s data that have led to further questions.

We thus have reason to take honesty as a cardinal virtue of the academic vocation as well. Not only do academics need to be open-minded, curious, and humble, but they must also be honest so that they use their training to further higher education’s most important goals. If academics regularly passed off false research and deceived their students, it would threaten to undermine the university enterprise altogether.

Distrust in higher education is on the rise, and to the extent that academics acquire a reputation for dishonesty, trust is sure to decline even further. Gino’s work is just the tip of the iceberg. One of Gino’s co-authors has also been accused of faking his data, and Stanford’s president is stepping down due to questions about his research, yet these are isolated incidents in comparison to the widespread replication crisis. When researchers tried to reproduce the results from 98 published psychology papers, only 39 of the studies could be replicated, meaning that over half of the “research” led to no new discoveries whatsoever.

While a failure of replication does not necessarily mean that the researchers who produced that work were being dishonest, there are many dishonest means that can lead to a study that can’t be replicated, including throwing out data that does not confirm a hypothesis or questionable methods of data analysis. Until the replication crisis, and discoveries of fake data, begin to wane, it will be difficult to restore public trust in social science research.

Is there anything that can be done? While public trust in higher education will not be restored overnight, there are several changes that could potentially help professors cultivate the virtue of honesty. One strategy for curbing our vices is limiting the situations in which we are tempted to do the wrong thing. As one example, pre-registering a study commits a researcher to the design of a study before they run it, removing the opportunity to engage in questionable statistical analysis or disregard the results.

Another way to increase virtuous behavior is to remind ourselves of our values. At the college level, for instance, commitment to an honor code can serve as a moral reminder that reduces cheating. Academic institutions or societies could develop honor codes that academics have to sign in order to submit to journals, or even a signed honor code that is displayed on published articles. While some professors might still be undeterred, others will be reminded of their commitment to the moral values inherent to their vocation.

Universities could also reconsider which professors they hold up as exemplars. For many academic disciplines, researchers that produce the most surprising results, and produce them on a regular basis, are held up as the ideal. But this of course increases the incentive to fudge the numbers to produce interesting “research.” By promoting and honoring professors that have well-established, replicable research, colleges and universities could instead encourage results that will stand the test of time.

None of these solutions is perfect, but by adopting a combination of measures, academics can structure their vocation so that it is more conducive to the development of honesty. It is impossible to eliminate all opportunities for dishonesty, but by creating a culture of honesty and transparency, professors can restore trust in the research they publish and in higher education more generally.

For her 2018 book, Rebel Talent, Francesca Gino opted for the tagline “Why it pays to break the rules at work and life.” The jury is still out on whether that proved true in Gino’s own case. If she was dishonest, breaking the rules enabled her to ascend the ranks, landing at the top of the ladder as a professor at Harvard. To prevent more accusations like these moving forward, universities need to put in the work to ensure that honesty is what’s rewarded in academia.

 

This work was supported by the John Templeton Foundation grant “The Honesty Project” (ID#61842). Nevertheless, the opinions expressed here are those of the author and do not necessarily reflect the views of the Foundation.

Academic Work and Justice for AIs

drawing of robot in a library

As the U.S. academic year draws to a close, the specter of AI-generated essays and exam answers looms large for teachers. The increased use of AI “chatbots” has forced a rapid and fundamental shift in the way that many schools conduct assessments, exacerbated by the fact that – in a number of cases – these chatbots have been able to pass all kinds of academic assessments. Some colleges are now going so far as to offer amnesty to students who confess to cheating with the assistance of AI.

The use of AI as a novel plagiarism tool has all kinds of ethical implications. Here at The Prindle Post, Richard Gibson previously discussed how this practice creates deception and negatively impacts education, while D’Arcy Blaxell instead looked at the repetitive and homogeneous nature of the content these tools will produce. I want to focus on a different question, however – one that, so far, has been largely neglected in discussions of the ethics of AI:

Does justice demand that AIs receive credit for the academic work they create?

The concept of “justice” is a tricky one. At its simplest, though, we might understand justice merely as fairness. And many of us already have an intuitive sense of what this looks like. Suppose, for example, that I am grading a pile of my students’ essays. One of my students, Alejandro, submits a fantastic essay showing a masterful understanding of the course content. I remember, however, that Alejandro has a penchant for wearing yellow t-shirts – a color I abhor. For this reason (and this reason alone) I decide to give him an “F.” Another student of mine, Fiona, instead writes a dismal essay that shows no understanding whatsoever of anything she’s been taught. I, however, am friends with Fiona’s father, and decide to give her an “A” on this basis.

There’s something terribly unfair – or unjust – about this outcome. The grade a student receives should depend solely on the quality of their work, not the color of their T-shirt or whether their parent is a friend of their teacher. Alejandro receives an F when he deserves an A, while Fiona receives an A when she deserves an F.

Consider, now, the case where a student uses an AI chatbot to write their essay. Clearly, it would be unjust for this student to receive a passing grade – they do not deserve to receive credit for work that is not their own. But, then, who should receive credit? If the essay is pass-worthy, then might justice demand that we award this grade to the AI itself? And if that AI passes enough assessments to be awarded a degree, then should it receive this very qualification?

It might seem a preposterous suggestion. But it turns out that it’s difficult to explain why justice would not demand as much.

One response might be to say that the concept of justice doesn’t apply to AIs because AIs aren’t human. But this relies on the very controversial assumption that justice only applies to Homo sapiens – and this is a difficult claim to make. There is, for example, a growing recognition of the interests of non-human animals. These interests make it appropriate to apply certain principles of justice to those animals – holding, for example, that it is unjust for an animal to suffer for the mere amusement of a human audience. Restricting our discussions of justice to humans would preclude us from making claims like this.

Perhaps, then, we might expand our considerations of justice to all beings that are sentient – that is, those that are able to feel pain and pleasure. This is precisely the basis of Peter Singer’s utilitarian approach to the ethical treatment of animals. According to Singer, if an animal can experience pleasure, then it has an interest in pursuing pleasure. Likewise, if something can experience pain, then it has an interest in avoiding pain. These interests then form the basis of ways in which it is just or unjust to treat not just humans, but non-human animals too.

AIs are not sentient (at least, not yet) – they can experience neither pain nor pleasure. This, then, might be an apt basis on which to exclude them from our discussions of justice. But here’s the thing: we don’t want to make sentience a prerequisite for justice. Why not? Because there are many humans who also lack this feature. Consider, for example, a comatose patient or someone with congenital pain insensitivity. Despite the inability of these individuals to experience pain, it would seem unjust to, say, deprive them of medical treatment. Given this, then, sentience cannot be necessary for the application of justice.

Consider, then, a final alternative: We might argue that justice claims are inapplicable to AIs not because they aren’t human or sentient, but because they fail to understand what they write. This is a perennial problem for AIs, and is often explained in terms of the distinction between the syntax (structure) and semantics (meaning) of what we say. Computer programs – by their very nature – run on input/output algorithms. When, for example, a chatbot receives the input “who is your favorite band?” it is programmed to respond with an appropriate output such as “my favorite band is Rage Against the Machine.” Yet, while the structure (i.e., syntax) of this response is correct, there’s no meaning (i.e., semantics) behind the words. It doesn’t understand what a “band” or a “favorite” is. And when it answers with “Rage Against the Machine”, it is not doing so on the basis of its love for the anarchistic lyrics of Zach de la Rocha, or the surreal sonifications of guitarist Tom Morello. Instead, “Rage Against the Machine” is merely a string of words that it knows to be an appropriate output when given the input “who is your favorite band?” This is fundamentally different to what happens when a human answers the very same question.

But here’s the thing: There are many cases where a student’s understanding of a concept is precisely the same as an AI’s understanding of Rage Against the Machine.

When asked what ethical theory Thomas Hobbes was famous for, many students can (correctly) answer “Contractarianism” without any understanding of what that term means. They have merely learned that this is an appropriate output for the given input. What an AI does when answering an essay or exam question, then, might not be so different to what many students have done for as long as educational institutions have existed.

If a human would deserve to receive a passing grade for a particular piece of academic work, then it remains unclear why justice would not require us to award the same grade to an AI for the very same work. We cannot exclude AIs from our considerations of justice merely on the basis that they lack humanity or sentience, as this would also require the (unacceptable) exclusion of many other beings such as animals and coma patients. Similarly, excluding AIs on the basis that they do not understand what they are writing would create a standard that even many students would fall short of. If we wish to deny AIs credit for their work, we need to look elsewhere for a justification.

The Ethics of Self-Citation

image of man in top hat on pedestal with "EGO" sash

In early 2021, the Swiss Academies of Arts and Sciences (SAAS) published an updated set of standards for academic inquiry; among other things, this new “Code of Conduct for Scientific Integrity” aims to encourage high expectations for academic excellence and to “help build a robust culture of scientific integrity that will stand the test of time.” Notably, whereas the Code’s previous version (published in 2008) treated “academic misconduct” simply as a practice based on spreading deceptive misinformation (either intentionally or due to negligence), the new document expands that definition to include a variety of bad habits in academia.

In addition to falsifying or misrepresenting one’s data — including various forms of plagiarism (one of the most familiar academic sins) — the following is a partial list of practices the SAAS will now also consider “academic misconduct”:

  • Failing to adequately consider the expert opinions and theories that make up the current body of knowledge and making incorrect or disparaging statements about divergent opinions and theories;
  • Establishing or supporting journals or platforms lacking proper quality standards;
  • Unjustified and/or selective citation or self-citation;
  • Failing to consider and accept possible harm and risks in connection with research work; and
  • Enabling funders and sponsors to influence the independence of the research methodology or the reporting of research findings.

Going forward, if Swiss academics perform or publish research failing to uphold these standards, they might well find themselves sanctioned or otherwise punished.

To some, these guidelines might seem odd: why, for example, would a researcher attempting to write an academic article not “adequately consider the expert opinions and theories that make up the current body of knowledge” on the relevant topic? Put differently: why would someone seek to contribute to “the current body of knowledge” without knowing that body’s shape?

As Katerina Guba, the director of the Center for Institutional Analysis of Science and Education at the European University at St. Petersburg, explains, “Today, scholars have to publish much more than they did to get an academic position. Intense competition leads to cutting ethical corners apart from the three ‘cardinal sins’ of research conduct — falsification, fabrication and plagiarism.” Given the painful state of the academic job market, researchers can easily find incentives to pad their CVs and puff up their resumes in an attempt to save time and make themselves look better than their peers vying for interviews.

So, let’s talk about self-citation.

In general, self-citation is simply the practice of an academic who cites their own work in later publications they produce. Clearly, this is not necessarily ethically problematic: indeed, in many cases, it might well be required for a researcher to cite themselves in order to be clear about the source of their data, the grounding of their argument, the development of the relevant dialectical exchange, or many other potential reasons — and the SAAS recognizes this. Notice that the new Code warns against “unjustified and/or selective citation or self-citation” — so, when is self-citation unjustified and/or unethical?

Suppose that Moe is applying for a job and lists a series of impressive-sounding awards on his resume; when the hiring manager double-checks Moe’s references, she confirms that Moe did indeed receive the awards of which he boasts. But the manager also learns that one of Moe’s responsibilities at his previous job was selecting the winners of the awards in question — that is to say, Moe gave the awards to himself.

The hiring manager might be suspicious of at least two possibilities regarding Moe’s awards:

  1. It might be the case that Moe didn’t actually deserve the awards and abused his position as “award-giver” to personally profit, or
  2. It might be the case that Moe could have deserved the awards, but ignored other deserving (potentially more-deserving) candidates for the awards that he gave to himself.

Because citation metrics are now a prized commodity among academics, self-citation practices can raise precisely the same worries. Consider the h-index: a researcher’s h-index is the largest number h such that they have published h papers that have each been cited at least h times. In short, the h-index claims to offer a handily quantified measurement of how “influential” someone has been on their academic field.
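Since so much of what follows turns on how easily this metric can be manipulated, here is a minimal sketch of how an h-index is computed and how a handful of self-citations can nudge it upward. The citation counts are invented purely for illustration.

```python
def h_index(citation_counts):
    """Return the h-index for a list of per-paper citation counts:
    the largest h such that h papers each have at least h citations."""
    h = 0
    # Rank papers from most- to least-cited; paper at rank r must have >= r citations.
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

papers = [25, 8, 5, 4, 3]          # hypothetical citation counts
print(h_index(papers))             # 4: four papers have at least 4 citations each

# Padding each paper with just two self-citations lifts the score:
padded = [c + 2 for c in papers]
print(h_index(padded))             # 5
```

The second result illustrates the worry in the text: a researcher who routinely cites their own back catalog can raise their h-index without producing work that anyone else has found influential.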

But, as C. Thi Nguyen has pointed out, these sorts of quantifications not only reduce complicated social phenomena (like “influence”) to thinned-out oversimplifications, but they can be gamed or otherwise manipulated by clever agents who know how to play the game in just the right way. Herein lies one of the problems of self-citation: an unscrupulous academic can inflate their own h-index score (and other such metrics) to make themselves look artificially more impressive by intentionally “awarding themselves” citations, just like Moe granted himself awards in Situation #1.

But, perhaps even more problematic than this, self-citations limit the scope of a researcher’s attention when they are purporting to contribute to the wider academic conversation. Suppose that I’m writing an article about some topic and, rather than review the latest literature on the subject, I instead just cite my own articles from several years (or several decades) ago: depending on the topic, it could easily be the case that I am missing important arguments, observations, or data that have been made in the interim period. Just like Moe in Situation #2, I would have ignored other worthy candidates for citation to instead give the attention to myself — and, in this case, the quality of my new article would suffer as a result.

For example, consider a forthcoming article in the Monash Bioethics Review titled “Can ‘Eugenics’ Be Defended?” Co-written by a panel of six authors, many of whom are well-known in their various fields, the 8-page article’s reference list includes a total of 34 citations — 14 of these references (41%) were authored by one or more of the article’s six contributors (and 5 of them are from the lead author, making him the most-cited researcher on the reference list). While the argument of this particular publication is indeed controversial, my present concern is restricted to the article’s form, rather than its contentious content: the exhibited preference to self-cite seems to have led the authors to ignore almost any bioethicists or philosophers of disability who disagree with their (again, extremely controversial) thesis (save for one reference to an interlocutor of this new publication and one citation of a magazine article). While this new piece repeatedly cites questions that Peter Singer (one of the six co-authors) asked in the early 2000s, it fails to cite any philosophers who have spent several decades providing answers to those very questions, thereby reducing the possible value of its purported contributions to the academic discourse. Indeed, self-citation is not the only dysgenic element of this particular publication, but it is one trait that attentive authors should wish to cull from the herd of academic bad habits.

Overall, recent years have seen increased interest among academics in the sociological features of their disciplinary metrics, with several studies and reports being issued about the nature and practice of self-citation (notably, male academics — or at least those without “short, disrupted, or diverse careers” — seem to be far more likely to self-cite, as are those under pressure to meet certain quantified productivity expectations). In response, some have proposed additional metrics that specifically track self-citations, alternate metrics intended to be more balanced, and upending the culture of “curated scorekeeping” altogether. The SAAS’s move to specifically highlight self-citation’s potential as professional malpractice is another attempt to limit self-serving habits that can threaten the credibility of academic claims to knowledge writ large.

Ultimately, much like the increased notice that “p-hacking” has recently received in wider popular culture — and indeed, the similar story we can tell about at least some elements of “fake news” development online — it might be time to have a similarly widespread conversation about how people should and should not use citations.

Some Ethical Problems with Footnotes

scan of appendix title page from 1978 report

I start this article with a frank confession: I love footnotes; I do not like endnotes.
Grammatical quarrels over the importance of the Oxford comma, the propriety of the singular “they,” and whether or not sentences can rightly end with a preposition have all, in their own ways and for their own reasons, broken out of the ivory tower. However, the question of whether a piece of writing is better served with footnotes (at the bottom of each page) or endnotes (collected at the end of the document) is a dispute which, for now, remains distinctly scholastic.1 Although, as a matter of personal preference, I am selfishly partial to footnotes, I must admit – and will hereafter argue – that, in some situations, endnotes can be the most ethical option for accomplishing a writer’s goal; in others, eliminating the note entirely is the best option.
As Elisabeth Camp explains in a TED Talk from 2017, just like a variety of rhetorical functions in normal speech, footnotes typically do four things for a text:

  1. they offer a quick method for citing references;
  2. they supplement the footnoted sentence with additional information that, though interesting, might not be directly relevant to the essay as a whole,
  3. they evaluate the point made by the footnoted sentence with quick additional commentary or clarification, and
  4. they extend certain thoughts within the essay’s body in speculative directions without trying to argue firmly for particular conclusions.

For each of these functions (though, arguably less so for the matter of citation), the appositive commentary is most accessible when directly available on the same page as the sentence to which it is attached; requiring a reader to turn multiple pages (rather than simply flicking their eyes to the bottom of the current page) to find the note erects a barrier that, in all likelihood, leads to many endnotes going unread. As such, one might argue that if notes are to be used, then they should be easily usable and, in this regard, footnotes are better than endnotes.
However, this assumes something important about how an audience is accessing a piece of writing: as Nick Byrd has pointed out, readers who rely on text-to-speech software are often presented with an unusual barrier precisely because of footnotes when their computer program fails to distinguish between text in the main body of the essay versus text elsewhere. Imagine trying to read this page from top to bottom with no attention to whether some portions are notes or not:

(From The Genesis of Yogācāra-Vijñānavāda: Responses and Reflections by Lambert Schmithausen; thanks to Bryce Huebner for the example)
Although Microsoft Office has available features for managing the flow of its screen reader program for Word document files, the fact that many (if not most) articles and books are available primarily in .pdf or .epub formats means that, for many, heavily footnoted texts are extremely difficult to read.
Given this, two solutions seem clear:

  1. Improve text-to-speech programs (and the various other technical apparatuses on which they rely, such as optical character recognition algorithms) to accommodate heavily footnoted documents.
  2. Diminish the practice of footnoting, perhaps by switching to the already-standardized option of endnoting.

And, given that (1) is far easier said than done, (2) may be the most ethical option in the short term, given concerns about accessibility.
Technically, though, there is at least one more option immediately implementable:
3. Reduce (or functionally eliminate) current academic notation practices altogether.
While it may be true that authors like Vladimir Nabokov, David Foster Wallace, Susanna Clarke, and Mark Z. Danielewski (among plenty of others) have used footnotes to great storytelling effect in their fiction, the genre of the academic text is something quite different. Far less concerned with “world-building” or “scene-setting,” an academic book or article, in general, presents a sustained argument about, or consideration of, a focused topic – something that, arguably, is not well-served by interruptive notation practices, however clever or interesting they might be. Recalling three of Camp’s four notational uses mentioned above, if an author wishes to provide supplementation, evaluation, or extension of the material discussed in a text, then that may either need to be incorporated into the body of the text proper or reserved for a separate text entirely.
Consider the note attached to the first paragraph of this very article – though the information it contains is interesting (and, arguably, important for the main argument of this essay), it could potentially be either deleted or incorporated into the source paragraph without much difficulty. Although this might reduce the “augmentative beauty” of the wry textual aside, it could (outside of unusual situations such as this one where a footnote functions as a recursive demonstration of its source essay’s thesis) make for more streamlined pieces of writing.
But what of Camp’s first function for footnotes: citation? Certainly, giving credit fairly for ideas found elsewhere is a crucial element of honest academic writing, but footnotes are not required to accomplish this, as anyone familiar with parenthetical citations can attest (nor, indeed, are endnotes necessary either). Consider the caption to the above image of a heavily footnoted academic text (as of page 365, the author is already up to note 1663); anyone interested in the source material (both objectively about the text itself and subjectively regarding how I, personally, learned of it) can discover this information without recourse to a foot- or endnote. And though this is a crude example (buttressed by the facility of hypertext links), it is far from an unusual one.
Moreover, introducing constraints on our citation practices might well serve to limit certain unusual abuses that can occur within the system of academic publishing as it stands. For one, concerns about intellectual grandstanding already abound in academia; packed reference lists are one way that this manifests. As Camp describes in her presentation,

“Citations also accumulate authority; they bring authority to the author. They say ‘Hey! Look at me! I know who to cite! I know the right people to pay attention to; that means I’m an authority – you should listen to what I have to say.’…Once you’ve established that you are in the cognoscenti – that you belong, that you have the authority to speak by doing a lot of citation – that, then, puts you in a position to use that in interesting kinds of ways.”

Rather than using citations simply to give credit where it is due, researchers can sometimes cite sources to gain intellectual “street cred” (“library-aisle cred”?) for themselves – a practice particularly easy in the age of the online database and particularly well-served by footnotes which, even if left unread, will still lend an impressive air to a text whose pages are packed with them. And, given that so-called bibliometric data (which tracks how and how frequently a researcher’s work is cited) is becoming ever-more important for early-career academics, “doing a lot of citation” can also increasingly mean simply citing oneself or one’s peers.
Perhaps the most problematic element of citation abuse, however, stems from the combination of easily-accessed digital databases with lax (or over-taxed) researchers; as Ole Bjørn Rekdal has demonstrated, the spread of “academic urban legends” – such as the false belief that spinach is a good source of iron or that sheep are anatomically incapable of swimming – often comes as a result of errors that propagate through the literature, and then through society, without researchers double-checking their sources. Much like a game of telephone, sloppy citation practices allow mistakes to survive within an institutionally-approved environment that is, in theory, designed to squash them. And while sustaining silly stories about farm animals is one thing, when errors are spread unchecked in a way that ultimately influences demonstrably harmful policies – as in the case of a 101-word paragraph cited hundreds of times since its publication in a 1979 issue of the New England Journal of Medicine which (in part) laid the groundwork for today’s opioid abuse crisis – the ethics of citations become sharply important.
All of this is to say: our general love for academic notational practices, and my personal affinity for footnotes, are not neutral positions and deserve to be, themselves, analyzed. In matters both epistemic and ethical, those who care about the accessibility and the accuracy of a text would do well to consider what role that text’s notes are playing – regardless of their location on a given page.
 
1  Although there have been a few articles written in recent years about the value of notes in general, the consistent point of each has been to lament a perceived worsening of public attitudes toward disinformation (with the thought that notes of some kind could help to combat this). None seem to specifically address the need for a particular location of notes within a text.

Classics in the Era of Trump

Photograph of a bookshelf of uniform “Harvard Classics” volumes; visible titles include Don Quixote and The Aeneid

Classical studies, generally thought of as an elite and isolated corner of academic study, has been surprisingly prominent in headlines over the last few years. Victor Davis Hanson, conservative classical scholar and senior fellow at Stanford’s Hoover Institution, has a new book coming out in March 2019, in which he draws parallels between ancient and contemporary politics. In The Case For Trump, as he explained in an interview with The New Yorker, he argues that we ought to think of Donald Trump as a tragic hero straight from the pages of Greek drama. The tragic hero, he says, is defined not by their bravery or altruism. Rather, “the natural expression of their personas can only lead to their own destruction or ostracism from an advancing civilization that they seek to protect. And yet they willingly accept the challenge of service.” As Hanson defines them, heroes are those who solve problems at the risk of vilification, which is exactly what he sees Trump as doing.

In all tragedies, Hanson explains further, “the community doesn’t have the skills or doesn’t have the willpower or doesn’t want to stoop to the corrective method to solve the existential problem,” so the community brings in an outsider, someone willing to get their hands dirty. Hanson is coy about what exactly our “existential problem” is, but the ambiguity is dispelled when he launches into an ill-informed and biased polemic against Mexican immigrants. We’re also left to wonder what “community” he’s referring to, as if the country wasn’t deeply fractured across political lines during and after the presidential election. When did all of us collectively agree that Trump was a necessary evil? Ultimately, we’re left to scratch our heads and ask ourselves why we ought to listen to a classical scholar’s opinion on politics and immigration at all.

The specific argument of his book is perhaps less important than the fact that a classical scholar is presenting an argument about modern politics. Classics has a reputation for being a bulwark of conservatism within academia and culture at large, a tool for enforcing power rather than dismantling it. This understanding is becoming less accurate as the discipline expands; writers from Virginia Woolf to Michel Foucault have used classical literature and mythology to challenge the hegemony of Christian belief (especially in relation to gender and sexuality), and scholars from increasingly diverse backgrounds contribute to ongoing research and debate. Emily Wilson’s version of The Odyssey, the first English translation of the epic poem by a woman, was released only last year, an indication of how the demographic makeup of classical studies is shifting. Still, elements of conservatism persist within the field. The question becomes whether we can tell classical scholars to “stick to writing papers” (or whatever the equivalent here would be of telling football players to only focus on sports) without running the risk of anti-intellectualism. What do we gain and lose by these historical comparisons, and do they enrich or limit our political discussions?

In many cases, this discourse serves to express anxieties over the “fall of Western civilization.” Ancient Rome and Greece are well-established cultural touchstones, the foundation of our political institutions and beliefs. We want to place this tumultuous moment within a kind of historical continuity, which serves to both reify it and hold it at a safe distance.

This was evident in the production of Shakespeare’s Julius Caesar that caused controversy in June of 2017. The director created unmistakable parallels between Trump and Caesar, even giving Calpurnia, Caesar’s wife, an Eastern-European accent. Gregg Henry, the actor playing Caesar, told The Washington Post that the point of the play is “that when a tyrant comes to power and the way you fight that tyrant, it’s very important how you then try to deal with the problem because if you don’t deal with the problem in a proper way, you can end up losing democracy for like, 2000 years.” It’s debatable whether this production is truly referencing classical antiquity or the English literary canon (are we reaching for Shakespeare as a touchstone here or Roman politics, or something else, that nebulous thing called Art?). Either way, the production lent our present moment a historical importance, both paying homage to its particulars and giving it a timeless and universal dimension. One could argue that Hanson’s book serves a similar function, albeit with a different agenda. He’s trying to understand the Trump presidency through Greek mythology, to explain Trump as an archetypal figure. He pins him down as a definitive “type” while glossing over certain individual facets of Trump’s character (namely, racism, misogyny, and financial greed).

The intersection between classical studies and modern politics also reflects growing anxieties over populism. When democracy falters, we rush back to the source to understand what is happening and why. David Stuttard, scholar and Fellow of Goodenough College, London, published a book in late 2018 that served just that purpose. In Nemesis: Alcibiades and the Fall of Athens, he writes that Alcibiades, a divisive Athenian statesman associated with the disintegration of Athenian democracy, wanted to “Make Athens Great Again.” Stuttard calls him the “Donald Trump of Ancient Greece” in another article, further driving home the point. While the comparison (an imperfect one, as pointed out by Ryan Shinkel in the LA Review of Books) is hardly the crux of Stuttard’s book, it is certainly another attempt to bring the past into the present, to make sense of 21st century populism by looking backwards. We see surface-level similarities, “strong men” shaping history, populist politics driven by forceful personalities, and the connections practically make themselves.

These are, in a sense, old problems amplified in our era but not altered beyond recognition. As famed classical scholar Mary Beard points out in her book SPQR, anxiety over shifting boundaries and national identity, over what it means to be a citizen in an ever-expanding world, is a perennial concern. Some classical scholars have even used global warming to link our world with antiquity; Kyle Harper’s The Fate of Rome: Climate, Disease, & the End of an Empire examines the role climate change (albeit climate change beyond the control of the Romans) had in the fall of the Roman Empire, prompting us to consider the impact global warming might have on contemporary politics.

Most of this discourse relies on a view of antiquity as a place of primacy, of visceral and material immediacy. Most of us assume that ancient history tells us what universal behavior is, that it gives us a no-frills look at human nature and is therefore useful for navigating our current political climate. This viewpoint assumes, however, that our experience of reality isn’t shaped by historically-specific institutions and social movements. Dr. Richard Cherwitz, a professor of rhetoric at the University of Texas at Austin, wrote an article in 2017 called “Why Classical Theories of Rhetoric Matter in the Trump Presidency” in which we see such thinking at work. He asserts, “While we think our discourse today is unique to the times and circumstances in which we live, the reality is that patterns of thinking and talking are inherent in the human condition and therefore may be time invariant.” He zeroes in on the Roman idea of stasis, an ancient Roman theory used by lawyers to assess guilt in the courtroom, in part by determining guilt from the way someone behaves. He writes,

“Many legal observers and members of the media reasonably ask: If Trump isn’t guilty of wrongdoing and subsequently of covering it up, why would he say and do the things he does?  After all, as the Romans knew, only a guilty person would behave that way. […] [This] indicates why we should remind ourselves and our students that the ways we think and argue are deeply rooted in the human condition and are explained by the rhetoricians who lived thousands of years ago.”

In other words, Cherwitz says, there is such a thing as universal behavior, and the human condition (and rhetoric, a practice shaped by centuries of discourse, education, and specifically Western understandings of the public sphere) has remained virtually unaltered since the Roman Republic.

So what do these comparisons mean as a whole, and is it entirely ethical for us to make them? On the one hand, scholars are working to untangle the often inscrutable world of modern politics, to provide some solid ground in a civilization that seems to be losing faith in itself. They are reacting to and attempting to remedy our cultural anxiety, which can hardly be condemned. On the other hand, a troublingly one-dimensional view of the current administration can be gleaned in many of these examples. It is a gross oversimplification of reality to claim that authoritarianism, white supremacy, and discrimination against minorities are rooted in basic “human nature”. This pushes the workings of very specific historical processes and institutions to the background, erasing centuries of structural oppression and sidelining factors like class and gender. In that sense, comparisons with the ancient world can be employed as a tactic to deflect rather than elucidate, to shift the blame for our current political climate to human nature, something that is fundamental and immune to the influence of power.

We see a particularly insidious example of this in Hanson’s New Yorker interview, in which he essentially parrots the president’s famously blasé remarks on the Charlottesville riots. He argues that the Alt-right isn’t “monolithic,” that it’s more or less made up of unknowable people with no discernible common ground. In his view, they become a shifting amorphous crowd with no ideological foundation, and are therefore without personal responsibility.

Classical scholars, not without exceptions, generally speak from a position of privilege and are considered worthy of being listened to. We certainly shouldn’t tell them to stick to academic conferences and keep out of politics, as that places limits on the scope of our political discourse, but we ought to remind ourselves of the prestige enjoyed by classical scholars the next time we criticize an athlete (usually a non-white athlete) for “stepping out of line” and speaking out about oppression.

Classical studies is a deeply fascinating and multifaceted field, and includes scholars from all backgrounds and political opinions. It can be both a hotly-contested battleground and fertile terrain for making sense of the present day. However, we need to scrutinize the claims of classical scholars just as we would the claims of any other public figure, and understand the motivations and assumptions that underpin their ideas.

The Digital Humanities: Overhyped or Misunderstood?

An image of the Yale Beinecke Rare Books Library

A recent series of articles appearing in The Chronicle of Higher Education has reopened the discussion about the nature of the digital humanities. Some scholars argue the digital humanities are a boon to humanistic inquiry and some argue they’re a detriment, but all sides seem to agree it’s worth understanding just what the scope and ambition of the digital humanities are and ought to be.


Optimizing the IRB

For the average person, the notion of medical research may conjure dramatic images of lab-coated scientists handling test tubes and analyzing data. What hardly ever comes up, though, is a process some researchers dread: approval by an institutional review board (IRB). Notoriously lengthy and sometimes difficult to navigate, the process is an oft-unseen yet critical piece of conducting research. And, as CNN contributor Robert Klitzman argues, the demands it places on researchers in its current form may have become more of a burden than a benefit.


Who Owns Knowledge? Examining Open Access Policies

In January of this year, activist and world-renowned computer genius Aaron Swartz took his own life. At the time of his death, Swartz, an instrumental figure in the invention of the RSS feed and the online community Reddit, was facing 35 years in prison and exorbitant fines for downloading millions of articles from MIT’s academic databases.

Swartz was a proponent of the free flow of information through the Internet, and his tragic passing garnered national attention for the issue of open access, which enables academics to publish their work for anyone to use without a paywall or password.

Such policies have been spreading through many communities and are of the utmost importance to the world of academia. An increasing number of educational institutions are adopting an open access policy, presenting a challenge to the current system of publication.

Few college students realize that after they graduate, they lose access to the multitude of journals they utilize throughout their college career. Even fewer realize how much this access costs their universities, and that there are so many in the world to whom this information is not available at all.

While universities are considering these policies and adapting them in the ways best suited to each individual institution, these new concepts bring to light important questions. How would the free exchange of information shift society’s views and treatment of higher education? How can this type of infrastructure be implemented, both practically from an economic standpoint, as well as philosophically, without diminishing the validity and caliber of academic work? Should intellectual property be a public commodity? Ultimately, who owns knowledge?

These questions could fundamentally shift the way we educate ourselves as a community. Now is the time to examine how we teach and learn in some of our most formative educational arenas. What are the benefits and drawbacks to open access—can we all profit when discovery is shared, or does open access too drastically alter the educational landscape? And perhaps most important to our campus—what would open access look like at DePauw?

On Monday, April 29th, at 4 PM in the Peeler Auditorium, the Prindle Institute and Roy O. West Library will be hosting a panel to discuss these questions and more. Join Alan Boyd, director of libraries at Oberlin College; Ada Emmett, visiting professor of library sciences at Purdue University; and DePauw Professor Kelsey Kauffman to learn more about the ways in which we create and share knowledge.