The Ethics of Self-Citation
In early 2021, the Swiss Academies of Arts and Sciences (SAAS) published an updated set of standards for academic inquiry; among other things, this new “Code of Conduct for Scientific Integrity” aims to encourage high expectations for academic excellence and to “help build a robust culture of scientific integrity that will stand the test of time.” Notably, whereas the Code’s previous version (published in 2008) treated “academic misconduct” simply as a practice based on spreading deceptive misinformation (either intentionally or due to negligence), the new document expands that definition to include a variety of bad habits in academia.
In addition to falsifying or misrepresenting one’s data and committing various forms of plagiarism (one of the most familiar academic sins), the following is a partial list of practices the SAAS will now also consider “academic misconduct”:
- Failing to adequately consider the expert opinions and theories that make up the current body of knowledge and making incorrect or disparaging statements about divergent opinions and theories;
- Establishing or supporting journals or platforms lacking proper quality standards;
- Unjustified and/or selective citation or self-citation;
- Failing to consider and accept possible harm and risks in connection with research work; and
- Enabling funders and sponsors to influence the independence of the research methodology or the reporting of research findings.
Going forward, if Swiss academics perform or publish research failing to uphold these standards, they might well find themselves sanctioned or otherwise punished.
To some, these guidelines might seem odd: why, for example, would a researcher attempting to write an academic article not “adequately consider the expert opinions and theories that make up the current body of knowledge” on the relevant topic? Put differently: why would someone seek to contribute to “the current body of knowledge” without knowing that body’s shape?
As Katerina Guba, the director of the Center for Institutional Analysis of Science and Education at the European University at St. Petersburg, explains, “Today, scholars have to publish much more than they did to get an academic position. Intense competition leads to cutting ethical corners apart from the three ‘cardinal sins’ of research conduct — falsification, fabrication and plagiarism.” Given the painful state of the academic job market, researchers can easily find incentives to pad their CVs in an attempt to save time and make themselves look better than the peers vying with them for interviews.
So, let’s talk about self-citation.
In general, self-citation is simply the practice of citing one’s own earlier work in a later publication. Clearly, this is not necessarily ethically problematic: indeed, in many cases a researcher might well need to cite themselves to be clear about the source of their data, the grounding of their argument, or the development of the relevant dialectical exchange, among other reasons, and the SAAS recognizes this. Notice that the new Code warns only against “unjustified and/or selective citation or self-citation.” So, when is self-citation unjustified and/or unethical?
Suppose that Moe is applying for a job and lists a series of impressive-sounding awards on his resume; when the hiring manager double-checks Moe’s references, she confirms that Moe did indeed receive the awards of which he boasts. But the manager also learns that one of Moe’s responsibilities at his previous job was selecting the winners of the awards in question — that is to say, Moe gave the awards to himself.
The hiring manager might be suspicious of at least two possibilities regarding Moe’s awards:
- (Situation #1) It might be the case that Moe didn’t actually deserve the awards and abused his position as “award-giver” to personally profit, or
- (Situation #2) It might be the case that Moe did deserve the awards, but ignored other deserving (and potentially more deserving) candidates for the awards that he gave to himself.
Because citation metrics are now a prized commodity among academics, self-citation practices can raise precisely the same worries. Consider the h-index: a score for a researcher’s publication record, defined as the largest number h such that the researcher has published h papers that have each been cited at least h times. In short, the h-index claims to offer a handily quantified measurement of how “influential” someone has been on their academic field.
But, as C. Thi Nguyen has pointed out, these sorts of quantifications not only reduce complicated social phenomena (like “influence”) to thinned-out oversimplifications, but they can also be gamed or otherwise manipulated by clever agents who know how to play the game in just the right way. Herein lies one of the problems of self-citation: an unscrupulous academic can distort their own h-index score (and other such metrics) to make it look artificially larger (and more impressive) by intentionally “awarding themselves” citations, just like Moe granted himself awards in Situation #1.
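To make the arithmetic concrete, here is a minimal sketch (in Python, using purely hypothetical citation counts and an illustrative h_index helper of my own devising) of how the score is computed and how a couple of well-placed self-citations can nudge it upward:

```python
def h_index(citation_counts):
    # The h-index is the largest h such that at least h of the
    # author's papers have each been cited at least h times.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical six-paper record: the h-index is 3, because at least
# 3 papers have 3+ citations but fewer than 4 papers have 4+ citations.
honest = [25, 8, 5, 3, 3, 1]

# The same record after the author slips one self-citation into each of
# the two papers sitting at 3 citations: the h-index jumps to 4.
padded = [25, 8, 5, 4, 4, 1]

print(h_index(honest), h_index(padded))  # prints: 3 4
```

Two self-citations aimed at exactly the right papers are enough to move the headline number, which is just the sort of quiet score-inflation the new Code has in its sights.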
But, perhaps even more problematically, self-citation limits the scope of a researcher’s attention when they purport to contribute to the wider academic conversation. Suppose that I’m writing an article about some topic and, rather than review the latest literature on the subject, I instead just cite my own articles from several years (or several decades) ago: depending on the topic, it could easily be the case that I am missing important arguments, observations, or data that have appeared in the interim. Just like Moe in Situation #2, I would have ignored other worthy candidates for citation to instead give the attention to myself, and, in this case, the quality of my new article would suffer as a result.
For example, consider a forthcoming article in the Monash Bioethics Review titled “Can ‘Eugenics’ Be Defended?” Co-written by a panel of six authors, many of whom are well-known in their various fields, the 8-page article’s reference list includes a total of 34 citations, 14 of which (41%) were authored by one or more of the article’s six contributors (5 of them by the lead author, making him the most-cited researcher on the reference list). While the argument of this particular publication is indeed controversial, my present concern is restricted to the article’s form rather than its contentious content: the exhibited preference to self-cite seems to have led the authors to ignore nearly all of the bioethicists and philosophers of disability who disagree with their (again, extremely controversial) thesis (save for one reference to an interlocutor of this new publication and one citation of a magazine article). While this new piece repeatedly cites questions that Peter Singer (one of the six co-authors) asked in the early 2000s, it fails to cite any of the philosophers who have spent several decades providing answers to those very questions, thereby reducing the possible value of its purported contributions to the academic discourse. Indeed, self-citation is not the only dysgenic element of this particular publication, but it is one trait that attentive authors should wish to cull from the herd of academic bad habits.
Overall, recent years have seen just such an increased interest among academics in the sociological features of their disciplinary metrics, with several studies and reports issued on the nature and practice of self-citation (notably, male academics, or at least those without “short, disrupted, or diverse careers,” seem to be far more likely to self-cite, as are those under pressure to meet certain quantified productivity expectations). In response, some have proposed new metrics that specifically track self-citation, others have suggested alternative measures intended to be more balanced, and still others have called for upending the culture of “curated scorekeeping” altogether. The SAAS’s move to specifically highlight self-citation’s potential as professional malpractice is another attempt to limit self-serving habits that can threaten the credibility of academic claims to knowledge writ large.
Ultimately, much as “p-hacking” has recently received increased notice in wider popular culture (and much as a similar story can be told about at least some elements of “fake news” development online), it might be time to have a similarly widespread conversation about how people should and should not use citations.