
The Morality of the Arts vs. Science Distinction

image of child architect with hard hat standing in front of sketch of city skyline

If one pursues post-secondary education, is it better to study the arts or to focus on the sciences? Given the career opportunities and prestige at stake, it has become a common source of mockery that someone would choose to pursue the arts rather than the sciences. But what makes the arts different from the sciences? And do the ways we make such distinctions, and our reasons for making them, have ethical ramifications?

What is the difference between the liberal arts and the sciences? The concept of “the arts” stretches back to antiquity, when ‘art’ designated a human skill. These skills were used to make things that are artificial (human-made): artifacts. Later, the concept of the liberal arts was used to designate the kind of education required for a free citizen (the term “liberal” designating freedom rather than a political ideology) to take part in civil life. Today, the arts may refer to the fine arts (like painting or music) as well as the liberal arts, including the various humanities (philosophy, history, literature, linguistics, etc.) and the social sciences (like sociology, economics, or political science). These are now held in contrast to the STEM fields (science, technology, engineering, and mathematics).

The distinction made between the arts and the sciences takes on a moral character when the conversation drifts toward what kinds of education we think are important for meeting the needs of modern society. The distinction goes beyond merely what governments or universities claim the difference is, for it is also a distinction made by potential students, parents, taxpayers, employers, and society at large. How does society make that distinction? A quick internet search for the relevant distinctions suggests a tendency to emphasize the objective nature of science and the subjective nature of the arts. Science is about finding truth about the world, whereas the arts focus on producing subjective representations shaped by cultural and historical influences. The sciences are technical, precise, and quantitative. The arts are qualitative, vague, and less focused on right or wrong answers, and thus are thought to lack the rigor of the sciences.

These kinds of sharp distinctions reinforce the idea that the liberal arts are not really worth pursuing, and that higher education should instead be about gaining the skills needed for the workforce and securing high-paying jobs. To add to this, the distinction has been a flashpoint of an ongoing culture war, as the large number of liberal arts memes and critical comments on the internet testifies. The result has been severe cuts in liberal arts education, the elimination of staff and services, and even the elimination of majors. To some this may be progress. If the liberal arts and humanities are subjective, if there is little objective truth to be discovered, then they may not be worth saving.

Justin Stover of the University of Edinburgh, for example, believes that there is no case to be made for the humanities. While defenders of the humanities may argue that they are means of improving and expressing our ideas, that they provide skills that are relevant and transferable to other fields and pursuits, or that they are a search for values, Stover believes that these benefits are hollow. He points out that study in the humanities isn’t necessary for actual artistic expression. And while studies in obscure languages or cultures may foster skills useful for careers outside the academy, these are mere by-products of study, not a strong case for it.

In addressing the matter of value, Stover notes,

“’values’ is a hard thing to put in a long diachronic frame because it is not clear that there is any analogous notion in any culture besides our own. Values can hardly be a necessary component of the humanities — much less the central core — if there was no notion of them for most of the humanities’ history […] values might have a lot to do with Golden Age Spanish literature; but what have they to do with historical linguistics?”

Stover suggests instead that study in the humanities fulfills a social function by creating a prestigious class of people who share certain tastes and manners of judgment, but that ultimately there is no non-question-begging justification for the humanities. He notes, “The humanities do not need to make a case within the university because the humanities are the heart of the university.” One cannot justify the importance of the humanities from outside of the perspective of the humanities.

The moral concern on this issue is less about the morality of defending a liberal arts education compared to a science education, and more about how we are making the distinction itself. Are we talking about methods? Disciplinary norms? The texts? The teaching? Stover’s argument relies on understanding the humanities as an essentially different thing from the sciences. But are there actually good reasons to make these distinctions? Anyone who has studied logic, linguistics, or economics knows how technical those fields can be. By the same token, several studies of the sciences reveal the influence that aesthetic taste can have not only on individual scientists, but on whole scientific communities. The response of scientific communities to the COVID-19 pandemic — disagreements about treatment protocols, publication concerns about observations of the disease, and so on — reveals that the notion that science is a purely objective affair while the arts are purely subjective is more of a slogan than a reality.

Values are not a mere “notion” of university professors and academics. While Stover doesn’t clarify what he means by values, I would suggest that values are at the heart of the liberal arts and humanities — a ‘value’ at its core simply denotes what people take to be important and worth pursuing. My morning coffee is important to me; I pursue it, I prize it, it has value. The humanities have always been a matter of addressing the issues that humans consider important. So the answer to the question of what values have to do with historical linguistics is “a lot.” Languages change over time to reflect the problems, interests, and desires that humans have; linguistic change is a reflection of what is important, of what is valued by a society and why.

But if this is the case, then science and the many STEM fields are not immune from this either. What we choose to focus on in science, technology, and engineering reveals what we care about, what we value (knowledge of climate change, for example, has changed how we value the environment). The notion that the humanities can only aspire to the subjective with only secondary benefits in other areas is a moral failure in thinking. Science is not isolated from society, nor should it be. By the same token, a method and style that focuses on empirical verification and experimentation over subjective elements can improve what the humanities can produce and help us focus on what is important.

In addressing the cross section of human interest and scientific method, philosopher John Dewey notes,

“Science through its physical technological consequences is now determining the relations which human beings, severally and in groups, sustain to one another. If it is incapable of developing moral techniques which will also determine these relations, the split in modern culture goes so deep that not only democracy but all civilized values are doomed.”

The distinction between the arts and the sciences is not essential or absolute, but one of our own creation that reflects our own limited thinking. Any art, just like any science, can aspire toward some degree of critical, experimental objectivity, just as any scientific or engineering pursuit should be understood in terms of its role in the larger human project. The more we try to separate them, the more detrimental it will be to both. The problem of whether there is a case to be made for the arts disappears once we drop the notion that there is complete separation — the more important and interesting moral problem becomes how we might best improve the methods of inquiry that are vital to both.

The Value of Socialization in College

photograph of college students in class

Which initiatives enacted as emergency responses to the pandemic will become permanent, the new norm? Among the many possible legacies is a change in the perceived value of traditional college learning. Colleges and universities whose campuses are now closed will have a responsibility to execute the transition to distance learning effectively so that they can ensure there is not a substantial dip in the quality of education. But in doing so, will they inadvertently degrade the value of in-person learning?

If these colleges fail to execute the transition online well (and some teachers at even the most prestigious schools are ill-equipped for distance learning), they may reveal the value of face-to-face learning. They may show what is lost when students and professors are not interacting together in a classroom and on a campus. But if the colleges do execute the transition well, they reveal that the value of face-to-face learning may be overrated. Some college students have begun asking themselves: if what is accomplished in the classroom can be accomplished online, why incur high tuition and housing costs to go to school?

A 2015 survey found that nearly half of online college students cited affordability as the reason for enrolling in their program. A surprising 78% of respondents indicated that the academic quality of their online learning experience was comparable to or better than their classroom experience. Suppose that the quality of education is comparable; even granting the innumerable benefits of in-person learning, what then can traditional colleges offer that learning from one’s laptop cannot?

Colleges are currently struggling to answer that question. Beyond the growing challenge of the online alternative, many “brick and mortar” institutions were already facing severe financial concerns before the pandemic caused them to close their campuses. An analysis by Forbes last fall found that the financial well-being of private not-for-profit colleges “has deteriorated and many are in danger of closing or merging.” The pandemic and the resulting transition to distance learning have only made the threat more pronounced and immediate, and less abstract and dismissible.

The transition is a threat to the existence of small liberal arts colleges in particular. “Schools are facing unexpected costs as they try to switch their entire classroom instruction apparatus to online-only,” David Jesse reports in USA Today. “That’s a particular challenge for small liberal arts colleges, whose calling cards are face-to-face relationships between faculty and students.”

To stay afloat, institutions without the luxury of deep pockets and long-standing reputations may need to stress the value of the social component of going to school and living on a campus, i.e. experiences that cannot be had through online learning. Indeed, the challenge for residential and liberal arts colleges will be to quantify the rather intangible and ineffable value of socialization in education.

K-12 education is a primary vehicle of socialization. According to the Department of Education, children in the U.S. spend approximately seven hours a day, 180 days a year in the classroom. There students learn socially-desirable behaviors such as teamwork, following schedules, and engaging with others in a respectful manner. They learn how to participate in society.

The same could be said of the college experience, which typically occurs at a formative time in a student’s life. Young people go to college, where they live in a community of similarly-aged individuals and participate in Greek life, sports, student government, and many other social groups. Some extra- and co-curricular activities, such as Ethics Bowl, provide intellectual stimulation alongside opportunities to socialize.

In the unique setting of a college campus, students are exposed to, live with, and befriend people from different regions, with different customs and worldviews. Ideally, this experience teaches them how to interact with other members of society after graduation. Former Ivy League professor Louis Menand argues that college “takes people with disparate backgrounds and beliefs and brings them into line with mainstream norms of reason and taste.” In short, it transforms these very different individuals into a unique class.

Going to college also thrusts most students into a new world of independence, characterized by a constant flurry of adult decisions and responsibilities. For the first time in many of their lives, students must set their own schedules, manage their own finances, and learn how to navigate relationships without the comfort of being under their parents’ roof. They are given a test run of their adult life. As Anne Rondeau, president of College of DuPage, writes, “students are in an environment that challenges them to make important decisions every day.” And those decisions and challenges are not confined solely to their academic pursuits.

Residential and liberal arts colleges are as much an academic experience as they are a social one. While the process of socialization could be achieved elsewhere (in one’s job or through active participation in one’s community), it certainly could not occur via distance learning. The nature of distance learning hampers students’ ability to socialize, limiting everything from spontaneous, casual conversation with professors to opportunities to forge lasting relationships with other students. And in order to succeed, those struggling traditional institutions may need to highlight just that. They need to be equipped to answer the questions that many prospective students and their parents are asking right now: What is the value of your institution’s social experience? And is it worth the cost you are asking?

If those institutions fail to effectively convince families of the value of their social experience and justify the tuition and housing cost, their end may be near. Without residential colleges, prospective students lose one of the most important and successful means of learning how to be members of society.  Of course, that assumes those students will still feel comfortable sharing close spaces on a campus after months of social distancing.

The Moral Case for University Closure

photograph of gate to school with "SCHOOL CLOSED" sign

When it became clear that DePauw University was considering cancelling in-class sessions and having students move out of their living units, I began thinking through a number of reasons why this was something that places of higher education should seriously consider. Collectively these reasons make a strong case for thinking that colleges and universities have a moral duty to take measures to mitigate the spread of coronavirus, and furthermore, that places of higher education might have added responsibilities.

The first set of reasons is more of a response to objections that students and parents might have for thinking that cancelling in-person classes is a bad idea. I’ll address those first. Then I will offer four reasons for thinking that colleges have more responsibility for mitigating the spread of coronavirus than other individuals or institutions.

Things Students and Parents Might Be Thinking:

  1. This won’t be bad for young students
    A common attitude that a student or parent might have is that college students either won’t get coronavirus or, if they do, it won’t be that bad for them. The first attitude is patently false. College students around the country have tested positive for COVID-19, and many more will. The second attitude has a ring of truth to it. Most college students will probably not have severe conditions, but some will. Many young Americans have compromised immune systems due to heart conditions, diabetes, respiratory conditions, and cancers. And the death rates are much higher for coronavirus than for seasonal flu. Top US health officials say that this virus is 10 times more lethal than the seasonal flu. The confirmed case death rate for the flu is about 0.1%, while the confirmed case death rate for the coronavirus is about 3.4%.
  2. We’re all going to get it anyway, so why the drastic measures?
    I’ve heard some people say that it’s inevitable that everyone (or almost everyone) is going to get the virus, and so it doesn’t make sense to take drastic measures to stop its spread. However, whether it’s inevitable or not, there is still significant value in slowing the progression of this disease. Imagine if you owned a restaurant and you were guaranteed to have 1,000,000 customers place an order, but you didn’t know when they were coming. You wouldn’t want them to come all at once or within a few days of each other. You wouldn’t have enough servers or tables to handle them all at once. You would likely run out of food and supplies. It would be better to have those customers spread out over 12 months. COVID-19 is like that. Some healthcare professionals refer to this as “flattening the curve.” As this article explains, we are much better off having people get the disease at different times. It makes it more likely that there will be beds and healthcare workers for those who need them most. It gives the healthcare service industry time to scale up production of vital resources to mitigate the effects of the disease and save more lives. Optimism is high that there will be a vaccine, but it could be at least a year before we have a viable one. Slowing the disease buys us time, so it may in fact not be inevitable that everyone gets it.

Why Universities and Colleges Have Extra Moral Reasons to Slow the Progression of COVID-19
There are good reasons for everyone to take steps to help slow the progression of COVID-19, but colleges might have an even stronger moral duty to do this work.

  1. Higher Education Structure Spreads Disease
    Here is a plausible moral principle: If you are causally responsible for a harm (or potential harm) you have an extra moral reason to take steps to prevent that harm. Universities and colleges are in this position with respect to the coronavirus. As this article explains, the things that make colleges wonderful also make them an exceptionally good breeding ground for pathogens. Universities and colleges are very social institutions. We encourage students to live on campus in close quarters. They attend several different classes a week. When you count the number of classes, co-curriculars, athletics, and social groups (such as fraternity and sorority friends), the average college student is in close contact with hundreds of people every day. Add to this that colleges are global institutions that send faculty and students abroad, and it’s clear that every college or university is a potential hotspot for an outbreak in ways that a lot of other organizations and businesses are not. That puts a greater moral burden on higher education institutions to act.
  2. Vulnerable Groups at Universities and Colleges
    The situation could be even worse at a college or university because the average retirement age for professors tends to be higher than in other professions, and universities often have a vibrant and active community of retired emeritus professors in their midst. This means that there is more at stake locally for your typical college or university. On the plausible assumption that employers have a responsibility to care for the well-being of those they employ, colleges have extra reasons to be concerned about slowing the spread of the virus.
  3. Fundamental Mission to Sustain Democracy
    It is sometimes forgotten that one of the fundamental aims of colleges and universities is to strengthen and sustain a legitimate and flourishing democracy. The Jeffersonian idea is that we need all of these colleges and universities to provide citizens with the knowledge, skills, and capacities to be good, democratic citizens. Our fundamental mission is not to educate students; educating students is simply how colleges and universities think they can best fulfill the fundamental mission of preserving our democracy. To that end, anything that is a potential threat to democracy should be of grave concern to any college or university, and sometimes colleges and universities should be called upon to temporarily suspend the usual ways in which we preserve our democracy, especially if business as usual poses a different sort of threat. And, yes, pandemics are a significant threat to democracy.
  4. Colleges in Small Communities
    This last moral reason applies to colleges and universities in small, rural communities. Colleges and universities have responsibilities to the communities in which they operate. The degree of responsibility is proportional to how much damage the college is uniquely capable of inflicting on the community. When a college is part of a small community with few other large organizations, it bears a greater share of the moral responsibility to limit the ways in which it might cause harm via disease spread. A place like DePauw is the biggest risk factor for Greencastle having an outbreak, and so places like DePauw have extra reasons to consider closing down.

The decision to close a college is disruptive for so many people, and I get the sense that many students think that these are arbitrary and capricious decisions that couldn’t possibly be motivated by sound moral reasoning. Whatever you decide about the wisdom of closing colleges, I hope you do so with the understanding that there are several significant morally relevant considerations that give college administrators and boards legitimate moral reasons to join the fight to slow the progression of this virus.


UPDATE: (3-15-2020)

The Likelihood of College Students Spreading the Virus Without Symptoms
Since this piece was published, it has come to light that people who have the virus but do not have symptoms are playing an even bigger role in the spread of the virus than we previously thought. It’s also coming to light that people who spread the disease without symptoms tend to be 20 years old and younger. That gives colleges and universities even more reason to consider closing, since college age students are likely to contribute to the spread in ways that make mitigation extremely difficult.

University Divestment from Fossil Fuels

photograph of campus building at McGill University

This month, tenured McGill University philosophy professor Gregory Mikkelson resigned from his position. Mikkelson explained that he could no longer work for an institution that professes a commitment to reducing its carbon footprint while continuing to invest in fossil fuels. Mikkelson argued further that the university board’s continued refusal to divest from fossil fuels stands in opposition to the democratic mandate in favor of divestment that has developed across the campus.

Mikkelson’s actions make a powerful statement in a general academic climate in which divestment from fossil fuels has strong support among faculty and students. Some universities have taken action in response. In September 2019, the University of California system announced that it would be cutting fossil fuels from its over $80 billion investment portfolio, citing financial risk as a major motivating factor. The University of California system is the largest educational system in the country, so this move sets an important precedent for other universities under pressure to do the same.

Many prominent schools across the country are resisting pressure to divest. On January 3rd, students of Harvard and Yale Universities staged a protest of their respective universities’ continued support for the fossil fuel industry by storming the field of the annual football game between Harvard and Yale, delaying the game by almost an hour. This is only one such protest; there have been many others over a span of almost a decade. Students, faculty members, and staff have occupied the offices of administrators, held sit-ins, and conducted rallies.

Those who wish to defend continued investment in the fossil fuel industry argue that universities have a fiduciary obligation to students, faculty, and staff. As a result, they need to maintain the most promising investment portfolio possible. They need financial security in order to continue to provide a thriving learning environment. This involves investing in the market that actually exists rather than an idealized market that doesn’t. A portfolio that includes diversified investments in sustainable, renewable sources of energy would be ideal, but many think that the current political climate provides little evidence that this approach would be a wise investment strategy. President Trump can be relied upon to thwart the advance of renewable energy at every turn. At this point, it is unclear how many more years universities will need to make investment decisions that take into account the political realities of living under this administration. Those who make this argument contend that the primary obligation of a university is to provide education to students. Universities can fulfill this obligation if and only if they are financially secure.

Relatedly, some argue that, in keeping with universities’ general fiduciary responsibilities, institutions should avoid making investment decisions that are overly political. Investments that look like political statements could deter future donors, which would limit the potential services the university could provide. In response to this argument, critics are quick to point out that continued investment in fossil fuels is a political statement. Crucially, it is a political statement with which the heart and soul of the university—faculty, staff, and students—tend to strenuously disagree.

Those who want to defend continued investment in the fossil fuel industry argue further that investors are in a better position to change the behavior of fossil fuel companies because they have voting powers on crucial issues. Shareholders are in a position to vote directors and even entire boards out of their jobs if they do not acknowledge and take meaningful action on climate change. Shareholders are in a position to force transparency when it comes to publishing substantive emissions data. When fossil fuel industries are forced to acknowledge the threat that they pose, they may lead the transition to renewables from within.

Many critics are dubious about the authenticity of this proposal. Even if we take it at face value, we don’t have much reason to believe that this approach is motivating the fossil fuel industry at anything approaching the rate we would need to see in order to achieve the necessary change in the right timeframe. To ward off, or, at the very least, minimize, the threat posed by climate change, we need to take significant meaningful action now, rather than waiting the indeterminate amount of time it might take for the fossil fuel industry to make internal changes that seem to be decidedly against their own interests.

Many disagree with the claim that continued investment in fossil fuels provides a university with financial security. In fact, the entire University of California system disagrees. The reasons the UC system offered for their decision to divest were financial rather than ethical. Their argument is that abandoning investment in fossil fuels now in favor of developing a portfolio of sustainable renewable resources cuts their losses later and is consistent with the inevitable green path forward. It simply isn’t possible to continue in the direction we’re headed. We will inevitably change course.

When academic institutions refuse to divest, faculty and students are put in an uncomfortable position—it is difficult for a person who is concerned about climate change to continue in their role at such an institution while avoiding the charge of personal hypocrisy. Students work hard to earn their spots at universities, and they pay dearly for them. The academic job market is notoriously competitive, and professor positions are extremely hard to come by. Many find Mikkelson’s actions admirable, but recognize that they are not in a position to follow in his footsteps.

Divestment sends a powerful message—institutions of higher education will no longer provide financial support to industries that contribute to climate change. The very nature and mission of universities cast such institutions in pivotal roles to usher in a new, healthier, greener future. Far from shying away from this role, universities should embrace it as a natural fit—after all, they ideally prepare young citizens to design, and thrive in, a promising future. Mikkelson recognized that refusal on the part of higher education to divest from fossil fuels is hypocrisy on the part of the university itself—it is antithetical to the goals of excellence in innovation, empathy and compassion toward our fellow living beings and respect for the ecosystems in which we live, as well as clear, rigorous critical thinking that includes the ability to give appropriate weight to supporting evidence.

What’s more, fossil fuel companies have intentionally obfuscated the facts when it comes to the harms posed by climate change. This practice of putting significant roadblocks in the pathway to knowledge about critical issues is not consistent with the pursuit of knowledge that characterizes a college or university. If an academic institution is to act with integrity, it should not continue to support campaigns of misinformation, especially when the stakes are so high.

College Admissions and the Ethics of Unfair Advantages

A boy walks through an aisle of books in a library.

News broke this month of a college admissions scandal in which it was discovered that wealthy and powerful parents were paying thousands of dollars to have their children admitted to prestigious colleges. The fraud was committed in two ways: in the first, SAT and ACT scores were falsified (generally by having someone other than the student write the tests), while in the second, profiles portraying students as elite athletes were forged (often with students’ faces photoshopped onto pre-existing pictures) and used as part of a bribe for admission under athletic scholarship. The primary organizer of the fraud has been arrested and pleaded guilty, while, as of this writing, an increasing number of parents are being sought for prosecution.

Continue reading “College Admissions and the Ethics of Unfair Advantages”

The Ethics of Legacy Admissions

Photograph of a banner that says "office of admissions"

In a statement to the Senate Judiciary Committee, Judge Brett Kavanaugh responded to questions about his college-aged behavior by saying “I got into Yale Law School. That’s the number one law school in the country. I had no connections there. I got there by busting my tail in college.” Although Kavanaugh’s claim to independence was quickly confirmed to be false (his grandfather also attended Yale), his concern to separate himself from the notion of being a “legacy admission” is unsurprising: the very concept grates against the oft-espoused cultural virtue of pulling yourself up by your own bootstraps.

In short, at college admission offices that give preference to such applicants, “legacy” students are those with at least one close relative (such as a parent, sibling, or grandparent) who is an alumnus of the college in question. Being such an applicant grants no guarantee of admission, but it is a standard question on the applications to many elite institutions, particularly in the Ivy League. Universities interested in such information defend the practice on the grounds that it helps improve a college’s yield, or percentage of admitted students who actually end up enrolling, since more admitted students already have emotional connections to the institution. Critics of the practice charge that it inherently prefers traditionally privileged students at the expense of potentially equally qualified applicants from poorer families.

One element cannot be debated: legacy admission rates are indeed higher than those of other students, often two to five times higher than those of first-generation applicants; nearly a third of Harvard’s incoming class of 2021, for example, counted themselves as legacy students. Many schools do not report statistics on the effect of legacy status on the admissions process, but recent student movements at several historic universities are pushing to change such policies. At the very least, they argue, if the natural assumption of collegiate meritocracy is going to be undermined by nepotistic interests, then such effects should be made public.

Although it may be true that legacy applicants are more likely to attend a school to which they apply, it is clearly not the case that legacy students are the only applicants with emotional investments in the schools of their choice. If yield rates are a driving concern in perpetuating this practice, that could easily be addressed during applicant interviews (or even with other written questions on the initial form). In all likelihood, alumni donation rates are also a significant factor in this process, with schools admitting the children of wealthy graduates (affectionately called “development cases”) in the hope of encouraging continued financial support; although perhaps falling short of the legal definition of ‘bribery,’ such financial concerns certainly smell morally problematic.

Of course, in most cases, legacy admits are perfectly qualified candidates for admission on their own merits; it’s hard to tell what sort of benefits are granted to the Ivy League graduate, for example, because of her degree as opposed to her privileged upbringing or, indeed, because of her own talent and effort. No one has suggested that legacy students are inherently undeserving of their positions on the student rolls or that Brett Kavanaugh is not a qualified candidate for a Supreme Court position because of the conditions of his schooling; the concern is rather that granting preferential treatment to legacy students unfairly perpetuates socioeconomic disparities for reasons that are not clearly beneficial to anyone other than the colleges themselves.

No Quick Study: The Ethics of Cognitive Enhancing Study Drugs

In July 2018 the journal Nature reported that the use of cognitive enhancing drugs – or CEDs – has steadily been on the rise. Colloquially referred to as “smart drugs” or “study drugs” due to their ability to enhance memory and concentration, they are properly classified as nootropics, a class of drug that contains popular CEDs like Adderall and Ritalin. While these drugs are common and often effective treatments for ADHD, university students are more and more frequently using them illicitly. There is little wonder why: university can often be exhausting, competitive, and stressful, so it is unsurprising that students would seek out a boost in cognitive power when dealing with that upcoming lengthy assignment or studying for that difficult exam.

Nature also reports that while in the past students may have predominantly relied on the prescriptions of their friends to acquire such drugs, it is becoming significantly easier to buy them online. Any quick Google search will confirm this fact. When researching CEDs for this article, for instance, I came across many advertisements that promised cheap and effective nootropic drugs, delivered to my door in a timely fashion. For example, “Mind Lab Pro” was featured prominently at the top of my search results, a product which brands itself as the “first universal nootropic” and is “formulated to promote a healthy, peak-performing mindstate [sic] known as 100% Brainpower™.” What more could you want?

If you think that all sounds too good to be true, you might be right. As Nature reports,

Debate continues over whether the non-medical uses of prescription drugs boost brain performance. Data suggest that some people benefit from certain drugs in specific situations – for example, surgeons using modafinil – but larger population-wide studies report lesser gains, and conflicting results.

In addition to concerns about the efficacy and safety of using drugs that weren’t specifically prescribed for you, many have raised ethical concerns surrounding student use of CEDs. Putting the potential health risks aside, we can ask: is it ethically suspect to use CEDs as a student?

As is the case with many real-life moral questions, there are a number of arguments on both sides of the issue. The first concern is that using cognitive enhancing drugs to get better grades on exams and assignments constitutes a form of academic dishonesty. Some have compared the use of CEDs to the use of PEDs – performance enhancing drugs – in sports: just as the use of steroids is often considered a form of cheating at sports, so, too, might we think that the use of CEDs should be considered a form of cheating at school. This is certainly how The President’s Council on Bioethics considered the use of CEDs, calling academic accomplishments aided by them “cheating” at worst, and “cheap” at best.

If the use of CEDs is, in fact, cheating, then ethical considerations would certainly speak against their use. However, critics of this argument often point out that there are many popular forms of cognitive enhancers, the use of which is not considered cheating. Caffeine, for example, has noticeable cognitive benefits, but drinking a cup of coffee is not considered a form of cheating, even if being caffeinated played a crucial role in one’s academic accomplishments. Why, then, should it be any different for CEDs?

There are a couple of reasons why we might think that the use of CEDs is more morally problematic than the use of more widely accepted stimulants like caffeine. First of all, we might think that while anyone can buy coffee practically anywhere, access to CEDs is much more restricted. It might then be unfair to use CEDs: we might think that it is morally suspect to take advantage of a drug that can improve one’s cognitive performance if not everyone has the same kind of access to the drug.

However, it is undoubtedly the case that students have different levels of access to things that enhance their cognitive performance in ways that are generally not seen as problematic. For instance, the fact that I may not have to work during the school year, but you do, will no doubt put me at a cognitive advantage, given that I have more time to study, sleep, and relax. But it is not obvious that I am doing anything morally problematic by using those advantages.

A more general ethical concern with using CEDs is that using them creates an uneven playing field. As we’ve seen, one way that the playing field can become uneven is due to differences in access. The other way is the simple difference in cognitive abilities that results from the use of CEDs: it seems unfair that some people should get a boost in abilities that others do not. Again, we can see how the use of CEDs might be compared to the use of PEDs: it doesn’t seem fair that I should have to compete with someone who is using steroids in order to get a spot on the team.

But again, there is reason to think that this kind of unfairness is not necessarily morally problematic. Here’s an argument as to why: students are not on an even playing field with regard to their abilities regardless of their use of cognitive enhancing drugs, so the mere fact that CEDs may contribute to an uneven playing field is not in itself good enough reason to think that we shouldn’t use them. For example, say that we are both studying for an exam in a class built around memorization of facts from a textbook, and that your memory is significantly better than mine. We are clearly playing on an uneven field, but there is nothing morally problematic about you using the superior abilities that you have. And this is just one example out of potentially many: cognitive abilities among students may vary significantly, but this kind of unevenness does not seem to entail any particular ethical concerns.

Finally, one might worry that successes aided by the use of CEDs do not constitute genuine academic achievements, either because of the above concerns about cheating or unfair advantage, or because such accomplishments are not truly due to the abilities of the students themselves. We might think that in order for one’s accomplishments to have value, or to contribute to the strength of one’s character, they should be solely the product of the individual, and not of the individual on drugs. For example, consider a runner who wins a race, but only because they had a particularly strong gust of wind behind them the entire time. We would probably diminish their accomplishment somewhat, because we might think it wasn’t really the runner who was fully responsible for winning. Similarly, we might think that continuous reliance on CEDs is a sign of poor character: we would not think that a runner who only ran races with a strong tailwind was a particularly virtuous runner.

There is one more general ethical concern about the prevalence of CED use, namely that widespread use risks establishing a new status quo. As Nicole Vincent and Emma Jane argue at The Conversation, with increased CED use and acceptance we might create a future in which such use becomes expected – i.e., in which certain types of employment can only be performed satisfactorily if one uses cognitive enhancers – and in which one might be held responsible for failing to use CEDs when doing so would improve one’s results (Vincent and Jane give the example of a surgeon whose focus could be enhanced by the use of CEDs, and who could be found negligent for choosing not to use them).

Proponents of CED use, however, might not find such consequences to be particularly troubling. Indeed, just as coffee consumption has become commonplace and accepted, so too might the use of CEDs. And if there’s nothing wrong with having a morning cup of coffee, then what’s wrong with having your morning Ritalin?

The debate over the ethical implications of CEDs, then, is messy. Regardless of the kinds of arguments you find most convincing, though, it seems clear that, in addition to the general health concerns about using drugs that weren’t prescribed to you, there are significant ethical concerns that one needs to consider before attempting to achieve anything approaching “100% Brainpower”.

Why Give $75 Million to Philosophy?

Image of Johns Hopkins University's Main Campus

When Bill Miller, a wealthy businessman, recently made a $75 million donation to the philosophy profession—specifically, to the Johns Hopkins philosophy department—philosophers rejoiced in unison, right? Not exactly. Some rejoiced while others engaged in a debate. Mike Huemer, a philosopher at the University of Colorado, kicked it off in a Facebook post, which was reposted at the What’s Wrong? blog.

Continue reading “Why Give $75 Million to Philosophy?”

Genetic Research in the Navajo Nation

A photo of Native Americans marching along a highway with flags.

In 2002, the Navajo Nation placed a moratorium on genetic research within its territorial jurisdiction.  Among the motivations were concerns about the misuse of data and the potential for privacy violations.  Many members of the Navajo Nation were opposed to the moratorium, primarily because of the medical benefits of genetic testing.  This month, the Navajo Nation announced that they are considering lifting the moratorium.

Continue reading “Genetic Research in the Navajo Nation”