
Who Should Get an A?

painting of crowded schoolhouse

The New York Times reports that, of every 10 grades assigned to undergraduates at Yale during the last academic year, 8 were either an A or an A-minus. This corresponds to an increase in average GPA of nearly 0.3 points since the turn of the century, from 3.42 up to 3.7. The report comes after similar patterns were uncovered at Harvard in early October, and after a series of university professors were fired over their grading practices, including one at Spelman College last month and, in a high-profile case last year, one at New York University.

There are many ways to understand these popular controversies: perhaps the problem is grade inflation, or perhaps students are still struggling in the wake of the pandemic. Such theories are important to discuss, and significant attention has been devoted to them since the pandemic. But there is another observation we might make here, one that raises questions spanning pre-K through graduate school: disagreements over low test scores and increasingly high grades are often disagreements over the very purpose of education, and the role it plays in our larger society. The question at the heart of the matter is deceptively simple: who should get an A?

When asked this question, two categories of answers may come to mind. The first, and perhaps most common, is: the students who understand the material exceptionally well. The entire idea of grading on a curve is based on this premise: for any given class, a group of students will understand the material exceptionally well, a group will understand it exceptionally poorly, and most will fall somewhere in the middle. Under this scheme, grading — and, by extension, education — functions to stratify students: it supposedly identifies the best and most deserving individuals. And, by assumption, someone must always be on the opposite end of the spectrum — for someone to be the best, someone else must always be the worst. This idea, for better or worse, has had an incredibly deep impact on how we, as a society, understand both grades and education more broadly. When grades function to stratify, good grades become the instrument of meritocratic advancement up the socioeconomic hierarchy.
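To make the stratifying logic concrete, here is a toy sketch in Python (with hypothetical names and quotas, not any school’s actual policy) of grading on a curve: grades are assigned by rank, so a fixed share of the class must always land at the bottom, no matter how well everyone performs.

```python
# Toy sketch of curve-based grading: letter grades are assigned by a
# student's rank in the class, not by any absolute standard.
def curve_grades(scores):
    """Rank students by score and fill fixed quotas: the top 20% get
    an A, the next 30% a B, the next 30% a C, the bottom 20% a D/F."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    cutoffs = [(0.20, "A"), (0.50, "B"), (0.80, "C"), (1.00, "D/F")]
    grades = {}
    for i, student in enumerate(ranked):
        frac = (i + 1) / n  # fraction of the class at or above this rank
        for cut, letter in cutoffs:
            if frac <= cut:
                grades[student] = letter
                break
    return grades

# Even when every score is high, someone must still land at the bottom.
scores = {"Ana": 98, "Ben": 97, "Cal": 96, "Dee": 95, "Eli": 94}
print(curve_grades(scores))
```

Under these quotas, Eli receives a D/F despite scoring 94%; under a competency-based scheme, all five students might well earn an A.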

The logic here will be familiar to any high school student, having been echoed for years. To get a good job, you need good grades in college; to get into college, you need good grades in high school; and to get the best grades in high school, you need after-school tutoring in elementary school, to learn to read as early as possible, and so on. When good grades are a primary vehicle for socioeconomic security, education becomes a bloodsport for which training must begin as early as possible. On this view, the awarding of A’s or A-minuses to 80% of students – as Harvard and Yale and others have done – is an unacceptable obfuscation of who has won; grades no longer function to establish the differentiation on which our broader economy relies.

But mixed in our social consciousness is another concept of grading, built on a different idea of education. Perhaps the student who should get an A is the student who satisfied, to the fullest extent, the expectations of the course. The key difference between this notion and that described above is that, here, everyone can get an A so long as all students satisfy those expectations. Imagine, for example, you’re teaching a class on accounting, designed to introduce students to basic concepts in Microsoft Excel and prepare them for higher-level coursework which will require a basic set of skills and a common vocabulary. If this is the goal of the course, then there is no reason that every student shouldn’t get an A: if the goal is for students to develop certain skills, then it only matters that the goal is met, and the degree to which those goals are surpassed is superfluous to the purpose of the course. With realistic goals, proper teaching, and appropriate effort, every student will develop those skills, and the course will have fulfilled its educational mission. Under this scheme, grading functions to indicate competency, and education functions to cultivate it; education is not about sorting students, but rather, uplifting them as a group.

This may seem a radical idea of education’s purpose, but I’d argue it is more common than one might think. Educational standards, at both the federal and state level, are built on this conception of education: that a high school graduate, for example, should have certain competencies. It is also why grading entirely on a curve is uncommon — if the best student gets a 98% and the worst gets a 95%, it hardly seems appropriate to award an A to the former and an F to the latter — and, further, why educators are often blamed for their students’ poor grades: we expect professors to teach all students a set of material, not merely to succeed in stratifying their students into the best and worst.

Across education, we can see these two ideas of the educational mission — education as stratifying and education as uplifting — coming into conflict with one another. Perhaps they even co-exist within most grading systems, where a C is intended to indicate competency and an A indicates exceptional understanding. But even though we may be intuitively familiar with both, I think there’s reason to take the conflict between them seriously: I would argue that not only do these two concepts of education conflict, but that they’re fundamentally at odds with one another. If stratifying students requires always failing some, then education cannot simultaneously function to uplift all students; and if uplifting all students requires providing second and third chances, then grades and education cannot play their fundamental role in our society’s larger economic system. This is exactly what happened in medical education when Step 1 of the United States Medical Licensing Exam transitioned from a scored system to simple pass/fail: when the change was finalized, residency program directors lost their primary metric for deciding which medical students to interview.

But we can also understand this conflict at a different level. Take the perspective of a professor. Very few educators want to be the gatekeepers of socioeconomic privilege, and most find the idea of failing students unpleasant, especially when those students make a genuine effort: most professors want to teach, to uplift their students, and to share their passion for the subject they have devoted their lives to studying. Or take the perspective of a student. In a stratifying educational system, students are actively punished for helping their classmates, and are tacitly encouraged to undermine other students to improve their standing in the grading hierarchy; in an uplifting system, no such incentives exist, and collaboration is instead encouraged.

Grading controversies are, fundamentally, a debate between these two radically different ideas about education and the social role it should serve. Should education uplift all, or determine who can go on? Should education be rigorous and challenging, or designed to accommodate the flourishing of students? These are not easy questions, but they are questions we will continue to face until the contradiction inherent to modern education is resolved.

Coronavirus, College Board, and AP Exams

photograph of scantron exam being filled in with pencil

The last Advanced Placement (AP) exams finished on May 22nd, marking the end of a jam-packed two weeks of AP testing. This year, however, was no normal year for AP exams. Due to school closures from the coronavirus pandemic, AP tests could no longer be administered in schools as usual, and were instead taken at home. As tests moved online, they were quickly modified to a significantly shorter format. AP tests are usually quite time-consuming, with a full exam lasting around 4 hours, but this year’s exams were shortened to just 50 minutes. Although this decision was initially praised by many students and teachers, the newly formatted online tests brought with them a number of problems. From technological issues with submitting answers to poorly formatted test questions and unfair testing environments, problems with the new AP exam arose consistently throughout the two-week testing period. As a result, College Board is now facing a 500-million-dollar lawsuit with claims of “breach of contract, gross negligence, misrepresentation and violations of the Americans With Disabilities Act.”

Of the many issues students experienced during the AP exams, one glaring problem of the newly formatted tests stood out: the high degree of randomness in students’ scores. To understand this, one needs to compare the original AP test to the new one. The original AP tests consisted of a multiple-choice section and a writing section, with the multiple-choice section representing the larger percentage of the final score. This year, however, the multiple choice was completely eliminated, leaving students with a significantly shortened writing portion. This introduced randomness: students could not be tested on the full material of the course, but only on a small selection of it. Such testing can easily yield a score unrepresentative of a student’s knowledge if the student happens to be tested on a concept in which he or she is considerably weaker or stronger. Since a shorter exam tests only a small, effectively random range of concepts, it cannot possibly measure the student’s knowledge of the course material holistically.

A similar thing could be said for the types of questions given. In the original writing sections of many exams, specifically history and English exams, the writing section consists of differently formatted questions. In AP history exams, for example, there is a document-based question, a long essay question, and a short answer question. This year, however, only a modified version of the document-based question was given. Not only did exams test a small range of concepts out of the entire year’s worth of material, they tested students on the document-based question alone, which is widely regarded as the most difficult part of history exams. Testing only on the document-based question gives an incomplete assessment of a student’s knowledge given different students’ strengths; some students do better on different question formats (multiple choice, short answer, long essay).

Adding to the randomness, many exams, particularly STEM exams, were formatted as a single multipart question, where question 1, for example, has parts A through L. One might think this multipart format would be better at testing a wide range of concepts. However, there is a catch: the questions are formatted so that each answer depends on the previous part. Part D of question 1, for example, would need the answer from part C to reach the correct answer, part C would need the answer from part B, and so on. So if a student were to get part B wrong, a chain reaction would cause the student to miss parts B, C, and D as well. The student could fully understand the concepts needed for C and D, but would get them wrong due to the missed answer on part B. On this year’s AP tests, this formatting was pushed to the extreme, with as many as five subsequent parts depending on the answer to the first. On a regular AP test, a wrong answer on such a question would hurt, but there would always be multiple writing questions and a large multiple-choice section to balance out wrong answers to multipart questions. On this shortened exam, however, a single wrong answer could have an extremely detrimental effect on the final test score, yielding a score not representative of the student’s actual knowledge of the course material.
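To make the chain reaction concrete, here is a toy sketch in Python (hypothetical scoring logic, not College Board’s actual rubric) of how one early mistake propagates through dependent parts of a question:

```python
# Toy sketch of chained multipart scoring: each part's answer feeds
# into the next, so one early mistake invalidates every later part.
def score_chained_parts(answers, key):
    """Award one point per part, but treat a part as wrong once any
    earlier part in the chain is wrong, since its input is bad."""
    points = 0
    chain_intact = True
    for given, correct in zip(answers, key):
        chain_intact = chain_intact and (given == correct)
        if chain_intact:
            points += 1
    return points

# A single slip on part B wipes out parts B through E.
key     = ["a", "b", "c", "d", "e"]
answers = ["a", "x", "c", "d", "e"]  # only part B is actually wrong
print(score_chained_parts(answers, key))  # prints 1: only part A earns credit
```

Here the student answered four of five parts correctly in isolation, yet earns credit for only one; an independent-parts rubric would have awarded four.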

So why does all this matter? AP exams determine whether a student receives college credit for the course and also play a role in the college admissions process. In most cases, a score of 3 or 4 (out of 5) or above on an AP exam will earn college credit for the course. With a year’s worth of hard work on the line, an exam should minimize randomness and reflect the student’s knowledge of the subject. However, with multipart questions and only a fraction of the course material tested, this year’s exam produced unrepresentative scores. A student, by bad luck, could be tested on the one concept in which he or she was weak, leading to an exam score that denies a year’s worth of hard work.

But College Board’s poorly formatted exams were only the tip of the iceberg for many students. Other sources of randomness and external obstacles plagued this year’s AP exams as well.

One significant issue was undoubtedly the widespread technological problems, more specifically, students’ issues with uploading and submitting exam answers. Videos of students unable to submit their responses were posted all over social media. Although College Board reported that only 1% of students were unable to submit their responses, that amounts to almost 10,000 students. Many of them, through no fault of their own, will now have to retake the AP exam in early June, bearing the burden of College Board’s mismanagement of its online servers, a burden over which they had no control.

On top of this, online AP exams were clearly unable to create a fair testing environment. Any test or exam, especially one that determines college credit, is at minimum expected to provide a fair testing environment. The online AP exams failed to meet this standard. Critics have argued that online AP tests disregard the fact that many students may not have access to reliable internet. Many low-income students depend on the educational resources (wifi, books, computers) provided by schools and public institutions like libraries; without access to those resources, many won’t even get a chance to take the test. With so many experiencing economic hardship due to COVID-19, and with public educational resources inaccessible, AP tests cannot adequately or accurately measure students’ knowledge of the course material. And because AP scores play a part in college admissions, unfair testing environments could disproportionately affect different groups of students, undermining our notion of meritocracy in education.

Furthermore, taking tests at home comes with many other obstacles to a fair testing environment, from external distractions such as siblings to something as simple as the time at which AP exams are scheduled. Many students at American international schools around the world, for example, were forced to take tests at inadequate hours: a 2pm EST test is a 3am test for students in Japan. Perhaps the biggest factor contributing to an unfair testing environment, however, is the potential for collaboration on these exams. Without a proctor present, students can easily obtain a competitive advantage through cheating. With so many external factors complicating online testing, these online AP tests failed to provide a fair testing environment.

So why, despite the clear problems of unfair testing environments, shortened test formats, and technological failures, did College Board decide to go ahead with the AP tests? Why didn’t College Board follow the lead of other academic organizations such as the International Baccalaureate (IB), which cancelled its exams and instead used the overall quality of coursework throughout the year to assess whether a student qualifies for credit? The truth is that if College Board were to cancel AP exams, it would face pressure to refund students’ money. Considering that College Board made over 1 billion dollars in revenue and more than 130 million dollars in profit in 2017, and that its president makes over 1 million dollars a year, all despite its supposed nonprofit status, it seems quite clear there are incentives in place other than the well-being of students’ education.

In the end, the purpose of AP tests is to provide a measure and representation of the student’s knowledge of the course material. When this year’s online format could not meet that purpose, AP scores became mere numbers that cannot possibly measure the student’s knowledge of the class material. Despite this, that hollow number will determine whether a student earns college credit and will remain a factor in the college admissions process.

College Admissions and the Ethics of Unfair Advantages

A boy walks through an aisle of books in a library.

News broke this month of a college admissions scandal in which wealthy and powerful parents were discovered to have paid thousands of dollars to have their children admitted to prestigious colleges. The fraud was committed in two ways: in the first, SAT and ACT scores were falsified (generally by having someone other than the student write the tests); in the second, profiles portraying students as elite athletes were forged (often with students’ faces photoshopped onto pre-existing pictures) and used as part of a bribe for admission under athletic scholarship. The primary organizer of the fraud has been arrested and pleaded guilty, while, as of writing, an increasing number of parents are being sought for prosecution.


“Minibrains” and the Future of Drug Testing

Image of a scientist swabbing a petri dish.

This article has a set of discussion questions tailored for classroom use. Click here to download them. To see a full list of articles with discussion questions and other resources, visit our “Educational Resources” page.


NPR recently reported on the efforts of scientists who are growing small and “extremely rudimentary versions of an actual human brain” by transforming human skin cells into neural stem cells and letting them grow into structures like those found in the human brain. These tissues are called cerebral organoids but are more popularly known as “minibrains.” While this may all sound like science fiction, their use has already led to new discoveries in the medical sciences.

The impetus for developing cerebral organoids comes from the difficult situation imposed on research into brain diseases. It is difficult to model complex conditions like autism and schizophrenia using the brains of mice and other animals. Yet, there are also obvious ethical obstacles to experimenting on live human subjects. Cerebral organoids provide a way out of this trap because they present models more akin to the human brain. Already, they have led to notable advances. Cerebral organoids were used in research into how the Zika virus disrupts normal brain development. The potential to use cerebral organoids to test future therapies for such conditions as schizophrenia, autism, and Alzheimer’s Disease seems quite promising.

The experimental use of cerebral organoids is still quite new; the first ones were successfully developed in 2013. As such, it is the right time to begin serious reflection on the potential ethical hurdles for research conducted on cerebral organoids. To that end, a group of ethicists, law professors, biologists, and neuroscientists recently published a commentary in Nature on the ethics of minibrains.

The commentary raises many interesting issues. Let us consider just three:

The prospect of conscious cerebral organoids

Thus far, the cerebral organoids experimented upon have been roughly the size of peas. According to the Nature commentary, they lack certain cell types, receive sensory input only in primitive form, and have limited connection between brain regions. Yet, there do not appear to be insurmountable hurdles to advances that will allow us to scale these organoids up into larger and more complex neural structures. As the brain is the seat of consciousness, scaled-up organoids may rise to the level of such sensitivity to external stimuli that it may be proper to ascribe consciousness to them. Conscious organisms sensitive to external stimuli can likely experience negative and positive sensations. Such beings have welfare interests. Whether we had ethical obligations to these organoids prior to the onset of feelings, it would be difficult to deny such obligations to them once they achieve this state. Bioethicists and medical researchers ought to develop principles to govern these obligations. They may be able to model them after our current approaches to research obligations regarding animal test subjects. However, it is likely the biological affinity between cerebral organoids and human beings will require significant departure from the animal test subject model.

Additionally, research into consciousness has not yet nailed down the neural correlates of consciousness. As such, we may not know whether a particularly advanced cerebral organoid is likely to be conscious. Either we ought to purposefully slow progress in developing complex cerebral organoids until we understand consciousness better, or we ought to pre-emptively treat organoids as beings deserving moral consideration, so that we don’t accidentally mistreat an organoid we incorrectly identify as non-conscious.

Human-animal blurring

Cerebral organoids have also been developed in the brains of other animals. This gives the brain cells a more “physiologically natural” environment. According to the Nature commentary, cerebral organoids have been transplanted into mice and have become vascularized in the process. Such vascularization is an important step in the further development in size and complexity of cerebral organoids.

There appears to be a general aversion to the prospect of transplanting human minibrains into mice. Many perceive the creation of such human-animal hybrids (chimeras) as crossing an inviolable boundary between species. The transplantation of any cells from one animal into another, especially those of a human (and even more especially the brain cells of a human), may violate this sacred boundary.

An earlier entry on The Prindle Post approached the vexing issues of the creation of human-animal chimeras. It appeared that much of the opposition to chimeras was based in part on an objection to “playing God.” Though some have ridiculed the “playing God” argument as based on “a meaningless, dangerous cliché,” people’s strong intuitions against the blurring of species boundaries ought to influence policies put in place to govern such research. If anything, this will help tamp down a strong public backlash.

Changing definitions of death

Cerebral organoids may also threaten the scientific and legal consensus that defines death as the permanent cessation of organismic functioning, with the criterion in humans being the cessation of functioning of the whole brain. This consensus itself developed in response to technologies emerging in the 1950s and 1960s that enabled doctors to maintain the functioning of a person’s cardio-pulmonary system after their brain had ceased functioning. Because of this technological change, the criterion of death could no longer be the stopping of the heart. What if research into cerebral organoids and stem cell biology enables us to restore some functions of the brain to a person already declared brain dead? This would undercut the notion that brain death is permanent and may force us to revisit the consensus on death once again.

Minibrains raise many other ethical issues not considered in this brief post. How should medical researchers obtain consent from the human beings who donate cells that are eventually turned into cerebral organoids? Will cerebral organoids that develop feelings need legally empowered guardians appointed to look after their interests? Who is the rightful owner of these minibrains? Let us get in front of these ethical questions before science sets its own path.

Do Terminally Ill Patients Have a “Right to Try” Experimental Drugs?

In his recent State of the Union speech, President Trump urged Congress to pass legislation to give Americans a “right to try” potentially life-saving experimental drugs. He said, “People who are terminally ill should not have to go from country to country to seek a cure — I want to give them a chance right here at home.  It is time for the Congress to give these wonderful Americans the ‘right to try.’” Though only a brief line in a long speech, the ethical implications of the push to expand access to experimental drugs are worth much more attention.

First, let us be clear on what federal “right to try” legislation would entail. Generally, a new drug must go through several phases of clinical research trials before a pharmaceutical company can successfully apply for approval from the Food and Drug Administration to market the drug for use. Advocates of “right to try” legislation want some terminally ill patients to have access to drugs before they go through this rigorous and often protracted process. Recent legislation in California, for example, protects doctors and hospitals from legal action if they prescribe medicine that has passed phase I of clinical trials, but not yet phase II and phase III. Phase I trials test a drug for its safety on human subjects. Phase II tests drugs for effectiveness. Phase III tests drugs to see if they are better than any available alternative treatments.

Thus, “right to try” is a misnomer. First, these experimental drugs are still expected to meet some safety standards before patients can access them. Second, such legislation would not likely mandate that a pharmaceutical company provide access to its experimental drugs; the company can always deny the patient’s request. Third, these laws do not address cost. Insurance plans are unlikely to cover any portion of the costs, and pharmaceutical companies are likely to expect the patient to foot the entire bill.

Ethical debate over “right to try” legislation recapitulates a conflict that regularly occurs in American political debate: to what extent does government intervention to protect public welfare by ensuring that drugs are both safe and effective impede the rightful exercise of a patient’s autonomy to choose for herself what risks she is willing to take? Advocates of expanded “right to try” laws view the regulatory obstacles set up by the FDA as patronizing hindrances. Lina Clark, the founder of the patient advocacy group HopeNowforALS, put it this way: “The patient community is saying: ‘We are smart, we’re informed, we feel it is our right to try some of these therapies, because we’re going to die anyway.’” Safety and efficacy regulations for new pharmaceuticals generally protect the public from an industry in which some bad actors might otherwise push untested and unsafe drugs on an uninformed populace. But those same regulations can prevent well-informed patients from taking reasonable risks to save their lives by blocking access to drugs that may be helpful. On this view, it is reasonable to carve out certain exceptions from these regulations for terminally ill patients.

On the other hand, medical ethicists worry that terminally ill patients are uniquely vulnerable to the allure of “miracle cures.” Dr. R. Adams Dudley, director of UCSF’s Center for Healthcare Value, argues that “we know some people try to take advantage of our desperation when we’re ill.” Terminally ill patients may be vulnerable to exploitation of their desire to find hope in any possible avenue. Their intense desire to find a miracle cure may prevent them from rationally weighing the costs and benefits of trying an unproven drug. A terminal patient may place too much emphasis on the small possibility that an experimental drug will extend his or her life while ignoring greater possibilities that side effects from these drugs will worsen the quality of the life he or she has left. Unscrupulous pharmaceutical companies who see a market in providing terminally ill patients “miracle cures” may exploit this desire to circumvent the regular FDA process.

The Food and Drug Administration already has “compassionate use” regulations that allow patients with no other treatment options to gain access to experimental drugs that have not yet been approved. The pharmaceutical company still must agree to supply the experimental drug, and the FDA still must approve the patient’s application. According to a recent opinion piece in the San Francisco Chronicle, nearly 99 percent of these requests are granted already. “Right to try” legislation at the federal level would not likely mandate that pharmaceutical companies provide the treatment. Such legislation would likely only remove the FDA review step from the process described above.

Proponents of the current system at the FDA view it as a reasonable compromise between respect for patient autonomy and protections for the public welfare. Terminally ill patients have an avenue to apply for and obtain potentially life-saving drugs, but the FDA review process helps safeguard patients from being exploited due to their vulnerable status. The FDA serves as an outside party that can more dispassionately weigh the costs and benefits of pursuing an experimental treatment, thus providing that important step in the rational decision-making process that might otherwise be unduly influenced by the patient’s hope for a miracle cure.

Questions of Access as Harvard Law Accepts the GRE

Since 1947, the LSAT has been a dark cloud hanging over pre-law students. A student’s LSAT score and GPA have been the main considerations in the law school admissions process for almost 70 years. Law schools have become increasingly focused on the mean of their admitted students’ LSAT scores because it helps determine their national ranking. Thus, students with low LSAT scores but other desirable qualities may not be admitted to prestigious programs.
