
Barry Lam: The Case for Discretion

Modern life runs on rules—some helpful, some… not so much. In this episode, Alex talks with philosopher Barry Lam (UC Riverside) about his new book Fewer Rules, Better People: The Case for Discretion. Lam makes a bold claim: sometimes fewer rules can actually make us better. With sharp insights and real-world stories, he walks us through why giving people more wiggle room might lead to fairer, more humane outcomes. But it’s not all freewheeling freedom—the conversation also digs into the risks of bias and the need for accountability when discretion takes the wheel.

 

ABOUT THE GUEST

Barry Lam is a graduate of the University of California, Irvine, and former program director and general manager at KUCI 88.9FM. After earning his Ph.D. in Philosophy from Princeton University, Barry was Associate Professor of Philosophy at Vassar College for 16 years and has recently moved to UC Riverside as Professor of Philosophy. His podcast, Hi-Phi Nation, has received critical acclaim from The Guardian, The Huffington Post, and IndieWire, among other venues.

 

GET THE BOOK

Library Search  →

Amazon  →

ThriftBooks  →

 

FOR FURTHER READING

Barry Lam, “Addicted to rules: How to slay the bureaucratic beast, from SF to DC,” The San Francisco Standard

J.D. Mathes, “Antidote to criminal system? ‘Fewer rules, better people,’” UC Riverside News

Evan Selinger, “We Might Be Better Off with Fewer Rules,” The Boston Globe

Nick Kreuder, “Be On Your Best Behavior: On Lifetime Judiciary Appointments,” The Prindle Post

David Millar, “Do Police Intentions Matter?,” The Prindle Post

 

LISTENER EXPERIENCE SURVEY

As we continue to build on the show’s successes, we want to hear from you—our listeners. We’ve just launched a quick survey to better understand your experience with the podcast, and we’d love your input on the format, our conversations with guests, and more. As a thank you for your time and insights, we’re giving away three exclusive Examining Ethics swag bags to randomly selected participants. Winners of swag bags will be notified after the survey closes. Thanks for your participation!

Complete the Survey →

Robert Talisse: Civic Solitude

Our 2024-2025 season continues with a conversation with Robert Talisse (Vanderbilt) on his new book, Civic Solitude: Why Democracy Needs Distance. Here, Talisse argues that democracy, in addition to its participatory elements, requires a kind of reflection and capacity building that is best achieved in solitude. He recommends that we rebuild or expand public spaces for such development as a potential antidote to some of our democratic ills.

 

ABOUT THE GUEST

Robert Talisse is W. Alton Jones Professor of Philosophy and Professor of Political Science at Vanderbilt University in Nashville, Tennessee. A native of New Jersey, Talisse earned his PhD in Philosophy at the City University of New York Graduate School in 2001. His research focuses on democracy. Specifically, Talisse writes about how a democratic political order can assist and complicate our efforts to acquire knowledge, share ideas, understand what is of value, and address our disagreements. He engages questions about public discourse, popular political ignorance, partisan polarization, and the ethics of citizenship.

 

GET THE BOOK

Library Search  →

Amazon  →

ThriftBooks  →

 

FOR FURTHER READING

Robert Talisse, “Overdoing Democracy: Why We Must Put Politics In Its Place” (Oxford University Press, 2019)

Robert Talisse, “Stop and think: An Undervalued Approach In a World That Short-Circuits Thoughtful Political Judgment,” The Conversation

Lilliana Mason, “Uncivil Agreement: How Politics Became Our Identity” (University of Chicago Press, 2018)

Marta Nunes Da Costa, “Due Attention: Addictive Tech, the Stunted Self, and Our Shrinking World,” The Prindle Post

Evan Arnet, “When Does a Democracy Die?,” The Prindle Post

Wes Siscoe, “The Empathetic Democracy: Countering Polarization with Considerate Civic Discourse,” The Prindle Post

 

Be On Your Best Behavior: On Lifetime Judiciary Appointments

In my previous column, I considered President Joe Biden’s arguments for limiting the terms of U.S. Supreme Court justices. Of the three arguments, only the view that no single presidential term should have a disproportionate impact on the Court showed immediate promise. However, after unpacking the likely justification of this view, I argued that accepting it may require viewing the Supreme Court as a political institution. This result should give us pause, as a politically neutral high court is a desirable thing.

Lifetime tenure is the status quo in the U.S. But this mere fact does not automatically make it desirable. Humans tend to favor the current state of affairs, even if there is an advantage to departing from it. So, we ought to consider arguments which can be marshaled in favor of indefinite terms for Supreme Court justices. If these arguments are not compelling, then perhaps the arguments for term limits will gain appeal, simply by virtue of being a better, albeit potentially flawed, option.

The most influential arguments for lifetime judicial appointments in the U.S. come from Alexander Hamilton in Federalist 78, first published in 1788. This paper is part of a collection now commonly called the Federalist Papers, a series of essays in which Hamilton, James Madison, and John Jay offered public arguments defending the various features of the then-new United States Constitution, hoping to help ensure its ratification. Each essay offers some defense of a particular aspect of the newly proposed federal government.

In Federalist 78, Hamilton defends the structure of the judiciary. Among those features he wishes to defend, Hamilton argues in favor of lifetime appointments for the justices, using the language of “hold[ing] their offices during good behavior” adopted in Article III of the Constitution to describe this tenure.

He offers three major arguments in defense of the judiciary. First, the Supreme Court is the least dangerous of the three branches of government. Second, few people in society will acquire the knowledge and skills necessary for a successful tenure. Third, lifetime appointments are necessary to secure the independence of the judiciary.

In describing the power of the Supreme Court, Hamilton writes that “The judiciary… has no influence over either the sword or the purse; no direction either of the strength or of the wealth of society; and can take no active resolution whatever. It may be truly said to have neither FORCE nor WILL, but merely judgment.” So, Hamilton’s position is that the Supreme Court lacks power. In his view, all it can do is interpret the Constitution. Further, it has no ability to enforce the decisions that it reaches; the Supreme Court effectively depends on the compliance of the executive and legislative branches. As Andrew Jackson is often, and perhaps apocryphally, quoted: “John Marshall has made his decision, now let him enforce it.”

Ultimately, Hamilton thinks that we as citizens have little to worry about from the judiciary. He claims that “though individual oppression may now and then proceed from the courts of justice, the general liberty of the people can never be endangered from that quarter… so long as the judiciary remains truly distinct from both the legislature and the Executive.”

Of course, things have changed significantly since the late 18th century. All branches of the federal government, and the United States as a whole, have much greater power than the founders anticipated. In fact, just 15 years after Hamilton’s death, the Supreme Court ruled that the federal government has powers not explicitly granted to it by the Constitution.

Additionally, the Supreme Court has often reshaped the lives and rights of Americans simply by interpreting the Constitution. Since just the mid-20th century, the Supreme Court has ended racial segregation, guaranteed the right to a defense attorney for those who cannot afford one, declared laws against interracial and same-sex marriages unconstitutional, established a Constitutional right to privacy, used that right to conclude that access to birth control and abortion are Constitutional rights, and held that criminalizing sexual acts between consenting adults is unconstitutional. They have since walked back at least some of these decisions and may do the same for others.

The Supreme Court certainly lacks the power of the executive and legislative branches. They cannot make the law and they cannot enforce it. However, their ability to interpret both individual laws and the rights guaranteed by the Constitution has the potential to greatly impact the lives of millions, if not all, Americans. Thus, Hamilton’s argument misses the mark in the 21st century. Although arguably not the most powerful branch, the Supreme Court may reshape features of our lives.

Second, Hamilton notes that there are few who can meet the qualifications of being a justice. He writes that “a voluminous code of laws is one of the inconveniences necessarily connected with the advantages of a free government. To avoid an arbitrary discretion in the courts, it is indispensable that they should be bound.” Further, Hamilton notes that “the records of those precedents must unavoidably swell to a very considerable bulk, and must demand long and laborious study to acquire a competent knowledge of them.” So, Hamilton thinks that few can develop both the skill and the character necessary to serve as a justice. There’s simply a lot to learn.

One might, however, wonder if Hamilton’s views here are a product of the era in which he lived. As described by Jay Alexander, 18th-century lawyers typically began practicing law by apprenticing. However, their mentors frequently traveled for trials, leaving apprentices to simply read legal books for weeks or months at a time without any guidance. Under these conditions it is no surprise that it was incredibly difficult to gather the knowledge to practice law, let alone become highly proficient in it. Yet things are different in the 21st century. According to the American Bar Association, there were more than 1.3 million practicing lawyers in the United States as of 2023. For reference, the U.S. population according to the 1800 census was 5.3 million. Of course, most Supreme Court justices typically served as judges prior to their appointment. But many simply had prior legal expertise and worked in careers other than judging before joining the Court.

It is true that justices ought to be individuals of great knowledge, experience, and character. Yet given the sheer number of individuals with legal training, it is not clear that there are only nine people capable of serving on the highest court at any moment. So, this does not seem to offer a compelling reason why justices ought to have lifetime appointments. Of course, the U.S. legal code has gotten significantly more complicated since the 18th century. But we are significantly better at training people capable of practicing law, and we produce significantly more of them.

Finally, Hamilton argues that lifetime judiciary appointments are necessary to preserve the independence of the court, which is required for it to fulfill its function. He writes “that inflexible and uniform adherence to the rights of the Constitution, and of individuals, which we perceive to be indispensable in the courts of justice, can certainly not be expected from judges who hold their offices by a temporary commission.”

His rationale is this. Suppose that judges serve limited terms. Presumably, they could be reappointed to the bench. If this is the case, it may bias their judgments. Rather than making rulings which best fit current law, justices may instead rule in ways which please the executive and legislative branches; the executive branch nominates the justices, and the legislative branch confirms them, so rulings that please both may go some way toward securing future appointments. Additionally, when discussing the limited power of the judiciary, Hamilton notes that a judiciary aligned with another branch of government creates a great potential for despotism; this has been borne out in the 20th century, as authoritarian regimes have sought to push out justices appointed by prior leaders and stack the court with loyalists.

Further, we can extend Hamilton’s argument. He focused primarily on justices’ relationships to other branches of government. However, even if justices can serve only one term, they may still have concerns about their future off the bench. Perhaps a concern about their future could lead them to favor corporate or moneyed interests in their final years. After all, a corporate plaintiff or defendant could need a well-compensated legal consultant in the near future. We might find this possibility especially troubling since the Court recently ruled that federal anti-bribery laws do not cover gratuities, the gifts or other forms of showing appreciation given after a public official has taken an action.

This argument seems powerful on its face. We should certainly worry about justices focusing on their future careers rather than remaining independent, impartial interpreters of law. Yet consider the number of scandals regarding the political affiliations of justices and their families, as well as significant personal and financial conflicts of interest. It seems that lifetime appointments are currently not sufficient to counteract the ways in which justices’ interests, whether political, personal, or financial, may affect their rulings. The salary and prestige of the position may not be enough for individuals ambitious enough to rise to such a high rank.

So perhaps we need to emphasize the way in which Hamilton and the framers of the Constitution described the tenure of justices. Federal justices are not explicitly given “lifetime appointments.” Instead, their tenure is described as lasting throughout “good behavior.” Rather than overhauling the Court, perhaps the solution is simply to hold the justices to that standard. We should demand that they at least attempt to be impartial, that they avoid even the appearance of conflicts of interest, and that they step aside from cases when we have reason to believe their judgment may be biased. If their tenure is to last only during good behavior, then we ought to remove those justices unwilling to behave as those with good judicial character would.

Of course, this requires legislating in good faith and seriously scrutinizing the behavior of even those justices whose rulings we find agreeable. But if we are to have an institution whose members are to interpret the law impartially, we must impartially hold them accountable.

IVF and the Embryo’s Relationship to Human Life


On Wednesday, June 12th, members of the Southern Baptist Convention (SBC) attending its annual meeting voted to express opposition to in vitro fertilization, or IVF. IVF is a series of procedures aimed at causing pregnancy for couples or individuals experiencing difficulties with fertility. The final steps of the process involve fertilizing egg cells in a lab, then implanting the resulting embryos. Both because embryos may fail to implant, and because couples may hope to have multiple children through IVF, clinics produce multiple embryos. Those that go unused are either frozen or destroyed. The U.S. Department of Health and Human Services (HHS) estimates that there are 600,000 frozen embryos in the U.S., while the National Embryo Donation Center puts this figure at 1.5 million. (It is comparatively more difficult to obtain figures on how many embryos are destroyed.)

It is worth looking at precisely what the attendees of the convention resolved. According to the organization’s summary of the meeting, attendees, given that IVF involves the creation, storage and destruction of embryos that will not be born, endorsed:

That the messengers to the Southern Baptist convention… call on Southern Baptists to reaffirm the unconditional value and right to life of every human being, including those in an embryonic stage, and to only utilize reproductive technologies consistent with that affirmation especially in the number of embryos generated in the IVF.

Further, they call on Southern Baptists “to advocate for the government to restrain actions inconsistent with the dignity and value of every human being, which necessarily includes frozen embryonic human beings,” and “promote adoption as one way… [for couples] to grow their families and [ask prospective adoptive parents] to consider adopting frozen embryos.”

Why propose this resolution now? It seems to make explicit what was previously just implied. In February, the Alabama Supreme Court ruled that, according to state law, the fertilized embryos created through IVF are children. Following the ruling, three IVF clinics in Alabama suspended operations, prompting state lawmakers to craft legislation granting civil and criminal immunity to those involved in providing IVF treatment. At the federal level, IVF has become a political football. Republicans in the Senate proposed legislation that would withhold Medicaid funds from any state which passes legislation banning IVF – legislation blocked by Democrats. Democrats instead favor a bill which would prevent states from restricting the procedure and require insurers to cover it. Only two Republicans voted to take this latter bill to the floor, causing it to fall short of the 60 votes necessary to proceed.

Some context on this issue may be illuminating. According to the Pew Research Center, 10% of women in the U.S. self-report having received fertility services. The U.S. Department of Health and Human Services reports that in 2021, 2.3% of all infants born in the U.S. (86,146) were conceived using IVF. So, restrictions on IVF stand to impact a significant number of people and perhaps prevent tens of thousands of births a year. Further, public sentiment favors IVF. Again according to Pew, 70% of those surveyed say access to IVF is a good thing and only 8% say it is a bad thing. The least approving groups were white evangelical Protestants and self-described Republicans, 63% of whom approve.

Of course, the common occurrence and popular endorsement of a practice does not make it moral. Slavery, blood sports, and ritualistic sacrifices were historically common but we now condemn these as obviously wrong. So, we ought to consider the merits of the moral arguments against IVF, particularly those of the Southern Baptists.

The position advocated by the SBC seems to stem from a common starting point in many debates about reproduction – the idea that life begins at conception. The argument, in the context of IVF, seems to go something like this: Human life begins with an embryo. The process of IVF produces embryos that are frozen indefinitely or destroyed. It is wrong to end a human life or to refuse to allow one to continue. Therefore, IVF as currently practiced is wrong. Let’s call this argument the Embryo Personhood View or EPV.

This argument relies on several potentially questionable premises. For instance, one might wonder whether it is always wrong to end a human life – we may find it justified in the context of self-defense or perhaps triage. Further, the concept of human life is somewhat underexplained; perhaps what we are really concerned about are a being’s psychological capacities, not whether it is a human organism. Regardless, I think it is worthwhile to unpack the EPV in order to determine the SBC’s theoretical commitments.

In particular, we should consider the statement that human life begins with an embryo. When we begin analyzing it, what it means may become less clear. Consider the fact that a plant begins with a seed. This statement tells us that a seed is necessary to get a plant but that more is required – you need a viable seed, nutritious soil, sunlight, and water.

Do the Southern Baptists believe that human life begins with an embryo in the sense that an embryo is necessary for human life? Certainly, they must believe this; you cannot have a new human life without first having an embryo. But this cannot be all that “human life begins with an embryo” means. First, many things are necessary for human life that seem to lack moral significance. Chemicals like carbon, oxygen, and hydrogen are the necessary building blocks of our bodies; yet they do not have unconditional value or rights. Second, the biological materials that produce an embryo – namely, sperm and egg cells – are also necessary for human life. Do these cells have a similar dignity and value? Are they the proper subject of government regulation?

Perhaps instead the SBC’s view is that the embryo is sufficient for human life. When thing A is sufficient for thing B, that means A is enough to cause B. Getting 100% on a test is sufficient to pass it; you will certainly pass the test with a perfect score! But it is not necessary to pass, as you could pass with a lower score. So, in this case, an embryo being sufficient for a human life means that once we have an embryo, we have a being with a right to life. While this avoids some of the strange implications of the necessity view, it is not clear that this is a defensible position, nor one that the SBC actually holds.
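To make the two readings explicit, here is the distinction in schematic form. This is my gloss, not the SBC’s own formulation; E abbreviates “an embryo exists” and L abbreviates “a human life with a right to life exists”:

```latex
% Two readings of "human life begins with an embryo"
% E = an embryo exists; L = a human life with a right to life exists
\text{Necessity reading:}\quad L \rightarrow E \quad \text{(no human life without an embryo)}
\text{Sufficiency reading:}\quad E \rightarrow L \quad \text{(an embryo guarantees a human life)}
```

The necessity reading is true but morally inert, as the previous paragraphs argue; the sufficiency reading is the substantive one, and it is what the two worries that follow put under pressure.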

First, there is the matter of context. An embryo is normally sufficient, at least in some sense, to produce a living human organism. When conception occurs inside the body, and the zygote develops into an embryo, this starts a process. Unless this process is interrupted by some means, the end result will be a morally valuable human being. Of course, it’s worth noting that the process may be interrupted by natural means; the embryo may fail to implant, it may be non-viable, there may be a miscarriage, etc. However, an embryo in a lab seems importantly different in the sense that it is not currently in this process. If left to its own devices, it simply will not survive. Thus, the circumstances of a frozen embryo make it seem comparatively less plausible that it is sufficient for human life; its circumstances are abnormal for an embryo.

Second, there is a matter of consistency. Considering the view that an embryo is a person, Dustin Crummett asks us to imagine the following case: A fertility clinic catches fire. One part of the building contains hundreds of frozen embryos. A five-year-old child is trapped on the other side. Who should firefighters save first? Clearly the five-year-old. But this suggests that embryos lack the same rights and dignity as humans. Otherwise, saving the frozen embryos would seem a more compelling course of action. In fact, it should be an obvious choice; there are literally hundreds of embryos, so if their lives are valuable, the moral reason to save them should be hundreds of times greater than the moral reason to save the five-year-old.

There is something generally puzzling about the positions staked out by the SBC when considered in totality. As noted earlier, the resolutions approved at the convention state that embryos have a right to life. They also promote adoption for couples struggling with fertility and ask them to “consider” adopting frozen embryos. Suppose embryos have a right to life. For a frozen embryo to live its life, it must be implanted into a person and develop in utero. Compare this to an already living child in the adoption system. Certainly, it would be better for a child in this position to be adopted into a loving family, but they will still survive if not. We cannot say this for a frozen embryo. Thus, it seems that the SBC should be imploring members of its Church to attempt to adopt frozen embryos. To merely ask them to consider this option suggests that their actual view of an embryo’s moral standing is less than what the resolutions explicitly claim.

Ultimately, the SBC’s position on IVF has to overcome some challenges. If they think that life beginning with an embryo means an embryo is necessary for human life, then their position is either trivial or goes too far. They may instead mean that embryos are sufficient for human life. Yet in the context of IVF this claim is dubious, and it seems at odds with other positions the SBC takes in its resolutions.

Views we posit in debates about reproductive rights have far-reaching implications. We often make claims about what rights we have over our own bodies, when we may permissibly end another life, and what precisely it is that makes a living organism worthy of moral consideration. As a result, it is always advisable to think carefully about what your views imply in other contexts, lest you commit yourself to a position you do not actually accept.

Is College Worth It?


It’s not a new question, but it’s been receiving renewed attention after a recent analysis circulated online. According to a study from the Foundation for Research on Equal Opportunity (FREOPP), a number of popular bachelor’s and master’s degrees offered at schools in the U.S. have a low or negative return on investment (ROI) and thus “leave students worse off.”

The calculation is a simple one: a college degree is an investment, as it costs money and time. People with college degrees have, in the past, typically made that money back in the long term, since careers that require college degrees tended to pay higher salaries than those that didn’t. But with rising costs of college tuition and many well-paying careers no longer requiring college degrees, these days one may be better off, at least in terms of long-term earnings, skipping college altogether rather than going to college to study certain subjects.
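To see the shape of this calculation, here is a minimal sketch in Python. All figures (salaries, tuition, career lengths) are invented placeholders, not numbers from the FREOPP study, whose actual methodology is more careful (it discounts future earnings and adjusts for completion risk); the sketch only illustrates the basic arithmetic of comparing lifetime earnings with and without a degree:

```python
# Hypothetical illustration of a degree's return on investment (ROI).
# All figures are invented placeholders, not data from the FREOPP study.

def lifetime_earnings(annual_salary: float, years: int) -> float:
    """Total career earnings, ignoring raises, inflation, and discounting."""
    return annual_salary * years

# Counterfactual: enter the workforce at 18 with no degree, work to 65.
no_degree = lifetime_earnings(annual_salary=45_000, years=47)

# With a degree: four years of tuition and forgone wages, then higher pay.
tuition = 4 * 30_000                          # total cost of attendance
forgone_wages = lifetime_earnings(45_000, 4)  # wages missed while studying
with_degree = lifetime_earnings(annual_salary=65_000, years=43)

roi = with_degree - forgone_wages - tuition - no_degree
print(f"Net lifetime ROI of the degree: ${roi:,.0f}")
```

On these made-up numbers the degree comes out ahead; FREOPP’s claim is that for some programs the analogous figure comes out negative.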

Some of the degrees identified in the study were perhaps surprising – many MBA programs, for example, provide an overall poor ROI according to the analysis. Others were less surprising, as they fit into the stereotype of degrees that aren’t “worth it”: degrees in fine arts, humanities, and education, for example, were identified as having low or negative ROIs.

Although it’s been reported on by numerous media outlets, FREOPP’s study has not gone unchallenged. However, even if we take the results at face value, what should we do with them? The authors of the study argue that prospective students have a right to know about the ROI of a program they’re interested in pursuing, and that information about a program’s ROI should even be used to inform policy in the form of scholarships and bursaries.

I think we should do something different with the study: we should ignore it. Far from being useful information, focusing too much on ROI can have negative consequences.

There is an obvious concern with talking about which degrees are worth pursuing purely in financial terms: there are clearly other, non-financial benefits that come along with earning a college degree. This is perhaps especially the case for careers that may have comparatively lower earning potential but are seen as more rewarding by students who have certain interests.

This does not go unnoticed by the authors of the FREOPP report, who note that the “joy factor” is something that needs to be considered when choosing a degree to pursue, and that degrees with low ROIs can nevertheless produce significant social benefits. At the same time, the report also claims that it would be “irresponsible for defenders of negative-ROI programs to use ‘social benefits’ as a catchall excuse for poor performance,” while also claiming that “programs which generate large social benefits also come with significant private rewards.” The argument, then, is that if pursuing a degree with a low ROI does produce significant social benefits, that investment will pay off, since producing social benefits, in turn, produces (presumably monetary) rewards.

Whether this is true depends on how we define a “social benefit.” There are clearly cases where social benefits are rewarded – the report’s example is that of someone trained as a biologist (another field identified as having an overall low ROI) contributing to the development of a life-saving vaccine. Conspicuously missing from this discussion, however, are less tangible social benefits that are more likely to be produced by the more stereotypically “underperforming” degrees, such as those that come about from contributions to the arts. While some of these contributions may also be accompanied by “significant private rewards,” this is certainly not always the case.

Rather than acting as an “excuse,” then, a more inclusive and less obtuse interpretation of “social benefits” may very well on its own compensate for the lower ROI of degrees with so-called “poor performance.” Indeed, a fundamental issue with assigning any type of value to something like a college degree is that one’s preconceptions about what should constitute that value will taint any such calculation.

Solely calculating benefits in terms of the long-term financial wellbeing of individuals also ignores the value that lies in a society that encourages a variety of pursuits. Will you make more money learning how to program computers than learning how to paint? Probably. But is a society consisting exclusively of computer programmers one we should pursue? Probably not.

Information about the ROI of college degrees is also not useful for policy recommendations; indeed, it will likely cause more harm than good.

The FREOPP report notes that “[a]round 29 percent of federal Pell Grant and student loan dollars over the last five years were used at programs that leave students with a negative ROI,” and that such results “point to a role for federal policymakers in improving the ROI of higher education.” The thought here is that other stakeholders – the government, perhaps, or taxpayers, depending on the type of subsidy provided – ought to know about the ROI of programs they are helping students attend so that they can determine if their investment is really worth it.

But the implications of this kind of recommendation are potentially chilling. It is not difficult to envision a policy where, for example, Pell Grants are only provided to students who enroll in a degree that has been declared “worth it.” Since such grants are given to low-income students, this would essentially gatekeep entire swaths of academic pursuit, allowing participation only by the already well-off.

Instead of a recommendation at the policy level, isn’t information about ROI still useful to individuals trying to decide what they want to study in school? The report cites another report claiming that the primary motivation of most college students is to get a good job that will pay them well. While there are certainly conversations to be had about what college really is “for,” and whether the primary concern of students pursuing a higher education should be getting training for the workforce, it is undoubtedly the case that students are concerned with this. Surely, then, knowing the ROI of a college degree will help them make that decision.

Will it, though? From the FREOPP report, engineering and computer science are listed as the “best financial bets,” while the fine arts are the worst. Is this surprising information? Today’s high school students are likely all too familiar with what is sometimes seen as a myopic focus on STEM careers and the monetary rewards associated with in-demand careers in tech. It is unlikely that many are shocked to learn that artists make less money.

There are, however, two potential takeaways from the information in the FREOPP report. One, which the report itself gestures at, is that tuition fees for some programs and schools are too high. Since the cost of tuition is a significant factor in determining ROI, lower tuition fees would result in higher ROIs.

A second takeaway is that if ROI is a significant concern, then this is simply an indication that workers need to be paid more. It has been well-documented that, despite increasing productivity over decades, wages have not kept up. Combined with increased tuition fees, this means that regardless of what one chooses to study in college, one’s ROI will inevitably continue to decline.


The SAT and the Limitations of Discrimination

In 2020, at the height of America’s pandemic-fueled racial reckoning, numerous colleges and universities dropped standardized tests as an admission requirement. No mere PR move, such action was supported by influential anti-racist activists such as Ibram X. Kendi, who declared, “Standardized tests have become the most effective weapon ever devised to objectively degrade Black and Brown minds and legally exclude their bodies from prestigious schools.” Racial gaps in SAT scores persist to the present. Yet, in the past several weeks multiple prominent universities, including Brown, Dartmouth, Yale, and UT Austin, have reinstated standardized testing as an admission requirement. Their reasoning — combating inequality.

The schools argue that careful use of standardized testing, in concert with other factors, can help to identify promising applicants who would otherwise be overlooked. Recent research has also affirmed that standardized test scores are predictive of performance, especially at highly selective universities. Moreover, standardized tests seem to be less biased than other, more impressionistic aspects of the college admissions process like letters of recommendation and essays.

But all this does not necessarily vindicate the SAT. It can still be biased, even if less biased than the alternatives. And one can still find standardized testing too narrow an evaluative tool, even while acknowledging that more holistic methods or lottery-based approaches to admission have their own problems. However, the saga also reveals the very different ways we choose to measure and explain “inequality” in the first place.

One approach is to focus on discrimination. If one is committed to the belief that racial disparities are generally caused by discrimination, then the racial gap in test scores becomes evidence of that discrimination, and the tests emerge as the problem. Standardized testing reflects societal biases.

But racial inequality in America isn’t merely a matter of differential treatment; it is also a product of differential resources. Home ownership rates, family income, wealth, school funding, and exposure to environmental toxins all vary by race. If we believe these structural features impact standardized testing (and we should), our perception shifts from focusing exclusively on discrimination to a wider view of how resource inequality also shapes the picture. What follows from this shift in focus?

First, it requires us to admit that the racial and socioeconomic achievement gap, as measured by standardized tests, at least partly reflects a real gap in the abilities those tests measure. This certainly does not imply that these gaps are innate, nor that discrimination is not real, nor that standardized tests are the best measure of societal value. The concern is that by the time someone is taking the SAT at 16, harms from poverty, deprivation, and inequality have already accrued. Some of these harms, such as a lack of access to nutritious food or a lack of knowledge about test-taking, can be addressed fairly easily. Other harms, for example exposure to allergens or to environmental toxins such as lead from substandard housing, may cause lifelong negative effects.

It might be objected that while the gap in abilities measured by standardized tests is real, the abilities themselves are rather artificial — that these tests measure test-taking and nothing more. Historically, the SAT stood for Scholastic Aptitude Test, with the implication that it measured something like innate potential. In the 90s, it was rebranded to replace Aptitude with Assessment (it is now simply the SAT). The question of what precisely standardized tests are measuring is complicated and controversial. However, the fear from a resource inequality perspective is that if differences are truly deep and structural, with far-reaching implications, then we should expect these differences to emerge across many kinds of evaluation. This is a statistical claim about the overall effect of inequality. It does not imply that childhood environment is destiny or that there cannot also be benefits, to mentality, insight, or what have you, from a less privileged upbringing.

Second, resource inequality highlights a tension between two different missions of education. On the one hand, higher education, especially elite education, is a means of meritocratic selection, picking out those currently succeeding in K-12 American educational institutions and providing them additional opportunities and resources. On the other hand, education is a means of social uplift, by which people can allegedly transcend difficult circumstances and build a better life for themselves. But what if meritocratic means of selection themselves reflect and reinforce difficult circumstances? In fact, if resource inequality is causing a real effect, then we should expect a standardized test – even one with no discrimination whatsoever – to perfectly recapitulate an unequal society. If education is to be ameliorative of inequality, then institutions of higher education must accept different ability (at least at the time of evaluation) even on a fair test. However, as previously discussed in The Prindle Post, this does not mean that these students are unqualified.

Finally, moving beyond discrimination to unequal resources challenges our understanding of societal change. If we believe the racial achievement gap to reflect discriminatory testing practices, then the natural solution is to change (or eliminate) the test. Better yet is to eliminate the prejudices behind the discrimination by educating ourselves and each other. But what if the racial achievement gap instead reflects the distribution of resources across society? What if people’s starting place is the most significant factor in determining SAT performance? The solution becomes far more ponderous. It may be rebutted that resource inequalities are still ultimately the result of discrimination, merely past discrimination, but this misses the point. For regardless of how we characterize the ultimate historical causes, correcting present discrimination will not automatically address the enduring impacts of the past. Of course, discrimination and material resources interact in complex ways: a lack of resources can lead to differential treatment, and differential treatment to a lack of resources. A natural hypothesis is that challenges facing minorities whose membership is redistributed by birth every generation (e.g., women and LGBTQ+ individuals) – groups that therefore don’t accumulate material disadvantage the way racial minorities can – may be better addressed by tackling discrimination and ideology, whereas resource inequality may require more redistributive solutions. As for the SAT, even if its judicious use is an improvement over college admissions without standardized testing, we should not expect it to overcome the limitations of an unequal society.

The Ethics of Conscription


As the conflict wrought by the occupation of Ukraine enters its third year, the nation struggles to find warm bodies for the front, and its leaders are considering an expansion of the draft. Russia, too, suffers war fatigue as its conscription-fueled invasion trudges on. Surrounding nations, eyeing the conflict and their own military limitations, mull expanding mandatory military service. Seeking to deter Russian hostilities, Latvia reintroduced conscription as of January 1st. Serbia, historically close with Russia but studiously non-committal on the issue of Ukraine, reopened discussions of conscription this January as a way to ensure military preparedness. Even Germany — long gun-shy about all things military — has been reconsidering mandatory service, which it formally ended in 2011.

Russia and Ukraine are focused on the draft to sustain a war effort. Latvia, Serbia, and Germany are considering a general requirement to engage in military service, in peacetime as well as during war. One situation is certainly more urgent than the other, but both assert the government’s right to send its citizens (without consent) to fight and die. How might we justify such incredible power?

The most straightforward justification is that it is simply part of the deal. The “state,” the political institution which reigns sovereign over its people and territories, provides certain privileges and protections. In return, it can impose obligations on its people: taxation, jury duty, mandatory military service, what have you. Under this analysis, the legitimacy of conscription stems from the general political legitimacy of the state and its coercive powers.

A potent concern is consent. How can we justify the state’s power of conscription if people did not explicitly consent to it? This concern echoes across all the state’s coercive powers, but it is especially acute for military service, where so much can be on the line. The most historically influential response from philosophers is essentially hypothetical consent. The idea is that, understanding the situation, a reasonable person would agree to be governed by the state and hence consents in theory. This is hypothetical consent to be governed, not necessarily to conscription specifically. But if we agree that a reasonable person would consent to be governed, to abide by decisions made through the political process, and to the protection provided by the state, then conscription is not far away. However, hypothetical consent clearly has its limitations: imagine the absurdity of hypothetical consent as a defense in cases involving sexual harassment. Moreover, consent typically implies respect for the individuality of personal decisions (regardless of whether others judge them unreasonable).

One might also, while not objecting to the coercive powers of the state generally, take issue with conscription specifically. If government is understood as existing partly to protect certain rights, life among them, then conscription would seem antithetical to the very nature of government. One may respond, though, that the government needs to infringe the rights of some to protect the rights of many. Governments can also provide more flexibility. For example, many European countries with mandatory service (such as Austria) offer a choice between military and civil service.

If conscription can be justified as something citizens owe to the state, an implication of this is that the state needs to hold up its end of the bargain. A state that serves its people is best positioned to ask for service in return. A corrupt or tyrannical state, an unjust war, all these might undermine the legitimacy of conscription. Perhaps unsurprisingly, countries have often adopted a carrot and stick approach to compulsory military service. Revolutionary France, the birthplace of modern conscription, also ensured that military service provided a path of advancement for those serving. In the United States, the GI Bill, initiated at the end of World War II, provides extensive support for education for veterans.

Along these lines we may also worry about a mismatch between who benefits from the state and who pays the price of conscription. During the Vietnam war, the poor and minorities were far less able to avoid the draft than those with more resources. This unfairness is immortalized in the art and music of the time, such as Creedence Clearwater Revival’s “Fortunate Son” or Freda Payne’s “Bring the Boys Home,” which was written in response to the disproportionate deaths of Black Americans.

Alternatively, we may justify conscription (and indeed, the state generally) on the basis of utility — that it provides the most good to the most people. Clearly, mandatory military conscription, especially in times of war, comes with risks. But it can also come with benefits, e.g., enabling a nation to fight off an invader that could otherwise lead to far larger casualties. Arguing for conscription on the basis of benefits, or even necessity, is clearest in a moment of humanitarian crisis. More generally, the challenge is not whether conscription can come with benefits, but whether it is legitimately the best option for the people.

Can changes be made to increase voluntary recruitment? Can technology be used instead of soldiers? Can new alliances be made? In short, is mandatory military service truly the least injurious option? Using benefits to the people as our metric also places war itself in the crosshairs. Some wars, such as repelling invasion, are of uncontroversial public benefit. Other wars — Vietnam, again, is a notable example — seem to be in service of the government but not necessarily its people.

Perhaps we shouldn’t expect conscription to have a clear moral justification at all. The historical roots of conscription lie not in ethical analysis but in military expediency. In early 1800s Europe, once governments had achieved a level of control and centralization sufficient to carry out conscription, it simply became a fact of war. This is not to say that ethical reflection on the matter is not valuable, nor that conscription can never be justified, nor that there are not better and worse ways to implement it. But is a general moral justification what we should expect? Or is it more likely that conscription is often just a government tactic in need of a moral fig leaf?