Incentive, Risk, and Oversight in the Pork Industry

On September 17th, the U.S. Department of Agriculture announced an updated rule set for pork industry regulators; in addition to removing limits on production line speeds, the Food Safety and Inspection Service (FSIS) will soon allow swine slaughterhouses to hire their own process control inspectors to maintain food safety and humane handling standards instead of relying on federal monitors. Critics argue that this move is an unconstitutional abuse of power that will likely lead to less secure operations, thereby increasing the risk to animals, workers, and consumers.

Under the current system, hog slaughterhouses are allowed to slaughter a maximum of 1,106 animals per hour (roughly 1 pig every 3.3 seconds) and must operate under the watch of multiple FSIS employees. These inspectors review each animal at several points in the killing and disassembly process, ensuring proper handling and removing animals or carcasses from the line that appear to be sickly or otherwise problematic. Notably, these monitors have the authority both to slow down and to stop the production line in the interest of preserving sanitary conditions.

But under the New Swine Slaughter Inspection System (NSIS), the limit on per-hour animal slaughter will be removed and pork producers will be allowed to hire employees of their own to replace FSIS inspectors, thereby allowing the FSIS to reassign its monitors elsewhere. Proponents of the move suggest that this deregulation will promote efficiency without increasing overall risk. As Casey Gallimore, a director with the North American Meat Institute (a trade organization supporting pork and other meat producers) explains, the industry’s new hires will be highly trained and FSIS inspectors will still have a presence inside farming operations; whereas a plant might have once had seven government monitors on its production line, “There’s still going to be three on-line [FSIS] inspectors there all of the time.”

Overall, industry groups estimate that, under these new rules, as much as 40% of the federal workforce dedicated to watching over the pork industry will be replaced by pork industry employees. Given that a 2013 audit of FSIS policies indicated that their current implementation was already failing to meet expectations for worker safety and food sanitation, it is unclear how reducing the number of FSIS employees will improve this poor record.

For critics, removing speed limits drastically increases the risk to slaughterhouse employees, and introducing corporate loyalty into the monitoring equation further threatens to dilute the effectiveness of already-flimsy federal regulations on slaughterhouse management. Because industry employees will remain beholden to their corporate bosses (at the very least, to the degree that those bosses sign their paychecks), they will have little incentive to make decisions that could cut into profitability – particularly slowing or stopping the production line.

According to Marc Perrone, president of the United Food and Commercial Workers International Union (which represents at least 30,000 employees of the pork industry), “Increasing pork-plant line speeds is a reckless corporate giveaway that would put thousands of workers in harm’s way as they are forced to meet impossible demands.” The FSIS argues that available public data suggests that faster line speeds don’t threaten worker safety; currently, though, there is no national database specifically designed to track packing house injuries and accidents.

It might be the case that industry officials will be able to consistently promote the safety and security of the employees under their care, but a concern raised by Socrates gives us cause to be skeptical. In Book III of The Republic, Plato has Socrates discuss the nature of the ruling guardian class, often called “philosopher-kings,” in his idealized city; Socrates insists that, because the guardians are naturally inclined to be virtuous individuals and because they have been carefully trained within a structured society designed to promote their inborn goodness, they do not, themselves, need guardians of their own – indeed, one of Socrates’ interlocutors even jokes “that a guardian should require another guardian to take care of him is ridiculous indeed.” Centuries before Juvenal asked “But who is to guard the guards themselves?,” Plato argued that the best guards would not actually need guarding at all.

Later philosophers would lack Plato’s optimism: ethicists would construct normative systems with plenty of rules to advise the less virtuous, constitution writers would build layers of checks and balances into divided branches of government, and policy makers would insist on impartiality as a necessary condition for truly effective monitoring. Unless the pork industry can provide us with some reason to think that the NSIS inspectors it will soon be hiring have been “framed differently by God…in the composition of these [God] has mingled gold” (and have, furthermore, cultivated that virtue over a lifetime of study and practice), we have good reason to suspect that they do, in fact, need watching.

For what it’s worth, Socrates also thought that the guardians should not be allowed to own private property, but that might really be asking too much of the pork industry.

The Persistent Problem of the Fair Algorithm

At first glance, it might appear that the mechanical procedures we use to accomplish such mundane tasks as loan approval, medical triage, actuarial assessment, and employment screening are innocuous. Designing algorithms to process large chunks of data and transform various individual data points into a single output offers great power in streamlining necessary but burdensome work. Algorithms advise us about how we should read the data and how we should respond. In some cases, they even decide the matter for us.

It isn’t simply that these automated processes are more efficient than humans at performing these computations (emphasizing the relevant data points, removing statistical outliers and anomalies, and weighing competing concerns). Algorithms also hold the promise of removing human error from the equation. A recent study, for example, identified a tendency for judges on parole boards to become less and less lenient in their rulings as the day wears on. By removing extraneous factors like these from the decision-making process, an algorithm might be better positioned to deliver justice.

Similarly, another study established the general superiority of mechanical prediction to clinical prediction in various settings from medicine to mental health to education. Clinical predictions were most notably outperformed when a clinical interview was conducted. These findings reinforce the position that algorithms should augment or replace human decision-making, which is often plagued by prejudice and swayed by sentiment.

Despite their great promise, algorithms raise a number of concerns, chief among them problems of bias and transparency. Because they are often seen as free from bias, algorithms are cast as neutral arbiters, capable of combating long-standing inequalities such as the gender pay gap or unequal sentencing for minority offenders. But automated tools can just as easily preserve and fortify existing inequalities when introduced to an already discriminatory system. Algorithms used to assign bond amounts and inform sentencing have underestimated the risk of white defendants while overestimating that of black defendants. Popular image-recognition software reflects significant gender bias. Such processes mirror and thus reinforce extant social bias: the algorithm simply tracks, learns, and then reproduces the patterns that it sees.

Bias can result from a non-representative sample that is too small or too homogeneous. But bias can also be a consequence of the kind of data the algorithm draws on to make its inferences. While discrimination laws are designed to restrict the use of protected categories like age, race, sex, or ability status, an algorithm might learn to use a proxy, like zip code, that produces equally skewed outcomes.
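
To make the proxy worry concrete, here is a minimal sketch in Python, with every name and number invented for illustration: a toy lending model is trained without ever seeing the protected attribute, but because zip code correlates with group membership in the hypothetical historical data, the approval disparity reappears anyway.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical toy population: the protected attribute ("group") is never shown
# to the model, but group A lives mostly in zip 11111 and group B in zip 22222.
applicants = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    zip_code = random.choices(
        ["11111", "22222"],
        weights=[0.9, 0.1] if group == "A" else [0.1, 0.9],
    )[0]
    # Historical approvals in this invented data track zip code, not merit:
    # an artifact of past lending patterns.
    approved = random.random() < (0.8 if zip_code == "11111" else 0.3)
    applicants.append({"group": group, "zip": zip_code, "approved": approved})

# "Training": learn the historical approval rate per zip code. The model never
# sees the group label at all.
outcomes_by_zip = defaultdict(list)
for a in applicants:
    outcomes_by_zip[a["zip"]].append(a["approved"])
learned_rate = {z: sum(v) / len(v) for z, v in outcomes_by_zip.items()}

def model_approves(applicant):
    """Approve when the learned approval rate for the applicant's zip exceeds 0.5."""
    return learned_rate[applicant["zip"]] > 0.5

# Evaluate approvals by the (hidden) group label: the historical disparity
# reappears even though the protected attribute was excluded from the model.
for g in ["A", "B"]:
    members = [a for a in applicants if a["group"] == g]
    approved_count = sum(model_approves(a) for a in members)
    print(f"group {g}: {approved_count / len(members):.0%} approved")
```

Dropping the protected column, in other words, accomplishes little when a correlated feature carries the same information into the model.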

Similarly, predictive policing — which uses algorithms to predict where a crime is likely to occur and determine how to best deploy police resources — has been criticized as “enabl[ing], or even justify[ing], a high-tech version of racial profiling.” Predictive policing creates risk profiles for individuals on the basis of age, employment history, and social affiliations, but it also creates risk profiles for locations. Feeding the algorithm information which is itself race- and class-based creates a self-fulfilling prophecy whereby continued investigation of Black citizens in urban areas leads to a disproportionate number of arrests. A related worry is that tying police patrol to areas with the highest incidence of reported crime grants less police protection to neighborhoods with large immigrant populations, as foreign-born citizens and non-US citizens are less likely to report crimes.
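
The feedback loop critics describe can be illustrated with a short, purely hypothetical Python simulation (the neighborhoods, rates, and patrol counts are all invented): patrols are allocated wherever past arrests were recorded, new arrests can only be recorded where patrols are sent, and so an initial disparity in the data persists round after round even though the two areas have identical underlying offense rates.

```python
import random

random.seed(1)

# Purely hypothetical numbers: two neighborhoods with the SAME underlying
# offense rate, but "north" starts with more recorded arrests (say, because of
# historical over-policing), so the model ranks it as higher risk.
true_offense_rate = {"north": 0.10, "south": 0.10}
recorded_arrests = {"north": 60, "south": 40}
TOTAL_PATROLS = 100

for round_number in range(1, 6):
    # "Prediction": allocate patrols in proportion to arrests recorded so far.
    total_recorded = sum(recorded_arrests.values())
    patrols = {
        area: round(TOTAL_PATROLS * count / total_recorded)
        for area, count in recorded_arrests.items()
    }
    # Arrests can only be recorded where patrols are actually sent, so more
    # patrols in an area mean more recorded arrests there.
    for area, n_patrols in patrols.items():
        encounters = n_patrols * 10  # each patrol makes 10 stops (invented)
        new_arrests = sum(
            random.random() < true_offense_rate[area] for _ in range(encounters)
        )
        recorded_arrests[area] += new_arrests
    north_share = recorded_arrests["north"] / sum(recorded_arrests.values())
    print(f"round {round_number}: patrols={patrols}, "
          f"north's share of recorded arrests = {north_share:.0%}")
```

Nothing in the loop ever tests whether the under-patrolled area is really lower-risk; the prediction simply keeps confirming itself.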

These concerns of discrimination and bias are further complicated by issues of transparency. The very function the algorithm was meant to serve — computing multiple variables in a way that surpasses human ability — inhibits oversight. It is the algorithm itself which determines how best to model the data and what weights to attach to which factors. The complexity of the computation as well as the use of unsupervised learning — where the algorithm processes data autonomously, as opposed to receiving labelled inputs from a designer — may mean that the human operator cannot parse the algorithm’s rationale and that it will always remain opaque. Given the impenetrable nature of the decision-mechanism, it will be difficult to determine when predictions objectionably rely on group affiliation to render verdicts and who should be accountable when they do.

Related to, but separate from, concerns of oversight are questions of justification: What are we owed in terms of an explanation when we are denied bail, declined for a loan, refused admission to a university, or passed over for a job interview? How much should an algorithm’s owner be able to say to justify the algorithm’s decision, and what do we have a right to know? One suggestion is that individuals are owed “counterfactual explanations,” which highlight the relevant data points that led to the determination and offer ways in which one might change the decision. While this sort of justification would offer recourse, it would not reveal the relative weights the algorithm places on the data, nor would it explain why the algorithm considers those data points relevant in the first place.
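
A rough sketch of what such a counterfactual explanation provides, and what it withholds, using a hypothetical linear credit-scoring model whose features, weights, and threshold are entirely invented for illustration:

```python
# Hypothetical linear scoring model: the features, weights, and threshold are
# invented; a real deployed model would be more complex and hidden from the applicant.
WEIGHTS = {"income_thousands": 0.5, "debt_thousands": -0.8, "years_employed": 1.2}
THRESHOLD = 30.0  # score required for approval

def score(applicant):
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def counterfactual_explanation(applicant):
    """For each feature, report the smallest single-feature change that would
    flip a denial into an approval. This offers recourse without directly
    revealing the weights or why the model relies on these features at all."""
    gap = THRESHOLD - score(applicant)
    if gap <= 0:
        return ["approved; no counterfactual needed"]
    suggestions = []
    for feature, weight in WEIGHTS.items():
        needed = gap / weight  # change in this feature that closes the gap
        direction = "increase" if needed > 0 else "decrease"
        suggestions.append(f"{direction} {feature} by {abs(needed):.1f} to be approved")
    return suggestions

applicant = {"income_thousands": 40, "debt_thousands": 10, "years_employed": 2}
print(f"score = {score(applicant):.1f} (threshold {THRESHOLD})")
for suggestion in counterfactual_explanation(applicant):
    print(suggestion)
```

The applicant learns what to change in order to be approved, but nothing about the weights themselves or about why those features were deemed relevant in the first place.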

These problems concerning discrimination and transparency share a common root. At bottom, there is no mechanical procedure that can generate an objective standard of fairness. Invariably, determining that standard will require the deliberate assignment of different weights to competing moral values: What does it mean to treat like cases alike? Should group membership determine one’s treatment? How should we balance public good and individual privacy? Public safety and discrimination? Utility and individual rights?

In the end, our use of algorithms cannot sidestep the task of defining fairness. It cannot resolve these difficult questions, and is not a surrogate for public discourse and debate.

