The AI Crisis and The Unexamined Life
One question that I return to over and over again is this: What am I doing? Beneath this question are more questions. What do I want to do? What should I do? What can I do? Is any of this worth doing? Does what I want matter, or should I do what others want me to do? What needs to be done?
This line of questioning takes us back to the Apology of Socrates. As Socrates defends himself before the court against the charges brought against him (corrupting the youth, claiming to be wise about that which he isn’t, teaching new gods in place of the old ones), he says the following:
And on the other hand, if I say that this even happens to be a very great good for a human being–to make speeches every day about virtue and the other things about which you hear me conversing and examining both myself and others–and that the unexamined life is not worth living for a human being, you will be persuaded by me still less when I say these things. This is the way it is, as I affirm, men; but to persuade you is not easy.
From this the familiar philosophical dictum arises: the unexamined life is not worth living.
While this slogan has stuck with me from the earliest days of my philosophical career, I find myself thinking about it more than usual these days as we continue to barrel through the AI Crisis. This is because AI is encroaching on uniquely human activities; whether real or mimicry, AI emulates thought, rationality, creativity, and agency to such a degree that we can outsource a great deal of our lives to it.
The AI Crisis is unfolding each day and its magnitude as a crisis is currently unclear. There are several components to the crisis, each of which is being debated and analyzed by scholars, technology experts, users, and everyone in between. We can call it a crisis because its scale of influence is global, its potential impacts are wide ranging, some of the worst-case scenarios are plausible, and the spheres of impact have serious stakes.
While the list below is incomplete, we can currently identify four major components of the AI Crisis.
The first component of the crisis is job displacement.
One of AI’s most marketed value propositions is that it can simulate the function and output of intelligence and that this is good for business. If AI can do this with any modicum of success, it threatens virtually any job that requires human intelligence.
Within the debate about AI is a question about the label “intelligence.” AI evangelists take the term quite seriously: many think AI really is intelligent, and that if it is not intelligent now, it eventually will be. AI skeptics lament the term, arguing that AI intelligence is an illusion and that any modicum of intelligence we can ascribe to AI really ought to be credited to the human labor powering AI and keeping it running smoothly.
At least for immediate job displacement, whether the intelligence is real or a façade is somewhat irrelevant. All that is required for the threat of job displacement to actualize is that people believe AI is intelligent. If they believe it to be the case, then this is enough to get the ball rolling on finding reasons to remove human workers from the loop. Right now, experts are mixed on whether or not AI is contributing to job displacement, but there is concern that it is inevitable.
Whether AI is able to live up to its labor-replacing hype will determine its long-term viability as a worker replacement. But even if it falls short of its promised performance, we should not underestimate society’s willingness to accept less-than-ideal labor if it saves someone money.
The second component of the AI Crisis is wealth distribution.
The promise of AI is centered on doing more with less. There is a race to figure out precisely how to use AI so that we can accomplish what we want to accomplish with fewer people, fewer resources, and less ability. Whoever cracks the code first stands to gain an enormous amount of wealth, and this is something we are already seeing unfold. It is a capitalist dream to have a labor force that does not need to be paid wages, given sick leave or vacation, or provided breaks; there are no human needs to pull its attention away from work.
The third component is about political domination.
AI systems cannot be separated from our political reality. AI is becoming intertwined with our social infrastructure at a multitude of levels. It is infused into our word-processing and data-processing software, our search engines and email, our phones and computers. By virtue of being bound up in our information and communication technologies, it will inevitably shape what we see, what we know, how we talk, and how we think. AI is being used by the military, by police forces, and by immigration enforcement. It is used by lawmakers and the courts. And, as every educator knows, it is being used by our students, who are quite literally the future.
If AI were politically and morally neutral, this would be less alarming; but AI is not politically and morally neutral. It must be given values and red lines; it must be told what it is and is not allowed to say. And, as is well established by now, the data on which it relies itself contains an immense amount of bias and morally coded information. Insofar as AI is generating profit, all of its operations must also somehow conform to that endeavor.
I am not using the term domination flippantly. By domination, I am drawing on the work of Philip Pettit, who identifies domination as a kind of arbitrary power that one agent holds over another in the sphere of interpersonal relationships.
One way AI political domination occurs is through AI ownership. Insofar as a small handful of AI systems dominate the information and communication sphere, our way of understanding our political reality will be shaped by those who control and shape these AI systems. Another way is more diffuse and arises from the sprawling and unpredictable nature of AI systems. Insofar as AI systems may not be fully intelligible or controllable by their makers or by those who seek to regulate them, we will be dominated by an AI-infused informational reality in which all of us are subjected to the interpretations and nudges of AI systems.
The fourth component of the AI Crisis is about responsibility.
AI threatens responsibility at several levels. First, AI will make it more difficult to track causal responsibility, which is simply the tracing of who or what is responsible for bringing about a particular state of affairs (something that is already remarkably difficult without considering the role of AI).
When AI is involved in a decision-making procedure and a mistake is made, AI muddies the causal waters in a particularly confusing way because it mimics agency. When someone is wrongly identified by AI as a wanted person, it is clear that AI is in part causally responsible. But so are the people who designed the AI system, the people in charge of updating the code that runs the AI, and the officers who used the system to make the arrest in the first place. The problem we face will only grow as AI continues to be embedded into our social structures.
Second, we lose track of moral responsibility. When an AI agent is involved in a decision-making procedure and something goes off the rails morally, it is not clear-cut where we can place the moral blame. While the human-in-the-loop strategy attempts to eliminate this moral ambiguity, its implementation has serious limits. If the use of AI becomes standard practice, AI is deemed generally trustworthy, and a morally condemnable act happens as a result of its use, it is hard to see how we can blame the human in the loop when they were trusting a system they have been socially conditioned to trust.
Third, we lose track of merit responsibility. What I have in mind by merit is simply the responsibility we assign people for doing good acts and producing interesting or praiseworthy things in the world. This applies to artists and athletes as much as it does to the romantic partner or friend, the hardworking employee, or the studious student. When a musician who studied music theory and put countless hours into practicing their instrument records tens or hundreds of takes and meticulously mixes the final product, it is clear who deserves the praise and merit for creating the music. When someone with an idea for a song uses AI to produce all the instrumentation, refine the lyrics, structure the melody and pacing of the song with all of its varying sections, and ensure all the instruments are in the right key, the placement of merit is far less clear. For what, if anything, is the prompt engineer praiseworthy?
Back to the Examined Life
One of the unifying features of these components is that the AI Crisis is ultimately an existential one. As I ponder the AI Crisis, I am led back to some of the most basic philosophical questions: What are we doing here? What kind of lives do we want to live? How should we spend our time? What kind of world do we want to build for ourselves and each other?
Instead of waiting to see where the AI dust settles and just how far the crisis will go, we can insulate ourselves now by remembering the wisdom of Socrates. AI is wrapped up in human activity and how we spend our time. It is wrapped up in questions of morality and virtue and responsibility and meaning. It is bound up in the Socratic dictum that the unexamined life is not worth living.