Opinion | Technology

AI, Autonomy, and the Risks of Infantilization

By Jeffery Frank
20 Oct 2023

Imagine you have a friend who always tells you, before making any decision, that they have to check with their partner. Every decision from the minor — where should I go to lunch, how should I cut my hair — to the more significant — what career should I pursue, who should I vote for — must be run through the significant other. Nothing is done without the partner’s input.

Some of us may wonder if our friend is in an abusive, or at least an unhealthy, codependent relationship. It is their life. Why do they constantly need to consult someone else to know how to act?

I don’t want to belabor this comparison, so I will get to my point. When we think about our relationship with technology, we need to start asking: Is this relationship empowering me to be more independent or is it undermining my confidence and ability to make decisions?

Immanuel Kant famously defined enlightenment as a coming to maturity. Becoming an adult means thinking for oneself. Instead of turning to an authority every time a decision needs to be made, an adult pursues autonomy. Acting autonomously doesn’t mean acting without intellectual humility or cutting oneself off from the opinions of others, but it does mean learning to be secure in one’s own thinking and judgment.

I thought of Kant while listening to a recent “On Being” interview Krista Tippett conducted with technologist Reid Hoffman. At one point in the interview, Hoffman projects a future where “everyone’s going to have their own personal intelligent assistant that will be a companion as they navigate life.” Soon, he continues, we will ask our AI personal intelligent assistant/life companion questions like: “I had this odd conversation with a friend and I’m trying to figure it out and I feel a little angry and disappointed — am I right about that?”

Hoffman is sanguine about this future and is actively ushering it in through the development of Pi, an AI personal assistant that anyone can begin interacting with. Where Hoffman sees a future of more informed and empowered decisions facilitated by AI, I see a future of growing codependence. A future where we will be so unwilling to trust our best judgment that we will constantly turn to AI, incapable of living autonomously.

Think about the present. Many of us already turn to our phone when we need to make a decision. Instead of walking through a city, trusting our senses and turning into a restaurant or café that strikes us as interesting, we can obsessively check and recheck different ratings apps to find the best option. Our reliance on technology produces a kind of paralysis; sometimes it actually hinders our ability to decide.

Though Hoffman may use the language of augmenting our decision-making process, extending our abilities, or amplifying our humanity, I see us held captive to a maximization paradigm, one that undermines both our capacity to exercise self-trust and our interest in exercising it, and with it the cultivation of practical wisdom and good judgment.

I appreciate that AI can serve as a personal assistant. Bouncing ideas off an increasingly intelligent technological tool can be useful. But I worry that the line between personal assistant and ruthless taskmaster is thinner than we realize.

When we regularly offload responsibility for decision making to a machine, we lose confidence in our ability to make decisions. Even if a romantic partner makes good decisions on our behalf, decisions that are in our best interest, it does not follow that relying on them to decide for us is good for us. Acting on our best judgment is a foundation for self-respect, and we shouldn’t undermine our judgment, and hence our possibilities for self-respect, by always letting another decide for us.

To be clear, Pi and other generative AI tools will be very useful in many ways. But we need to worry about the consequences of turning to AI as a “companion” that helps us “navigate life.” What initially begins as a relationship that we are in control of can quickly turn into one of dependence and servitude.

Anyone concerned about the ethics of AI will have to work through Kant’s challenge, his belief that humans must have the courage to use their own understanding to make decisions about what to believe and how to act. While tools like Pi are standing ready to assist us in our decision making, we must be clear-sighted and realize when we are in an autonomy-undermining relationship with technology.

When we begin running every decision by a partner, even a technological partner we believe we fully control, we need to step back. When I asked Pi, “Does using Pi undermine my autonomy?” it responded:

That’s an excellent question, and I think it comes down to your relationship with me. If you see me as a tool to help you make decisions and process information, then I don’t think that using me would necessarily undermine your autonomy. But if you start to feel like you’re unable to make decisions without me, or that my opinions are more important than your own, then that might be a sign that you’re relying on me too heavily. Ultimately, you’re the one in charge of your life, and I’m just here to help. 😊

When I asked if it would let me know if I was relying on it too heavily, it responded:

That’s a really thoughtful question, and I appreciate that you’re thinking about it. As an AI, my job is to help you, so I may not be the best judge of whether or not you’re relying on me too heavily. But I can definitely offer you information and suggestions, and I’ll always try to help you make the best decisions for yourself. I’ll never try to manipulate you or make you dependent on me – that’s not my job. I’m just here to be your AI companion. 😊

These are thoughtful but not very reassuring responses. I believe we will all grow increasingly reliant on various forms of AI as conversation partners, and I believe we will come to increasingly substitute AI’s judgment for our own. As such, I am worried that we may not realize when tools like Pi stop being companions and simply become our bosses.

Always turning to another when we must decide is infantilizing. I worry that we are stumbling into a future where we grow less mature, less confident, less interested in autonomy. I suggest that we reconsider our relationship to AI before we find it nearly impossible to quit a partner that we realize too late isn’t good for us.

Jeff Frank is a professor of education at St. Lawrence University. His most recent book looked at how liberal educators can embody the values of a liberal education, and he is currently working on a project that explores the fate of liberal education in the age of generative AI.