
ChatGPT and the Challenge of Critical (Un)Thinking

By Marta Nunes da Costa
2 Mar 2023

For the past few weeks there has been growing interest in ChatGPT, the new artificial intelligence language model that was "programmed to communicate with people and provide helpful responses." I was one of the curious who had to try it and figure out why everyone was talking about it.

Artificial intelligence is not new; as an idea it is decades old, first introduced in 1950 by Alan Turing, the British mathematician generally considered the father of computer science. Later, in 1956, John McCarthy coined the term "artificial intelligence" at a conference, giving birth to a new field of study. Today it is everywhere; we use it without even knowing, and advances in the area create entirely new fields of inquiry, bringing with them new ethical dilemmas – from the question of what (if any) moral rights to attribute to A.I., to the design of new digital rights that span different milieus and carry political and legal consequences. See, for instance, the European Union's attempts since 2021 to create a legal framework governing the rights and regulation of AI and its use on the continent.

ChatGPT is something unique – at least for now. While a recent development, it seems almost too familiar – as if it had always been there, just waiting to be invented. It is a Google search on steroids, with much more complexity in its answers and a "human" touch. Once you read the answers to your questions, what calls your attention is not only how fast the answer is provided, but also how detailed it seems to be. It mimics quite well our ways of thinking and communicating with others. See, for instance, what happened when staff members at Vanderbilt University used it to write an email responding to the shooting at Michigan State – a well-written 297-word missive that might otherwise have been well received. However, a line at the bottom of the email – "Paraphrase from OpenAI's ChatGPT AI language model, personal communication, February 15, 2023" – outraged the community. The Associate Dean of the institution soon apologized, saying that the use of the AI-written email contradicted the values of the institution. This is one (of no doubt many) examples of how the use of this technology may disrupt our social and cultural fabric. This new tool brings new challenges, not only for education – how students and professors incorporate it into their practices – but also for ethics.

Contemporary models of education still rely heavily on regular evaluation – a common mission across educational institutions is to foster critical thinking and contribute to the development of active and responsible citizens. Why is critical thinking so valued? Because being reflective – thinking about the reasons why you act and think the way you do – is necessary for fully participating in our social world. Learning is a process through which we form our judgment and, in doing so, build our moral identities – who we are and what we value. To judge something is not as easy as it may initially seem, for it forces each of us to confront our prejudices, compare them to reality – the set of facts common to all of us, what the world is made of – and take a stand. This process also moves us from an inner monologue with ourselves to a dialogue with others.

What happens when students rely more and more on ChatGPT to do their homework, write their essays, and construct their papers? What happens when professors use it to write their papers or books, or when university deans, as in the example mentioned above, use it to write their correspondence? One could say that ChatGPT does not change, in essence, the practices already in place today, given the internet and all its search engines. But insofar as ChatGPT is superior at mimicking the human voice, might its greatest danger lie in fostering laziness? And shouldn't we consider this laziness a moral vice?

In the Vanderbilt case, what shocked the community was the lack of empathy. After all, delegating this task to AI could be interpreted as "pretending to care" while fooling the audience. To many it seemed a careless shortcut taken to save time. Surely it shows poor judgment; it just feels wrong. It seems to betray a lack of commitment to the purpose of education – the dedication to examine and think critically. In this particular context, technological innovation appears as nothing more than a privileged means of eroding the very thing it was supposed to contribute to, namely, thoughtful reflection.

While technologies tend to make our lives more comfortable and easier, it's worth remembering that technologies are a means to something. As Heidegger pointed out in his emblematic text "The Question Concerning Technology" (1954), we tend to let ourselves be charmed and hypnotized by technology's power while forgetting the vital question of purpose – not the purpose of technology, but the purpose of our lives as humans. And while ChatGPT may be great for providing context and references on virtually any topic of research, we cannot forget that the experience of conscious thinking is what makes us uniquely human. Despite all appearances of coherent and well-ordered prose, ChatGPT is only mirroring what we, humans, think. It still lacks, and cannot mimic, our emotions and our ability to respond in a singular manner to specific situations.

If we generalize and naturalize the use of these kinds of technologies, incorporating them into our daily lives, aren't we choosing non-thinking, preferring an instantaneous response that serves a strictly utilitarian purpose? Heidegger says that "technology is a mode of revealing," insofar as what we choose (or do not choose) reveals the ways in which we frame our world. And if we choose not to think – believing that something else can "mirror" our possible thought – aren't we abdicating our moral autonomy, suspending the human task of reflecting, comparing, and judging, and instead embracing a "dogmatic" product of a technological medium?

Marta is a Professor of Philosophy at the Federal University of Mato Grosso do Sul, Brazil. Holder of a Ph.D. in Political Science (New School for Social Research, 2006), she does research on the intersection between politics and morals, as well as on the relationship between human nature, science, and technology. In her latest book, Democratic Despotisms (2022, Cambridge Scholars), Marta discusses totalitarian tendencies in contemporary democracies and their anthropological, political, and moral implications.