
A New Kind of Risk?

We usually expect to be held accountable for our actions – for both results we intend, and those we do not. We expect, for example, that a car company will ensure that a vehicle doesn’t have major flaws that could result in serious harm before they sell it to customers. To not consider the risks would be negligent, and this is why recalls often look bad for such companies.

But what about algorithms? Should we have a similar expectation that a corporation developing an algorithm – one that detects cancer, say, or one that detects whether someone is passing off AI-generated content as their own – will make sure there are no significant flaws in its product before selling it? What if there is no way it could reasonably do so? Given that algorithms can generate erroneous results that cause serious harm, what is a reasonable standard when it comes to product testing?

In one of the chapters of my forthcoming book on the ethics of AI, I consider a hypothetical issue involving ChatGPT and a professor who might use an algorithm to accuse a student of passing off ChatGPT-written work as their own. There are a great many ethical issues involved when we don’t understand the algorithm and how it might generate false positive results. This has already become a serious issue as students are now being falsely accused of handing in AI-generated work because an algorithm flagged it. A Bloomberg Businessweek study on the services GPTZero and Copyleaks found a 1-2% false positive rate. While that may not sound like a lot, it can mean that millions of students will be falsely accused of cheating with almost no way of defending themselves or receiving an explanation as to what they did wrong.
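For a sense of scale, here is a rough back-of-the-envelope sketch. The 1-2% false positive rate is the figure Bloomberg reports; the number of students screened and the number of essays per student are illustrative assumptions, not reported data.

```python
# Rough arithmetic on the scale of false accusations from AI detectors.
# The false positive rate comes from the Bloomberg figure cited above;
# the student and essay counts are illustrative assumptions only.
students_screened = 20_000_000   # assumption: students whose work goes through a detector
essays_per_student = 5           # assumption: screened essays per student per year
false_positive_rate = 0.01       # low end of the reported 1-2% range

essays_screened = students_screened * essays_per_student
expected_false_flags = int(essays_screened * false_positive_rate)
print(f"Essays screened per year: {essays_screened:,}")
print(f"Expected false accusations per year: {expected_false_flags:,}")
# With these assumptions: 100,000,000 essays screened, ~1,000,000 wrongly flagged,
# and that is at the low end of the reported error rate.
```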

According to Bloomberg, these interactions are already ruining academic relationships between teachers and students. Some students have taken to recording themselves writing their entire papers just to be able to disprove the algorithm. Others now worry about sounding “too robotic” lest they be accused themselves, a problem that is especially acute for ESL and neurodivergent students. Should we consider an AI developer whose faulty product generates these kinds of results negligent?

Philosophers of science generally agree that researchers have an obligation to assess inductive risk concerns when accepting a conclusion. In other words, they need to consider what the moral consequences of potentially getting it wrong might be and then consider whether a higher or lower standard of evidence might be appropriate. If, for example, we were testing a chemical to determine how hazardous it is, but the test was only accurate 80% of the time, we would likely demand more evidence. Given the potential harm that can result and the opaqueness of algorithms, AI developers should be similarly conscientious.
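To make the inductive risk point concrete, here is a minimal worked version of the chemical-testing example. The 80% accuracy figure comes from the paragraph above; the 10% base rate of hazardous chemicals is an assumption added purely for illustration.

```python
# How much does a single 80%-accurate test really tell us?
# Bayes' rule with an assumed 10% base rate of hazardous chemicals.
prior_hazardous = 0.10   # assumption: 10% of tested chemicals are hazardous
sensitivity = 0.80       # P(test flags hazard | chemical is hazardous)
specificity = 0.80       # P(test reads clean | chemical is safe)

# Probability the chemical is hazardous even though the test came back clean.
p_clean = (1 - sensitivity) * prior_hazardous + specificity * (1 - prior_hazardous)
p_hazard_given_clean = (1 - sensitivity) * prior_hazardous / p_clean
print(f"P(hazardous | clean test) = {p_hazard_given_clean:.3f}")  # ~0.027
# A clean result still leaves roughly a 3% chance of real hazard. Whether that
# residual risk is acceptable depends on the moral cost of being wrong, which
# is exactly why inductive risk can demand a higher standard of evidence.
```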

If an algorithm operates as a black box, the developer may have a good understanding of how to create an algorithm – they will understand that the model can take in various inputs and translate those into outputs – but they will not be able to retrace the steps the model used to arrive at its conclusion. In other words, we have no idea what evidence an algorithm like GPTZero is relying on when it concludes that a piece of text was generated by AI. If the AI developer doesn’t know how the algorithm is using input data as evidence, they cannot evaluate the inductive risk concerns about whether that evidence is sufficient.

Still, despite the opacity, there are ways an AI developer might attempt to address their inductive risk responsibilities. Koray Karaca argues that developers can build inductive risk considerations into their models by using cost-sensitive machine learning, assigning different costs to different kinds of errors. In the case of AI detectors, the company Turnitin claims to intentionally “oversample” underrepresented students (especially ESL students). By oversampling in this way, the evidentiary standard by which different forms of writing are judged is fine-tuned.
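To illustrate what cost-sensitive learning can look like in practice, here is a minimal sketch using scikit-learn. The synthetic data and the 5:1 cost ratio are placeholder assumptions for illustration; this is not Turnitin’s or GPTZero’s actual method.

```python
# A minimal sketch of cost-sensitive learning: different error types are
# assigned different training costs. Data and weights are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for text features; class 0 = human-written, 1 = AI-generated.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Penalize mistakes on human-written work (class 0) five times more heavily,
# encoding the judgment that a false accusation is the costlier error.
model = LogisticRegression(class_weight={0: 5.0, 1: 1.0}, max_iter=1000)
model.fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"False positives (humans wrongly flagged): {fp}, false negatives: {fn}")
```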

Still, there is little accounting for what correlations a model might rely on, making it difficult to explain to students who do get falsely accused why they are being accused in the first place. AI developers have struggled to assess the reliability of their models or evaluate the risks when those correlations are used in error. This issue becomes especially concerning when it comes to things like credit reports. If you don’t know how or why a model compiles a credit report, how can you manage those risks of error? How much must a developer understand about how their algorithm functions before it is put to use? If a developer is aware of the risks of error but also knows that their algorithm is limited in terms of mitigating those risks, at what point do we consider that negligent behavior? If negligence is essentially something we police as a community, we will need to come together quickly to decide what the promise of AI can and can’t excuse.

“Technological Unemployment” and the Writers’ Strike

photograph of writers' strike sign

On September 27th, the months-long writers’ strike concluded. Writers worried that ChatGPT and similar text-generation technologies would lead to a future in which scripts are no longer written by humans, but manufactured by machines. Certainly corporations find the technology promising, with Disney, Netflix, and other major entertainment companies vacuuming up AI specialists. Holding signs saying “AI? More like AI-Yi-Yi!” and expressing similar sentiments, writers fought to secure their place in a rapidly changing technological landscape. And when the smoke cleared and the dust settled, the Writers Guild of America had won a surprisingly strong contract. While it does not prohibit the use of AI, it does ensure that human writers will not become the mere handmaidens of computerized text generators – editing and refining machine-generated content.

From the flood of garbage ChatGPT has sent into the internet to the multiplying complexities of intellectual property, Artificial Intelligence is, in many ways, a distinctly modern challenge. But it also follows a well-worn pattern: it continues a long legacy (and revives an old debate) regarding the labor market’s ability to absorb the impact of new technology.

John Maynard Keynes, the famous British economist, used the phrase “technological unemployment” to describe the mismatch in how quickly human labor could be replaced by technological innovation versus how quickly new uses for human labor emerged. For Keynes, this was essentially a lag-time problem caused by rapid technological shifts, and it remains controversial whether “technological unemployment” causes an overall drop in the employment rate or just a momentary hiccup. Regardless, for workers who lose their jobs due to the adoption of new technology, whether jobs are being created just as fast in some other corner of the economy is rather beside the point. Because of this, workers are often anxious, even adversarial, when marked technological change makes an appearance at their workplace.

The most famous example is the Luddites, the British machine-smashing protestors of the early 1800s. With some textile manufacturers all too willing to use new technologies such as the mechanized loom to replace and undercut skilled laborers, workers responded by destroying these machines.

“Luddite” has since become a term to describe a broader resistance to (or ignorance of) technology. But the explanation that workers resistant to their employers adopting new technologies are simply anti-technology or gumming up the gears of progress out of self-interest is too simplistic. Has “progress” occurred just because there is a new product on the market?

New technology can have disparate effects on society, and few would assert that AI, social media, and smartphones deliver nothing but benefits. Even in cases where technological innovation improves the quality or eases the production of a particular good, it can be debatable whether meaningful societal progress has occurred. Companies are incentivized to simply pocket savings rather than passing on the benefits of technological advancement to their employees or customers. This represents “progress,” then, only if we measure according to shareholder value or executive compensation. Ultimately, whether technological advance produces societal progress depends on which particular technology we’re talking about. Lurking in the background are questions of who benefits and who gets to decide.

In part, the writers’ strike was over just this set of questions. Entertainment companies no doubt believe that they can cut labor costs and benefit their bottom line. Writers, however, can also benefit from this technology, using AI for editing and other purposes. It is the writers’ assertion that they need to be part of the conversation about how this technology – which affects their lives acutely – should be deployed, as opposed to a decision made unilaterally by company leadership. But rather than looking to ban or destroy this new technology, the writers were simply demanding guardrails to protect against exploitation.

In the same 1930 essay where he discussed technological unemployment – “Economic Possibilities for our Grandchildren” – Keynes raised the hope of a 3-hour workday. The economist watched the startling increases in efficiency of the early 20th century and posited, naturally enough, that a glut of leisure time would soon be upon us. Why exactly this failed to materialize is contentious, but it is clear that workers have not been the prime beneficiaries of productivity gains, whether in leisure time or in pay.

As Matthew Silk observed recently in The Prindle Post, many concerns about technology, and especially AI, stem not from the technology itself but from the benefits being brought to just a few unaccountable people. Even if using AI to generate text instead of paying for writers could save Netflix an enormous amount of money, the bulk of the benefits would ultimately accrue to a relatively small number of corporate executives and major shareholders. Most of us would, at best, get more content at a slightly cheaper price. Netflix’s writers, of course, lose their jobs entirely.

One take on this is that it is still good for companies to be able to adopt new technologies unfettered by their workers or government regulations. For while it’s true that the writers themselves are on the losing end, if we simply crunch the numbers, perhaps shareholder gains and savings to consumers outweigh the firing of a few thousand writers. Alternatively, though, one might argue that even if there is a net societal benefit in terms of resources, it is swamped by the harms associated with inequality: that a deeply unequal society brings attendant problems – such as people being marginalized from democratic political processes – that are not adequately compensated for merely by access to ever-cheaper entertainment.

To conclude, let us accept, for the sake of argument, that companies should be free to adopt essentially whatever technologies they wish. What should then be done for the victims of technological unemployment? Society may have a pat response blaming art majors and gender studies PhDs for their career struggles, but what about the experienced writing professional who loses their job when their employers decide to replace them with a large language model?

Even on the most hard-nosed analysis, technological unemployment is ultimately bad luck. (The only alternative is to claim that workers are expected to predict and adjust for all major technological changes in the labor market.) And many philosophers argue that society has at least some duty to help those suffering from things beyond their control. From this perspective, unemployment caused by rapid technological change should be treated more like disaster response and preparedness. It is either addressed after the fact with a constructive response like robust unemployment insurance and assistance getting a new job, or addressed pre-emptively through something like universal basic income (a possibility recently discussed by The Prindle Post’s Laura Siscoe).

Whatever your ethical leanings, the writers’ strike has important implications for any livelihood.

How to End ChatGPT Cheating Immediately and Forever

photograph of laptop and calculator with hands holding pen

How do we stop students from turning in essays written with the help of ChatGPT or the like? We cannot.

How do we stop students from cheating by using ChatGPT or the like to write their papers? We stop treating it as cheating.

It’s not magic. If we encourage students to use ChatGPT to create their papers, it won’t be cheating for them to turn in a paper created that way. In the long run, this may be the only solution to the great ChatGPT essay crisis.

Most teachers who rely on student essays as part of the learning process are in panic mode right now about the widespread availability of ChatGPT and other “large language model” AIs. As you have probably heard by now, these LLMs can write passable (or better) essays – especially the standard short, five-page essay used in many classes.

In my experience, and based on what I have heard from other teachers, the essays currently written by LLMs tend to require some revisions to be passable. LLMs also have some blind spots that make unedited LLM papers suspect. For example, they shamelessly fabricate sources, often using real names and real journals but citing articles that do not exist. With a little fixing up, however, LLM papers will usually do, especially if the student is happy with a B. And the quality of LLM-generated essays is only going to get better.

There are many proposals out there about how to fight back. Here are two of my own. Multiple-choice questions are, according to social science research, just as valid and reliable as short-essay questions. As far as I can tell, LLMs are terrible at answering multiple-choice questions. And if you ask an LLM a question you want to use and it gets the answer right, you can either reword the question until the AI fails – or drop it. Another approach, which I have used in my applied ethics classes, is to replace the term paper with in-class debates. For all I know, some students are still using LLMs to write speeches, but it doesn’t really matter. In a debate, the student has to actively defend their ideas and explain why the other side is incorrect. What I care about is whether they really “get” the arguments or not. I think it is working beautifully so far.

Still, students have to learn to write papers. Period. So, what are we to do? Whenever there’s a panic over technological change, I always remember that Socrates and Plato were against the new technology of their time, too. They were against writing. For one thing, Socrates said (according to Plato) that writing would destroy people’s memories if they could just write things down. Of course, we only know about this because Plato wrote down everything Socrates said.

Prefer more recent examples? Digital pocket calculators were the scourge of grade school math teachers everywhere when I was a kid. By the time I got to high school, you were required to bring your calculator to every math class. At one university I was at, students were only allowed to use their laptops in class with special permission. Now, at my current school, all students are required to have a laptop and are usually encouraged to use it in class.

Essay writing will survive the rise of LLMs somehow. But how?

People are going to use whatever useful technology is available. So, as I said, we may as well encourage students to use LLMs to write, and think, better. But is it really not cheating if we simply cease to regard it as cheating?

It’s cheating to turn in a paper that you claim is your own work if it isn’t. It’s not cheating when you have permission to work with someone, or an LLM, on it.

There are at least two important objections to this view, and I will end by describing, though not necessarily settling, them.

One objection is that LLMs are trained by basically consuming most of the internet – along with input from human interlocutors. In other words, the massive amounts of data processed by any LLM are all the work of other people. There are serious concerns about whether this will stifle creativity in the long run. But our question is this: if you turn in a paper created with an LLM, isn’t the LLM’s contribution still plagiarism, since it’s mostly regurgitating stuff it took from others, without regard to copyright or intellectual property rules?

I lack the expertise to settle this. But I do think that the way LLMs learn to write is not very different from the way I learned to write. I read stuff from other people and borrowed their style, their thoughts, and occasionally even their words. Even now, when I think I am being creative, I worry that I have just not read the earlier version of every single sentence I write – which is out there somewhere. I can only say that if LLMs are eventually regarded as inappropriately using material from other people, then I take back my proposal.

But here’s a more tractable objection, one I think I can answer. How should teachers respond to the fact that using an LLM will make it quicker and easier for students to do essays? Especially as the technology improves, it will be easier and easier to feed in a prompt or two and get a passable essay by doing next to nothing.

If we are going to allow the use of LLMs by students, there is one essential change we need to make in our approach to evaluating and grading student essays. We need to raise our grading standards. Raise them dramatically. (I am not the first person to suggest this.) If any student can get a passable essay by doing next to nothing, then with a little work – trying different prompts, editing and rewriting, etc. – they should be able to produce work on a whole new level.

Higher standards are not meant to be punitive. In fact, we may be entering a new era of quality writing. Just as a pocket calculator lets you do some calculations in a way that leaves you freer to do higher math, or being able to write down a grocery list frees up your memory for more important things (like remembering your passwords!), so using LLMs to create and then refine an essay leaves you more time than ever to work on the argument and the quality of the writing. Some people worry that this leads student essayists to think less, but I would argue that, like so many technologies, by taking away some of the “grunt” work it actually gives students more time to think. However, just as you can’t grade a student who does their times tables on the calculator on their phone the same as one who does them from memory, the standard essay produced by a student using an LLM should be held to a much higher standard.

With new technologies, you never know exactly what will be lost and what will be gained. But if a technology is bound to change the world, it’s probably better to work with it, rather than against it.

Who Should Own the Products of Generative AI?

droste effect image of tunnel depicted on laptop screen

Like many educators, I have encountered difficulties with Generative AI (GenAI); multiple students in my introductory courses have submitted work from ChatGPT as their own. Most of these students came to (or at least claimed to) recognize why this is a form of academic dishonesty. Some, however, failed to see the problem.

This issue does not end with undergraduates, though. Friends in other disciplines have reported to me that their colleagues use GenAI to perform tasks like writing code they intend to use in their own research and data analysis or create materials like cover letters. Two lawyers recently submitted filings written by ChatGPT in court (though the judge caught on as the AI “hallucinated” case law). Now, some academics even credit ChatGPT as a co-author on published works.

Academic institutions typically define plagiarism as something like the following: claiming the work, writing, ideas, or concepts of others as one’s own without crediting the original author. Some might argue, then, that ChatGPT, DALL-E, Midjourney, etc. are not someone. They are programs, not people. Thus, one is not taking the work of another, as there is no other person. (Although it is worth noting that the academics who credited ChatGPT avoid this issue. Nonetheless, their behavior is still problematic, as I will explain later.)

There are at least three problems with this defense, however. The first is that it seems deliberately obtuse regarding the definition of plagiarism. The dishonesty comes from claiming work that you did not perform as your own. Even though GenAI is not a person, its work is not your work – so using it still involves acting deceptively, as Richard Gibson writes.

Second, as Daniel Burkett argues, it is unclear that there is any justice-based consideration which supports not giving AI credit for their work. So, the “no person, no problem” idea seems to miss the mark. There’s a case to be made that GenAIs do, indeed, deserve recognition despite not being human.

The third problem, however, dovetails with this point. I am not certain that credit for the output of GenAIs stops with the AI and the team that programmed it. Specifically, I want to sketch out the beginnings of an argument that many individuals have proper grounds to make a claim for at least partial ownership of the output of GenAI – namely, those who created the content which was used to “teach” the GenAI. While I cannot fully defend this claim here, we can still consider the basic points in its support.

To make the justification for my claim clear, we must first discuss how GenAI works. It is worth noting, though, that I am not a computer scientist. So, my explanation here may misrepresent some of the finer details.

GenAIs are programs that are capable of, well, generating content. They can perform tasks that involve creating text, images, audio, and video. GenAI learns to generate content by being fed large amounts of information, known as a data set. Some systems are trained in a semi-supervised way: they first learn categories from a labeled data set and then characterize unlabeled data based on what they learned, and this ability to characterize unlabeled data is part of how GenAIs create new content based on user requests. Large language models (LLMs) (i.e., text GenAI like ChatGPT) learn largely in a self-supervised way, from vast quantities of unlabeled text, and are then refined with smaller amounts of human feedback. According to OpenAI, their GPT models are trained, in part, using text scraped from the internet. When creating output, GenAIs predict what is likely to occur next given the statistical model generated by the data they were previously fed.

This is most easily understood with generative language models like ChatGPT. When you provide a prompt to ChatGPT, it crafts its response by applying the patterns of text it learned from the portions of its data set relevant to your request. It then outputs a body of text where each word is the one judged statistically most likely to occur, given the words that came before and the patterns observed in its data set. This process is not limited to LLMs – GenAIs that produce audio learn patterns from data sets of sound and predict which sound is likely to come next, those that produce images learn from sets of images and predict which pixels are likely to appear, and so on.
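As an illustration of this “predict the next word from observed patterns” idea, here is a toy sketch. A real LLM uses a neural network trained on billions of documents rather than word-pair counts over a few sentences, but the underlying principle – sample a statistically likely continuation given what came before – is the same.

```python
# Toy next-word prediction: a bigram model built from word-pair counts.
# Illustration only; this is not how ChatGPT is actually implemented.
import random
from collections import defaultdict, Counter

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the cat sat on the mat . the dog"
```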

GenAI’s reliance on data sets is important to emphasize. These sets are incredibly large. GPT-3, the model that underpins ChatGPT, was trained on roughly 40 terabytes of text – on the order of trillions of words. These texts include Wikipedia, online collections of books, and other internet content. Midjourney, Stable Diffusion, and DreamUp – all image GenAIs – were trained on LAION, which was created by gathering images from the internet. The essential takeaway here is that GenAIs are trained on the work of countless creators, be they the authors of Wikipedia articles, digital artists, or composers. Their work was pulled from the internet and put into these data sets without consent or compensation.

On any plausible theory of property, the act of creating an object or work gives one ownership of it. In perhaps the most famous account of the acquisition of property, John Locke argues that one acquires a previously unowned thing by laboring on it. We own ourselves, Locke argues, and our labor is a product of our bodies. So, when we work on something, we mix part of ourselves with it, granting us ownership over it. When data sets are compiled by, say, scraping the internet, they take in works created by individuals – works owned by their creators – and those data sets are then used to teach GenAI how to produce content. Thus, it seems that works which the programmers or owners of GenAI do not own are essential ingredients in GenAI’s output.

Given this, who can we judge as the rightful owners of what GenAI produces? The first and obvious answer is those who program the AI, or the companies that reached contractual agreements with programmers to produce them. The second and more hidden party is those whose work was compiled into the data sets, labeled or unlabeled, which were used to teach the GenAI. Without either component, programs like ChatGPT could not produce the content we see at the quality and pace which they do. To continue to use Locke’s language, the labor of both parties is mixed in to form the end result. Thus, both the creators of the program and the creators of the data seem to have at least a partial ownership claim over the product.

Of course, one might object that the creators of the content that forms the data sets fed to a GenAI gave tacit consent, because they placed their work on the internet. Any information put onto the internet is made public and is free for anyone to use as they see fit, provided they do not steal it. But this response seems short-sighted. GenAI is a relatively new phenomenon, at least in terms of public awareness. The creators of the content used to teach GenAI surely were not aware of this potential when they uploaded their content online. Thus, it is unclear how they could consent, even tacitly, to their work being used to teach GenAI.

Further, one could argue that my account has an absurd implication for learning. Specifically, one might argue that, on my view, whenever material is used for teaching, those who produced the original material would have an ownership claim on the content created by those who learn from it. Suppose, for instance, I wrote an essay which I assigned to my students advising them on how to write philosophy. This essay is something I own. However, it shapes my students’ understanding in a way that affects their future work. But surely this does not mean I have a partial ownership claim to any essays which they write. One might argue my account implies this, and so should be rejected.

This point fails to appreciate a significant difference between human and GenAI learning. Recall that GenAI produces new content through statistical models – it determines which words, notes, pixels, etc. are most likely to follow given the previous contents. In this way, its output is wholly determined by the input it receives. As a result, GenAI, at least currently, seems to lack the kind of spontaneity and creativity that human learners and creators have (a matter D’Arcy Blaxwell demonstrates the troubling implications of here). Thus, it does not seem that the contents human learners consume generate ownership claims on their output in the same way as GenAI outputs.

I began this account by reflecting on GenAI’s relationship to plagiarism and honesty. With the analysis of who has a claim to ownership of the products created by GenAI in hand, we can more clearly see what the problem with using these programs in one’s work is. Even those who attempt to give credit to the program, like the academics who listed ChatGPT as a co-author, are missing something fundamentally important. The creators of the work that make up the datasets AI learned on ought to be credited; their labor was essential in what the GenAI produced. Thus, they ought to be seen as part owner of that output. In this way, leaning on GenAI in one’s own work is an order of magnitude worse than standard forms of plagiarism. Rather than taking the credit for the work of a small number of individuals, claiming the output of GenAI as one’s own fails to properly credit hundreds, if not thousands, of creators for their work, thoughts, and efforts.

Further still, this analysis enables us to see the moral push behind the claims made by the members of SAG-AFTRA and the WGA who are striking, in part, out of concern for AI learning from their likenesses and work to mass-produce content for studios. Or consider The New York Times’ ongoing conflict with OpenAI. Any AI which would be trained to write scripts, generate an acting performance, or relay the news would undoubtedly be trained on someone else’s work. Without an agreement in place, practices like these may be tantamount to theft.

ChatGPT and the Challenge of Critical (Un)Thinking

photograph of statue of thinking man

For the past few weeks there has been growing interest in ChatGPT, the new artificial intelligence language model that was “programmed to communicate with people and provide helpful responses.” I was one of the curious who had to try it and figure out why everyone was talking about it.

Artificial intelligence is not a new thing; as an idea it has been around for decades, since it was first introduced in 1950 by Alan Turing, the British mathematician who is generally considered to be the father of computer science. Later on, in 1956, John McCarthy coined the term “artificial intelligence” at a conference, giving birth to a new field of study. Today it is everywhere; we use it even without knowing, and advancements in the area create entirely new fields of inquiry, bringing along new ethical dilemmas – from the question of what (if any) moral rights to attribute to A.I., to the design of new digital rights that encompass different milieus and that have political and legal consequences. See, for instance, the European Union’s attempts since 2021 to create a legal framework regarding the rights and regulations of AI for its use on the continent.

ChatGPT is something unique – at least for now. While a recent development, it seems almost too familiar – as if it were always there, just waiting to be invented. It is a Google search on steroids, with much more complexity in its answers and a “human” touch. Once you read the answers to your questions, what calls your attention is not only how fast the answer is provided, but also how detailed it seems to be. It mimics pretty well our ways of thinking and communicating with others. See, for instance, what happened when staff members at Vanderbilt University used it to write an email responding to the shooting at Michigan State – a well-written 297-word missive which might otherwise have been well received. However, a line at the bottom of the email – “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023” – outraged the community. The Associate Dean of the institution soon apologized, saying that the use of the AI-written email contradicted the values of the institution. This is one (of no doubt many) examples of how the use of this technology may disrupt our social and cultural fabric. This new tool brings new challenges, not only for education – how students and professors incorporate this technology into their practices – but also for ethics.

Contemporary models of education still rely heavily on regular evaluation – a common mission across educational institutions is to foster critical thinking and contribute to the development of active and responsible citizens. Why is critical thinking so valued? Because being reflective – thinking about the reasons why you act and think the way you do – is necessary for fully participating in our social world. Learning is a process through which we form our judgment and, in doing so, build our moral identities – who we are and what we value. To judge something is not as easy as it may initially seem, for it forces each of us to confront our prejudices, compare them to reality – the set of facts common to all of us, what the world is made of – and take a stand. This process also moves us from an inner monologue with ourselves to a dialogue with others.

What happens when students rely more and more on ChatGPT to do their homework, to write their essays and to construct their papers? What happens when professors use it to write their papers or books or when deans of universities, like the example mentioned above, use it to write their correspondence? One could say that ChatGPT does not change, in essence, the practices already in place today, given the internet and all the search engines. But insofar as ChatGPT is superior in mimicking the human voice, might its greatest danger lie in fostering laziness? And shouldn’t we consider this laziness a moral vice?

In the Vanderbilt case, what shocked the community was the lack of empathy. After all, delegating this task to AI could be interpreted as “pretending to care” while fooling the audience. To many it seemed a careless shortcut taken for time’s sake. Surely it shows poor judgment; it just feels wrong. It seems to betray a lack of commitment to the purpose of education – the dedication to examine and think critically. In this particular context, technological innovation appears as nothing more than a privileged means to erode the very thing it was supposed to contribute to, namely, thoughtful reflection.

While technologies tend to make our lives much more comfortable and easier, it’s worth remembering that technologies are a means to something. As Heidegger pointed out in an emblematic text entitled “The Question Concerning Technology” (1954), we tend to let ourselves be charmed and hypnotized by technology’s power while forgetting the vital question of purpose – not the purpose of technology but the purpose of our lives, as humans. And while ChatGPT may be great for providing context and references on virtually any topic of research, we cannot forget that the experience of conscious thinking is what makes us uniquely human. Despite all appearances of coherent and well-ordered prose, ChatGPT is only mirroring what we, humans, think. It still does not have, nor can it mimic, one thing: our emotions and our ability to respond in a singular manner to specific situations.

If we generalize and naturalize the use of this kind of technology, incorporating it into our daily lives, aren’t we choosing non-thinking, trading reflection for an instantaneous response that serves a strictly utilitarian purpose? Heidegger says that “technology is a mode of revealing,” insofar as what we choose (or do not choose) reveals the ways in which we are framing our world. And if we choose not to think – believing that something else can “mirror” our possible thought – aren’t we abdicating our moral autonomy, suspending the human task of reflecting, comparing, and judging, and instead embracing a “dogmatic” product of a technological medium?