
What’s Wrong with AI Therapy Bots?


I have a distinct memory from my childhood: I was on a school trip, at what I think was the Ontario Science Centre, and my classmates and I were messing around with a computer terminal. As this was the early-to-mid ’90s, the computer itself was a beige slab with a faded keyboard, letters dulled from the hunt-and-pecking of hundreds of previous children on school trips of their own. There were no graphics, just white text on a black screen, and a flashing rectangle indicating where you were supposed to type.

The program was meant to be an “electronic psychotherapist,” either some version of ELIZA – one of the earliest attempts at what we would now classify as a chatbot – or some equivalent Canadian substitute (“Eh-LIZA”?). After the program started up, it displayed a welcome message and then asked questions – something like “How are you feeling today?” or “What seems to be bothering you?” The rectangle would flash expectantly; the program would store the user’s input in a variable and then spit it back out, often inelegantly, in a way that was meant to mimic the conversation of a therapist and patient. I remember my classmate typing “I think I’m Napoleon” (the best expression of our understanding of mental illness at the time) and the computer replying: “How long have you had the problem I think I’m Napoleon?”
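For the curious, that clumsy reply is roughly what you get from simple keyword matching. Here is a minimal sketch in Python of an ELIZA-style responder; the keyword rules and canned phrases below are hypothetical stand-ins, not ELIZA’s actual script, but they show how splicing a user’s whole sentence into a template produces exactly that kind of echo:

```python
# A minimal sketch of an ELIZA-style responder. The rules below are
# hypothetical illustrations, not ELIZA's actual script.
RULES = {
    "i think": "How long have you had the problem {}?",
    "i feel": "Why do you feel {}?",
}
FALLBACK = "Please, go on."

def respond(user_input: str) -> str:
    lowered = user_input.lower()
    for keyword, template in RULES.items():
        if keyword in lowered:
            # Naively splice the user's whole sentence into the template,
            # which is how "I think I'm Napoleon" becomes
            # "How long have you had the problem I think I'm Napoleon?"
            return template.format(user_input.rstrip(".!?"))
    return FALLBACK

print(respond("I think I'm Napoleon"))
# -> How long have you had the problem I think I'm Napoleon?
```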

30-ish years later, I receive a notification on my phone: “Hey Ken, do you want to see something adorable?” It’s from an app called WoeBot, and I’ve been ignoring it. WoeBot is one of several new chatbot therapists that tout themselves as “driven by AI”: this particular app claims to sit at the intersection of several different types of therapy – cognitive behavioral therapy, interpersonal psychotherapy, and dialectical behavior therapy, according to their website – and AI powered by natural language processing. At the moment, it’s trying to cheer me up by showing me a gif of a kitten.

Inspired by (or worried they’ll get left behind by) programs like ChatGPT, tech companies have been champing at the bit to create their own AI programs that produce natural-sounding text. The lucrative world of self-help and mental well-being seems like a natural fit for such products, and many claim to solve a longstanding problem in the world of mental healthcare: namely, that while human therapists are expensive and busy, AI therapists are cheap and available whenever you need them. In addition to WoeBot, there’s Wysa – also installed on my phone, and also trying to get my attention – Youper, Fingerprint for Success, and Koko, which recently got into hot water by failing to disclose to its userbase that they were not, in fact, chatting with a human therapist.

Despite having read reports that people have found AI therapy bots to be genuinely helpful, I was skeptical. But I attempted to keep an open mind, and downloaded both WoeBot and Wysa to see what all the fuss was about. After using them for a month, I’ve found them to be very similar: they both “check in” at prescribed times throughout the day, attempt to start up a conversation about any issues that I’ve previously said I wanted to address, and recommend various exercises that will be familiar to those who have ever done any cognitive behavioral therapy. They both offer the option to connect to real therapists (for a price, of course), and perhaps in response to the Koko debacle, neither hides the fact that they are programs (often annoyingly so: WoeBot is constantly talking about how its friends are other electronics, a schtick that got tired almost immediately).

It’s been an odd experience. The apps send me messages saying that they’re proud of me for doing good work, that they’re sorry if I didn’t find a session to be particularly useful, and that they know that keeping up with therapy can be difficult. But, of course, they’re not proud of me, or sorry, and they don’t know anything. At times their messages are difficult to distinguish from those of a real therapist; at others, they don’t properly parse my input, and respond with messages not unlike “How long have you had the problem I think I’m Napoleon?” If there is any therapeutic value in the suspension of disbelief, then it often does not last long.

But apart from a sense of weirdness and the occasional annoyances, are there any ethical concerns surrounding the use of AI therapy chatbots?

There is clearly potential for them to be beneficial: your stock model AI therapist is free, and the therapies that they draw their exercises from are often well-tested in the offline world. A little program that reminds you to take deep breaths when you’re feeling stressed out seems all well and good, so long as it’s obvious that it’s not a real person on the other side.

Whether you think the hype about new AI technology is warranted or not will likely impact your feelings about the new therapy chatbots. Techno-optimists will emphasize the benefit of expanding care to many more people than could be reached through other means. Those who are skeptical of the hype, however, are likely to think that spending so much money on unproven tech is a poor use of resources: instead of sinking billions into competing chatbots, maybe that money could be spent on helping a wider range of people access traditional mental health resources.

There are also concerns about the ability of AI-driven text generators to go off the rails. Microsoft’s recent experiment with their new AI-powered Bing search had an inauspicious debut, occasionally spouting nonsense and even threatening users. It’s not hard to imagine the harm such unpredictable outputs could cause for someone who relied heavily on their AI therapy bot. Of course, true believers in the new AI revolution will dismiss these worries as growing pains that inevitably come along with the use of any new tech.

What is perhaps troubling is that the apps themselves walk a tightrope between trying to be a sympathetic ear and reminding you that they’re just bots. The makers of WoeBot recently released research suggesting that users feel a “bond” with the app, similar to the kind of bond they might feel with a human therapist. This is clearly an intentional choice on the part of the creators, but it brings with it some potential pitfalls.

For example, although the apps I’ve tried have never threatened me, they have occasionally come off as cold and uninterested. During a recent check-in, Wysa asked me to tell it what was bothering me that morning. It turned out to be a lot (the previous few days hadn’t been great). But after typing it all out and sending it along, Wysa quickly cut the conversation short, saying that it seemed like I didn’t want to engage at the moment. I felt rejected. And then I felt stupid that I felt rejected, because there was nothing that was actually rejecting me. Instead of feeling better by letting it all out, I felt worse.

In using the apps I’m reminded of a thought experiment from philosopher Hilary Putnam. He asks us to consider an ant on a beach that, through its search for food and random wanderings, happens to trace out what looks to be a line drawing of Winston Churchill. It is not, however, a picture of Churchill, and the ant did not draw it, at least in the way that you or I might. However, at the end of the day, a portrait of Winston Churchill consists of a series of marks on a page (or on a beach), so what, asks Putnam, is the relevant difference between those made by the ant and those made by a person?

His answer is that only the latter are made intentionally, and it is the underlying intention which gives the marks their meaning. WoeBot and Wysa and other AI-powered programs often string together words in ways that look indistinguishable from those that might be written down by a human being on the other side. But there is no intentionality, and without intentionality there is no genuine empathy or concern or encouragement behind the words. They are just marks on a screen that happen to have the same shape as something meaningful.

There is, of course, a necessary kind of disingenuousness that must exist for these bots to have any effect at all. No one is going to feel encouraged to engage with a program that explicitly reminds you that it does not care about you because it does not have the capacity to care. AI therapy requires that you play along. But I quickly got tired of playing make-believe with my therapy bots, and overall it has become increasingly difficult for me to find the value in this kind of ersatz therapy.

I can report one concrete instance in which using an AI therapy bot did seem genuinely helpful. It was guiding me through an exercise, the culmination of which was to get me to pretend as though I were evaluating my own situation as that of a friend, and to consider what I would say to them. It’s an exercise that is frequently used in cognitive behavioral therapy, but one that’s easy to forget to do. In this way, the app’s check-in did, in fact, help: I wouldn’t have been as sympathetic to myself had it not reminded me to be. But I can’t help but think that if that’s where the benefits of these apps lie – in presenting tried-and-tested exercises from various therapies and reminding you to do them – then the whole thing is over-engineered. If it can’t talk or understand or empathize like a human, then there seems to be little point in having any artificial intelligence in there at all.

AI therapy bots are still new, and so it remains to be seen whether they will have a lasting impact or just be a flash in the pan. Whatever does end up happening, though, it’s worth considering whether we would even want the promise of AI-powered therapy to come true.

Re-evaluating Addiction: The Immoral Moralizing of Alcoholics Anonymous


As of 2019, Alcoholics Anonymous boasts more than 2 million members across 150 countries, making it the most widely implemented form of addiction treatment worldwide. The 12-step program has become ubiquitous within medical science and popular culture alike, to the extent that most of us take its potency for granted. According to Eoin F. Cannon’s The Saloon and the Mission: Addiction, Conversion, and the Politics of Redemption in American Culture, A.A. has “spread its ideas and its phraseology as a natural language of recovery, rather than as a framework with an institutional history and a cultural genealogy . . . A.A.’s deep cultural penetration is most evident in the way the recovery story it fostered can convey intensely personal, experiential truth, largely free from the implications of persuasion or imitation that attached to its precursors.” And yet medical science continues to debate how effective A.A. is – or whether it is effective at all. Critics have pointed out that the organization’s moral approach to suffering and redemption leaves much to be desired for many addicts.

It’s worth beginning with a basic overview of the social and historical context of A.A. The organization has its roots in the Oxford Group, a fellowship of Christian evangelical ministers who believed in the value of confession for alleviating the inherent sinfulness of humanity. Bill Wilson, who would go on to co-found Alcoholics Anonymous in 1935, was a member of this group, and based many of the founding principles of his organization on the teachings of the Oxford Group. A.A. was also rooted in a much broader historical moment. As Cannon explains, “A.A. embraced the disease concept of alcoholism in an era of rising medical authority and popular psychology. It formulated a spirituality that used the language of traditional Christian piety but was personal and pragmatic enough to sit comfortably with postwar prosperity.” Also crucial were “the evangelical energies and professional expertise of its early members, many of whom were experienced in marketing and public relations.” A.A.’s marketing was so effective at embedding the organization in popular culture that virtually all depictions of addiction and recovery have been colored by the 12-step approach, even into the 21st century.

Furthermore, the Great Depression was ending as the group achieved national prominence, and its philosophy was closely aligned with that of the New Deal. As Cannon explains, the pain of the economic crisis (which was characterized by contemporaries as a kind of drunken irresponsibility) was transformed into an opportunity for a moral makeover, a narrative pushed by FDR and the New Deal that A.A. either seized upon or unconsciously imitated. Cannon writes that “recovering narrators described their experiences of decline and crisis [in terms that] drew on the same kind of social material that, writ large, defined the national problem: the bewildering failure of self-reliant individualism, as evidenced in job loss, privation, and family trauma. A.A. narrative, just like FDR’s New Deal story, interpreted this failure as a hard-earned lesson about the limits of self-interest.” In this sense, A.A. is hardly apolitical or ahistorical. It was forged by the political and economic currents of the early 20th century, and its ascendancy was neither natural nor inevitable.

The spiritual dimension of A.A. is impossible to ignore. The Oxford Group’s foundational influence is evident in the famous 12-step program: for example, steps 2 and 3 read,

“2. Came to believe that a Power greater than ourselves could restore us to sanity.

3. Made a decision to turn our will and our lives over to the care of God as we understood Him.”

The final step, “Having had a spiritual awakening as the result of these Steps, we tried to carry this message to alcoholics, and to practice these principles in all our affairs,” sounds like a call for religious conversion. Most would agree that medical treatment should be secular, so why is alcoholism an exception?

Furthermore, an emphasis on spirituality doesn’t necessarily make addiction treatment more effective. A 2007 study published in the Journal for the Scientific Study of Religion acknowledges that “Studies focusing on religiosity as a protective factor tend to show a weak to moderate relationship to substance use and dependence . . . Studies that have examined religiosity as a personal resource in treatment recovery also tend to report weak to moderate correlations with treatment.” However, the 2007 study takes issue with this data. The researchers argue that most previous studies rely “on the assumption that religiosity, although an outcome of socialization, is an internal attribute that functions as a resource to promote conventional behavior . . . An alternative model to this individualistic, psychological framework is a sociological model where religion is viewed as an attribute of a social group.” In other words, we focus too much on how religion functions for individuals instead of how religion functions in a social context.

This study instead uses the “moral community” hypothesis, first articulated by sociologists Stark and Bainbridge, as a framework for understanding addiction treatment. This theory argues that individual interactions with religion (how much importance you place on it or which specific beliefs you subscribe to) are not as important as your entrenchment in a religious community, which is the ultimate predictor of long-term commitment to and effectiveness of treatment. The results of the 2007 study seem to support this idea; the data “revealed that an increase in church attendance and involvement in self-help groups were better predictors of posttreatment alcohol and drug use than the measure of individual religiosity.” The study found that “individuals with higher levels of religiosity tended to have higher levels of commitment” to A.A., but more broadly, “in some programs religiosity functioned as a positive resource whereas in other programs it served as a hindrance to recovery.” In other words, religion isn’t universally helpful; its value depends on the person and how easily they can assimilate into a moral community. Perhaps those who have already incorporated organized religion into their lives will be better equipped for group participation in the context of addiction recovery. What all of this seems to suggest is that A.A. is only effective if you’re already receptive to its framework, which hardly makes it a cure-all for alcoholism. Non-Christians and atheists who drink are more or less left out in the cold.

In fact, very few studies conclusively support A.A. as the best or only treatment plan for alcoholism. As writer Gabrielle Glaser pointed out in an article for The Atlantic, “Alcoholics Anonymous is famously difficult to study. By necessity, it keeps no records of who attends meetings; members come and go and are, of course, anonymous. No conclusive data exist on how well it works.” The few studies that have tested A.A.’s effectiveness tend to find less than impressive results. For example, psychiatrist Lance Dodes estimated in his 2015 book The Sober Truth: Debunking the Bad Science Behind 12-Step Programs and the Rehab Industry that the actual rate of success for the A.A. program is somewhere between 5 and 8 percent, based on retention rates and long-term commitment. As of 2017, there were 275 research centers worldwide devoted to studying alcohol addiction. The majority of research is conducted in multi-disciplinary research institutions, and nearly half of all research on alcoholism comes out of the U.S., which, given how prominent the A.A. approach is there, may skew which facets of addiction are given attention by researchers.

Despite a dearth of proof, A.A. claims to have a 75 percent success rate. According to the movement’s urtext Alcoholics Anonymous: The Story of How Many Thousands of Men and Women Have Recovered from Alcoholism (affectionately referred to as “The Big Book” by seasoned A.A. members),

“Rarely have we seen a person fail who has thoroughly followed our path. Those who do not recover are people who cannot or will not completely give themselves to this simple program, usually men and women who are constitutionally incapable of being honest with themselves . . . They are not at fault; they seem to have been born that way.”

While alcoholism can have a genetic component, the idea that some people are simply doomed to be incurable because of the way they were born (or that any treatment plan for addiction can be “simple”) is deeply troubling. Reading this passage from the Big Book, one can’t help but notice a parallel to early 20th-century eugenicists like Walter Wasson, who argued in 1913 that alcoholics (whom he labeled “mental defectives”) should be “segregated and prevented from having children” so as not to pass down their condition and further pollute the gene pool. Eugenicists believed that alcoholism was incurable, and while A.A. ostensibly holds that it can be cured, the organization still maintains that some people are genetically destined to always drink. If its treatment plan doesn’t work for you, that failure is simply your own fault, and you’ll never be able to get help at all.

Since its Depression-era inception, A.A. has relied on a moral framework that places blame on the individual rather than society at large. Alcoholism is understood as an innate failure of the individual, not a complex condition brought about by a number of economic, social, and genetic factors. As one former A.A. member explained,

“The AA programme makes absolutely no distinction between thoughts and feelings – a key factor in cognitive behavioural therapy, which is arguably a more up-to-date form of mental health technology. Instead, in AA, alcoholism is caused by ‘defects of character,’ which can only be taken away by surrender to a higher power. So, in many ways, it’s a movement based on emotional subjugation . . . anything you achieve in AA is through God’s will rather than your own. You have no control over your life, but the higher power does.”

Many individuals have found comfort and support in A.A., but it seems that the kind of moral community it offers is only accessible to those with a religious bent and predisposition to the treatment plan. For those who drink to escape crushing poverty, racial inequality, or the drudgery of capitalism, A.A. often offers pseudoscience instead of results, moralizing condemnation instead of medical treatment and genuine understanding.

A New Approach to Pedophilia

Few crimes are as stigmatized as those that stem from pedophilia. Pervasive tropes of the pedophile as the serial child abuser, shadily lurking in public parks, have worked to demonize the mindset to a degree rarely seen with other crimes. This stigma is so strong that, in states like California, therapists must report clients who admit to having viewed child pornography, even when those clients are actively seeking treatment. Such measures may seem like a strong stand against pedophilia, a mindset that contributes to the abuse and exploitation of thousands of children. But is criminalizing pedophilia in this manner effective?
