Mar 18, 2024

Could AI Help You Quit an Addiction?


U of T researchers are developing a chatbot to help people stop smoking. One day, it might offer therapy, too

Illustration by Pete Ryan: a man lies on a therapy couch made of speech bubbles, with a cigarette carton underneath.
By Kurt Kleiner for U of T Magazine

An artificial intelligence-driven chatbot being developed by U of T researchers could soon deliver therapy that will help people in Canada quit smoking.

In the medium term, the chatbot could help reduce the 45,000 premature deaths caused by smoking in Canada every year. Eventually, a program like it might provide more extensive forms of talk therapy as well, helping to meet the growing need for accessible therapy.

A 2018 survey found that 44 per cent of Canadians who said they needed mental health care had received either no care or less care than they felt they needed. “There’s such a shortage of help,” observes Jonathan Rose, a professor of electrical and computer engineering at the University of Toronto. “If you could have a good conversation anytime you needed it to help mitigate feelings of anxiety and depression, then that would be a net benefit to humanity and society,” he says.

Rose is working on the chatbot with a diverse team that includes addiction experts, computer engineers and social scientists. They recently reported the results of their work with their “MI Chatbot” in an academic paper in the journal JMIR Mental Health.

The chatbot uses a kind of artificial intelligence called a large language model. The system is trained on enormous amounts of written text, and learns to predict appropriate responses to questions or prompts.
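
To make the idea concrete, here is a minimal sketch (not the team’s actual code) using the open-source Hugging Face transformers library and the publicly released GPT-2 model: given a snippet of conversation, the model simply continues it with the words it predicts are most likely to come next.

```python
# A minimal sketch of how a large language model generates a reply: the model
# has learned, from large amounts of text, to predict likely continuations of
# a prompt. The prompt below is illustrative, and the raw GPT-2 model shown
# here is not the tuned system the researchers built.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Client: I smoke when I'm stressed, but I hate how it makes me feel.\n"
    "Counsellor:"
)

# The model continues the prompt with whatever words it predicts are most
# likely to follow; a chatbot is built by wrapping this prediction step in a
# structured conversation.
reply = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(reply[0]["generated_text"])
```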

The chatbot provides “motivational interviewing” – a kind of talk therapy that is often used to help smokers who have not yet decided to quit. In motivational interviewing, the therapist doesn’t offer advice, but instead asks questions and reflects the patient’s own answers back to them.

“It’s a psychological technique that creates the space for a non-judgmental conversation. It’s creating this safe atmosphere for people to explore their thoughts or feelings,” says Osnat Melamed, one of the researchers working on the chatbot. She is an assistant professor in U of T’s department of family and community medicine and a family and addictions physician at the Centre for Addiction and Mental Health.

The program asks five questions about what the patient likes and dislikes about smoking and what they would like to change, and then, in some versions, paraphrases their answers back to them. To test the bot’s effectiveness, the researchers recruited 349 smokers online and surveyed them on their attitudes to smoking and quitting. Then they had the smokers use one of four versions of the program (one simply asked scripted questions; three used AI to create reflections).
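
A rough sketch of how such a session might run is shown below; the question wording and the reflection step are hypothetical stand-ins, not the study’s code. The scripted version simply asks the questions, while the AI versions add a generated paraphrase after each answer.

```python
# Illustrative sketch of the interview flow described above: five scripted
# questions, each optionally followed by an AI-generated paraphrase
# ("reflection") of the participant's answer. The questions and the
# generate_reflection() helper are hypothetical, not the researchers' code.

QUESTIONS = [
    "What do you like about smoking?",
    "What do you dislike about smoking?",
    "How does smoking fit into your daily routine?",
    "What would you like to change about your smoking?",
    "How confident are you that you could make that change?",
]

def generate_reflection(answer: str) -> str:
    # Placeholder: in the AI versions of the program, a language model would
    # paraphrase the participant's answer back to them at this point.
    return f"It sounds like you're saying: {answer}"

def run_session(use_reflections: bool = True) -> None:
    for question in QUESTIONS:
        print(question)
        answer = input("> ")
        if use_reflections:
            print(generate_reflection(answer))

if __name__ == "__main__":
    run_session()
```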

One week after the session, the researchers gave the participants another survey, and found that the smokers were more confident about their ability to quit – by between 1.0 and 1.3 points on an 11-point scale, depending on the version of the program used.

Each program created some improvement, but the best results came from the versions that used AI to generate reflections based on what the smoker had said. “What we showed in the paper was that a pretty simple conversational agent that just asked some questions and provided some reflective answering to the responses was able to move some smokers, but not all, toward the decision to quit,” Rose says.

Because even the best chatbots don’t “know” the meaning of what they are saying, the researchers had to make sure that the responses made sense and were not potentially harmful. For example, one version of the chatbot suggested a smoker might want to have a cigarette to relieve stress.
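
One way to picture that safeguard is a simple screening step that discards a generated reflection if it appears to encourage smoking and substitutes a neutral, pre-written prompt instead. This is a hedged sketch, not the researchers’ method; the phrase list and fallback text are illustrative only.

```python
# Hedged sketch of a safety screen on generated reflections: because the model
# does not understand what it writes, each reflection is checked before it is
# shown and replaced if it appears to encourage smoking.

DISALLOWED_PHRASES = [
    "have a cigarette",
    "smoking helps",
    "you should smoke",
]

FALLBACK_REFLECTION = "Thank you for sharing that. Tell me more about how that feels."

def is_safe(reflection: str) -> bool:
    text = reflection.lower()
    return not any(phrase in text for phrase in DISALLOWED_PHRASES)

def screened(reflection: str) -> str:
    # Use the generated reflection only if it passes the check; otherwise fall
    # back to a neutral, pre-written prompt.
    return reflection if is_safe(reflection) else FALLBACK_REFLECTION

print(screened("Maybe you could have a cigarette to relieve the stress."))
# Prints the neutral fallback rather than the harmful suggestion.
```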

The chatbot used a program called GPT-2, which OpenAI released more than four years ago. A more advanced version, GPT-4, powers OpenAI’s premium version of ChatGPT. Rose says initial tests using GPT-4 give appropriate reflections 98 per cent of the time, compared with about 70 per cent using the previous version. Nevertheless, there is still work to do before the anti-smoking chatbot can be deployed. Rose wants future versions to be able to have longer and more in-depth conversations.

“I would hope that it gets deployed directly to the public as a public health measure,” says Peter Selby, a co-author of the paper, a professor in U of T’s department of family and community medicine and a senior scientist at CAMH. For instance, it could be provided free via a web page or a downloadable app, with information about how to follow up with a therapist. “Because we know the evidence is that for every two people who we can help quit smoking, one will not die from a smoking-related cause,” he says.

The researchers also need to make sure they have thoroughly considered the chatbot’s potential benefits and harms. “To act ethically, the rewards need to outweigh the risks,” says Rose. “The reward is the potential of an intervention that could reduce the mortality and danger to many more people than is currently possible.”

Experts estimate there are thousands of self-care apps available – many of them powered by AI – for everything from exercise to diet to anxiety.

The increasing number of mental health chatbots raises several ethical and philosophical questions, says Rachel Katz, a PhD student in the history and philosophy of science who is studying the issue. In some cases, companies have been accused of allowing users to think that the AI chatbot is an actual human being, she says.

Even when people know they are talking to a chatbot, companies will sometimes claim that the chatbot can get to “know” each user. “That sort of implies the existence of another mind or another brain on your therapeutic journey with you. And there isn’t,” Katz says. Although she thinks chatbots will likely have some therapeutic uses, she is concerned they could become a cheaper, second-best therapy that substitutes for better human therapy.

Philosophically, Katz is interested in how the therapeutic relationship with a human differs from that with a chatbot. She is exploring the idea that you can have a relationship of trust with a human, but only of reliance on a chatbot. “I characterize this as being like the difference between human interaction and leaning on a brick wall. The wall is there, it can be supportive, it has certain desirable qualities for the patient, but it’s not helping someone stand back up again,” Katz says.

Rose says that an AI capable of replacing human therapists entirely may never be possible. However, he thinks chatbots that provide some kinds of therapy, and that are used in conjunction with real therapists, are promising.

Matt Ratto, a professor at U of T’s Faculty of Information, and a co-author on the chatbot paper, notes that chatbots don’t necessarily have to work in the same way as humans to be effective. “I don’t think the question should be, ‘Can the bot do as well as a human can?’ The real question is, ‘How would the bot do it differently in ways that still have positive clinical impact?’” he says.

Reposted with permission from U of T Magazine.