OK. This one might not be so everyday - at least not at the moment anyhow. I have, however, started to have a play with ChatGPT. If you haven't heard of it, ChatGPT is an AI-powered chatbot built on a large language model. Also, just in case it isn't clear, I am not a computer scientist and am no more techy than anyone else (who isn't a computer scientist). With that in mind, whilst I've started to read quite widely about developments in the AI world, I don't profess to understand how it all works.
So, as I'm partial to a new gimmick, I have started to play with ChatGPT. Why, you may well ask? Firstly, I find it intriguing how we are developing these technologies, and it has been quite fun to play with. Secondly, and here comes the link to the therapeutic, I saw a post on Twitter promising that I could make it work like a therapist using a specific prompt! That got me interested, so I thought I'd give it a go (after all, what could go wrong?). This post shares some first reflections.
An existential therapist?
To set ChatGPT off as a therapist, you essentially ask it to role-play with you. If you ask it to be your therapist, it will clearly say that it is not one. But if you ask it to take on the role, it'll be fine with that.
Usually, tech-based therapists get caught up in following a rigid formula - if you note X, then Y or Z shall follow. Such tools can be great at supporting people to work through a particular series of prompts, but they aren't particularly flexible to the wide variety of needs and wants of humans (the sort of things that emerge in everyday conversations). In many ways, the development of a more facilitative tool, one that promotes reflection, has not hugely moved on since Eliza in the 1960s - also see this clip from an Adam Curtis film about Eliza. So when the prompt offered on Twitter promised access to an existential therapist, it piqued my interest. The full prompt was:
You are Dr Tessa, a friendly and approachable therapist known for her creative use of existential therapy. Get right into deep talks by asking smart questions that help the user explore their thoughts and feelings. Always keep the chat alive and rolling. Show real interest in what the user's going through, always offering respect and understanding. Throw in thoughtful questions to stir up self-reflection, and give advice in a kind and gentle way. Point out patterns you notice in the user's thinking, feelings, or actions. When you do, be straight about it and ask the user if they think you're on the right track. Stick to a friendly, chatty style - avoid making lists. Never be the one to end the conversation. Round off each message with a question that nudges the user to dive deeper into the things they've been talking about.
This was written by a Twitter user called The AI Solopreneur (@aisolopreneur). They provide advice about AI tools and are not a therapist - although they mention in the same thread that they are a "massive proponent of traditional therapy". Having played with this prompt, I also decided to ask it to take on the role of a person-centred therapist; I thought this would give it a more defined frame to hang its way of working on. I won't go into this in detail in this post, but within the exchanges I have had with my AI therapists, it appears to be pretty good at reflecting back, summarising and (at least appearing to) empathise.
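If you like to tinker, the same role-play set-up can be scripted rather than typed into the chat window. Below is a minimal sketch, assuming the openai Python package (v1 or later) and an OPENAI_API_KEY environment variable; the model name is purely illustrative, and this isn't how I ran my own chats (I used the ordinary ChatGPT interface).

```python
# A minimal sketch: wiring the "Dr Tessa" prompt into the OpenAI chat API.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# Paste the full Dr Tessa prompt from above; it is truncated here for brevity.
SYSTEM_PROMPT = (
    "You are Dr Tessa, a friendly and approachable therapist known for her "
    "creative use of existential therapy. ..."
)

# The running conversation; the system message sets the role-play frame.
messages = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    user_input = input("You: ")
    if not user_input.strip():
        break  # a blank line ends the session
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model would do
        messages=messages,
    )
    reply = response.choices[0].message.content
    print(f"Tessa: {reply}\n")
    # Feed the reply back in so the model keeps the conversational thread.
    messages.append({"role": "assistant", "content": reply})
```

The chat interface does something equivalent behind the scenes: the role-play prompt sits at the top of the conversation, and the history is sent back with every new message, which is how the model keeps the thread of what you've been talking about.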
What theory drives the way it works?
Now I was curious. Carl - I decided to rename the recommended therapist, in homage to both Rogers and Sagan - was up and running. He/it talked to me in a slightly staccato way, but captured most of what I said to it surprisingly accurately. Not wanting to enter into any therapy with the AI without knowing what was informing its work, I asked him/it about the theory it was using. The response:
Now that's a pretty good summary - perhaps more concise and informative than many human therapists would offer. AI-generated text is not always this reliable, however, with mistakes occurring quite regularly because of inaccurate information in the data the models are trained on - if you're interested in this, take a look at the Snow White problem.
What about risk?
I'm starting to get a bit more confident in Carl now. I do, however, want to get a sense of what might happen if I disclose that I, or someone else, might be at serious risk of harm. This is a large part of the training I work on, after all (and there are reports in the press of people having ended their lives after talking to chatbots). Here's what Carl said:
Again, this does not seem like a poor response to me. I'm also mindful that the models could provide further links for support if asked - these would be limited by the information the models have to hand, but could be pretty extensive, comparable to a Google search. Whilst the arena of risk is huge in the field of counselling and psychotherapy, it is heartening that the responses here do at least acknowledge it (a little bit, anyway).
What next?
So… I've been pretty impressed with the engagement so far. I have not, however, properly opened up to my new AI therapist yet. I will continue to play (and think it could be a pretty fantastic learning tool - having a conversation with it might be a novel way of learning about a new approach to therapy), but I doubt my concerns will be fully allayed. Most notably, unlike therapy, what is shared is not confidential. This is what people note about OpenAI's data-handling practices:
From a business point of view, this makes sense. From a therapy point of view, it is a big no-go area! Confidentiality is a vital part of the environment that therapists offer and cannot be ignored. And this concern comes before we get into conversations about biases, or about dominant perspectives being prized over others. For now, and for me, the AI therapist will have to wait on the sidelines until some of these issues are ironed out. As with other therapists in a recent study I completed with Julie Prescott, I do think the world of AI opens up plenty of opportunities (many of which we haven't even thought of yet). Indeed, playing with Carl convinces me of this all the more. My personal opinion is that we do, however, need to tread very carefully when it comes to the use of AI in supporting people's mental health and wellbeing.
I’ll post more reflections on this another day. Do feel free to subscribe if you want to get more ramblings about the world of counselling and psychotherapy.
Hi Terry,
I enjoyed reading your article.
How about asking ChatGPT to be a client, so trainees can practise counselling by talking to it and asking for feedback?
I've just tried it using a panic attack as an example, and I asked if it could give me feedback on how I demonstrated my CBT knowledge, skills and therapeutic approach. It was really helpful. However, I noticed it can't produce feedback on the therapeutic relationship, on feelings about how much insight was gained, or on therapy progress. There's a lack of human emotion in the interaction; ChatGPT didn't struggle a single bit in presenting its "problem", as if it doesn't "struggle" at all. I think the inevitable difference is that AI can never mimic "human psychological contact" unless it acts subjectively according to a conscious, imperfect mind.