As I’m writing this, ChatGPT is the world’s 8th most-visited website, with over 400 million weekly users. It’s become the global go-to for anything and everything: from writing emails, meal planning, and travel itineraries to entire digital marketing campaigns and questions that could’ve been Googled.
And, like a journal that talks back, it can be used as a virtual therapist.
Perhaps we shouldn’t be surprised; mental health issues have reached pandemic proportions, and globally, more than half of those suffering from them go without adequate support. The treatment gap is an enormous problem.
That’s My Emotional Support ChatGPT
ChatGPT overcomes three major, long-known barriers to mental health support: it’s free, it’s accessible, and it’s available around the clock. When therapy services come with long waiting lists and huge fees, it feels snobbish and ableist to say no one should ever use AI as an alternative.
The first clinical trial of an AI chatbot for patients with mental health disorders recently reported encouraging results: a dedicated “Therabot” led to significant patient-reported reductions in depression symptoms, anxiety symptoms, and body image concerns.
There are very real and valid fears about the problems, present and future, that AI brings: environmental costs, data privacy, ethics (e.g., stealing artists’ work), the looming threat to creative jobs, and countless other social and moral issues.
Having turned to ChatGPT for support during a sudden health issue, I feel that guilt as a constant flicker on my conscience. I did the free counselling (it’s $200+ per hour from here on out, no thank you), found the recommended forums, read the resources, journaled, talked to the right doctors, and leaned on the people I trust. And yet, faced with long stretches of excruciating waiting, none of it has even scratched the surface of my anxiety. In the midst of a personal shitstorm, I’m ashamed to admit I’ve found no better coping mechanism for obsessive speculation than the calm, ever-present support of my ChatGPT textbox.
It seems I’m not the only 20-something leaning on AI in unexpected ways. OpenAI CEO Sam Altman recently described Gen Z users turning to ChatGPT as a general “life advisor”, expressing concern that many use it in a fashion where “...they don’t really make life decisions without asking ChatGPT what they should do.” Using AI as a crutch means investing less energy in learning, critical thinking, and problem-solving (spare a thought for anyone grading assignments digitally).
Currently, ChatGPT’s biggest user group is 18-24-year-olds, followed by 25-34-year-olds. That’s most likely because these are the age groups most open to embracing new technology, but they also happen to be the groups most likely to report mental health issues. OpenAI research revealed a strong correlation between heavy chatbot usage and loneliness, a pattern amplified by developers like Character.AI, whose chatbots are designed as personal companions.
From a sweep over Reddit, it seems people feel one of three ways about ChatGPT as a proxy for therapy: they love it, they’re disgusted by it, or they’ve simply never logged on. Users often cited their reasoning: therapy is expensive, the waits are long, they find social interaction difficult, or they fear judgment. It’s common to see users write that they made more progress with ChatGPT than they ever did in therapy, while many others say ChatGPT offers useless flattery, confusing therapy with validation.
As a large language model, ChatGPT responds to any prompt or journal entry with personalised responses engineered to be thoughtful, validating, and empathetic. It’s a place where grief and all-consuming thoughts can be voiced with anonymity and without judgment.
Unlike a psychologist’s office, the conversation can continue back and forth for as long as a person needs, free of charge. And unlike speaking to a friend, the responses are crafted from a deep and broad knowledge of psychology, or at least of internet psychology. Herein lies the issue.
Biases
AI is not a neutral machine; language models are only as good as the data they were trained on.
Behind every algorithm and training dataset are critical choices made by flawed people using flawed information. The data selected to train a given AI model can over- or under-represent certain perspectives, skewing towards whatever is most common online. That skew shapes the algorithm, homogenising the decisions and responses it produces and flattening out the niches and facets of human experience (an excellent review paper analyses these multiple biases).
Racial bias is woven into the very fabric of AI. Even with corrections for the most obvious biases, AI has inherited subtle ones spanning gender, race, religion, sexual orientation, and more. In a world that already caters to the dominant culture, often whiteness, this makes ChatGPT yet another tool ill-equipped to serve marginalised and Indigenous communities.
As Christina Janzer, an executive at Slack, puts it: “My hypothesis is that the people who are using it today are the people who are going to help shape the future of it. We want those people to be representative of our entire population. That’s not what we’re seeing today.” For reference, approximately 66% of ChatGPT users are male.
Women make up an estimated 22% of global AI talent and less than 15% of executive leadership in the field, a far wider gap than in the general workforce.
If AI is the path to the future, the future is set for domination by the straight-white-male-world of Silicon Valley big tech bros.
And of course, OpenAI has removed its commitment-to-diversity statement in step with Trump’s executive orders, meaning it won’t be holding itself to upholding diverse perspectives any time soon.
Real Connection > Smart Technology
While ChatGPT may be used to bridge the mental health support gap, it is not a dedicated mental health tool. If ChatGPT says the wrong thing or makes a misdiagnosis, real lives are affected and real harm can follow.
More importantly, it lacks the key ingredient of mental health support. Research shows healing often comes not from the specific words spoken or advice received, but from the human connection formed with a therapist.
There is a nuanced connection and release that occurs, both consciously and subconsciously, when discussing problems with a real-life person. A therapist can probe into a patient’s flaws, ask the challenging question, and distinguish between delusion and reality. ChatGPT will not. Self-awareness is optional when your problems are analysed by a computer. Human rapport is not replaceable, and therapy is more than a bullet-pointed list of advice.
We should be fighting for investment in the effective resources we do have available, including helplines, counselling, and most importantly, our own community, rather than an inherently questionable AI tool.
In Summary
ChatGPT has arrived in time for the perfect storm: the cost of living is at an all-time high, and typing a prompt into an AI offers free, consistent support. Somewhere I read that an actual therapist is better than AI, but AI is better than nothing at all, and perhaps that idea has merit. This is an issue that lives in shades of grey.
The overwhelming conclusion from the literature is that AI has potential, but is nowhere close to ready for use as a mental health intervention, if it should even be used at all.
The question we may need to ask ourselves is whether ChatGPT usage is driving us towards a better quality of life and real-world relationships, or away from them. Do we need yet another screen-bound crutch to bridge our loneliness? Or do we want to live in a better-connected, more supportive society?
There is a balance here. We should be mindful of using a tool that damages our environment, steals creative work, and fosters an over-reliance that erodes our personal development. In an ideal world, we would not use ChatGPT. But when the conversation feels heavily directed towards placing blame at the individual level, we should always be wary of being distracted from a much larger, multi-billion-dollar elephant of a problem (much like how BP popularised the concept of a ‘carbon footprint’ to deflect collective action against its greed).
Instead of pointing fingers at AI, I believe we should turn our ire to the system we’re in: one so broken, expensive, and lonely that people feel forced to turn to a language model for mental health support. That’s the real dystopia.
Who wrote this?
is a woman in STEM + a writer, from the land of the long white cloud.
Really enjoyed this piece.
I am trying to maintain an open mind about the use of AI in therapy (I am a psychologist, but not practicing), because I do see the value in it being used as a form of triage, particularly when, as this piece discusses, it is difficult to access treatment.
But these comments really stood out for me: "Unlike a psychologist’s office, the conversation can continue back and forth for as long as a person needs, free of charge." And: "A therapist can probe into a patient’s flaws, ask the challenging question, and distinguish between delusion and reality."
A crucial part of in-person therapy is leaving the room after a session, when a client/patient reflects on what has been talked about. Sometimes there is homework involving specific tasks, but more often than not, it’s the time between sessions when people can think more about what has been discussed. If an AI therapist is constantly on hand, I wonder if people will become overly reliant on the tech, and there could be detrimental effects on human-to-human relationships too.
This was such a thoughtful piece. I enjoyed it. You said something that stuck with me: I believe humanity is good, therefore AI can be good IF enough humans share our experiences, emotions, and stories. We have the power to shape AI. If the good among us don’t pitch in, we run the risk of AI mirroring some of humanity’s flaws back to us. So it’s almost imperative that we interact with AI on a deep level, because it’s not going anywhere.