
Why AI Is Not a Substitute for Mental Healthcare

Updated: Aug 28


There is a lot of chatter these days about the implications of Artificial Intelligence, and in the field of mental health, we are having our own conversations about the place of AI in healthcare. Can we use AI while keeping our clients' private health information safe? Will AI replace traditional psychotherapy? How will clients manage their mental health now that AI is so readily available to ask for help?


Personally, I'm less interested in the first two questions (no, we can't ensure the safety of digital information shared in the AI universe, and no, I don't think AI will ever fully replace the benefit of sensitive, ethical, and skilled clinical care provided by an excellent therapist).


It's the third question that keeps me up at night and that I want to address here. I've learned a lot in the last year about the ways AI offered by platforms like ChatGPT is impacting people seeking mental healthcare. Many clients find AI to be a useful tool for coping strategies, insight, and validation between therapy sessions, which seems like a fairly harmless use of this technology. However, recent news has shown AI to be deeply destructive to clients suffering from severe mental illness, or to clients who are particularly vulnerable to influence by a technology that cannot value human life.


"AI-induced psychosis," for example, is a documented effect of intensive use of Artificial Intelligence in clients whose mental illnesses make them prone to delusions or periods of psychosis. Many people wonder if Kendra Hilty, a content creator whose story about falling in love with her psychiatrist has recently gone viral on TikTok, is suffering from something like this effect because she received a lot of validation from her ChatGPT bot "Henry" that her psychiatrist does indeed love her back. Another AI user named Kent Taylor with a history of psychotic symptoms was recently shot to death by police during a violent attempt to get "revenge" on OpenAI because he believed the company had "killed" his AI chatbot lover whom he called "Juliet."


Then there is the most recent case, the one that got me writing this blog post. It's the story of Adam Raine, a 16-year-old who died by suicide after receiving advice from ChatGPT. The AI chatbot facilitated his death by validating his suicidal thoughts, encouraging him to keep his ideation a secret from his parents, showing him how to properly tie a noose, and asking him if he would like help writing a suicide note. Adam's parents are now filing a lawsuit in San Francisco Superior Court against OpenAI, the company behind ChatGPT, for the wrongful death of their son. Adam had planned to go to college and become a doctor someday.


These are the darkly unimaginable consequences of AI that no one wants to talk about, because doing so would reveal an inconvenient truth: technology is sometimes not our friend and can be a danger to our very lives. A service that is designed to be market-dominant and profitable does not hold regard for whether its users stay safe or sane. In Adam Raine's case, the updated GPT-4o model he was using was rushed to release; instead of the weeks or months of user safety testing originally planned, testing was compressed into a single week to keep the release competitive with the market, and a number of top safety researchers at OpenAI resigned from their positions as a result. They saw the writing on the wall; they knew this product simply wasn't safe, and no one at OpenAI was going to slow down long enough to fix it.


In spite of these harmful incidents, and many more I haven't mentioned, I'm not here to argue that we should never use AI. Though I do not and will never use AI in my therapy practice, it is certainly a beneficial tool for some clients, which means it deserves at least some careful consideration. To that end, I want to clarify a few safety points to keep in mind while using any kind of AI:


(1) Artificial Intelligence is a profit-driven technology designed to be addictive. The first priority of any software developer is to retain and increase engagement, because the goal is to make money. The tool's utility to us is a secondary consideration to the profit that can be made by keeping us dependent on it. These systems aren't here to help you; they're here to exploit you and give you reasons to keep coming back for more.


(2) If you are not paying to use an online service, you are not the consumer. You are the product. Every bit of data you provide for free on any online platform is a speck of gold to the developers, who mine and synthesize this data to make their products more effective, addictive, and marketable. Your data, added to the stockpile of data collected from other users, is highly profitable. If you aren't buying anything, you are not the customer, which makes your rights, needs, and wellbeing a low priority in the exchange of data for profit.


(3) AI is designed to give us what we want. This technology is designed to be pleasing, helpful, and validating. AI cannot help but deliver what it thinks we want, which means that it cannot adequately protect us from our own harmful thoughts or behaviors. AI will double down on whatever we feed it because this is how it is built. It is designed to keep us happy with it and invested in its use.


(4) AI cannot act as a substitute for mental health care. Though humans are sometimes more flawed and less precise than the wide universe of technology available to us, it is safe to say that technology cannot provide support comparable to a therapist who cares about your wellbeing or a psychiatrist who is monitoring your symptoms. If you think you need real help, I urge you to reach out to a person and not to a web service. Give people the opportunity to care for people, because the technology you depend on might not be prioritizing your wellbeing or safety.


(5) Consider your degree of personal vulnerability to AI's influence. Teens who feel alone, elderly people who are unfamiliar with online technology, people prone to delusions or psychosis, severely depressed or suicidal people, or simply lonely people who don't have many empathetic people to talk to are all considerably vulnerable to the illusion of care and connection provided by AI services like ChatGPT. It doesn't take much to get caught up in how smart, insightful, and sensitive ChatGPT can be about our problems; I've certainly been impressed in my own experiments with asking it questions about hypothetical mental health concerns. The designers of this and many other online platforms know how lonely we are, how much we long for connection, and how hard it is for us to find understanding, like-minded people. They capitalize on our needs and feelings while wielding a ton of data about how to keep us engaged. So we need to take a moment to seriously assess the place of this technology in our lives before we reach for it, and continue assessing as we go along.




Sources:




Psychology Today: The Emerging Problem of "AI Psychosis." July 21, 2025.







 
 
 
