I’m a Psychiatrist. Here’s What ChatGPT Can (and Can’t) Do for Your Mental Health

“If I’m not able to reach you, can I ask ChatGPT what to do?”

It’s a question that’s coming up more often in my outpatient psychiatry practice. It’s also a question I regularly consider as I lead work assessing how digital health tools, including large language models (LLMs), shape mental health and well-being.

OpenAI recently reported that ChatGPT has 800 million weekly active users and that the company is working to better recognize and support people in distress. While people most commonly use LLMs for mental health as a non-judgmental ear for emotional support, they’re also using them for self-diagnosis. For instance, when someone has a medical question but needs to wait for a non-urgent appointment, online information could have a role in providing guidance on next steps.

Is it okay to use ChatGPT to fill the gap? Are LLMs accurate or reliable tools for this purpose when it comes to mental health?

[READ: How AI Is Transforming Medicine and Patient Care]

Current Research on AI and Psychiatric Care

The evidence is limited. In medicine, we judge whether digital health tools are safe and effective in clinical practice based on evidence, but there’s little high-quality research to answer that question when it comes to AI’s role in psychiatric care.

For instance, randomized controlled trials are the gold standard for studying new treatments. A randomized controlled trial assigns participants to either a test intervention or a control so outcomes can be compared. However, we still don’t have enough such studies, or even systematic comparisons of how LLMs and psychiatrists align in their responses to real-world mental health questions. Prompts (what people input when they ask an LLM a question) and the research methods used to assess results also vary widely, which is another reason it’s hard to apply existing studies to clinical practice.

One recent study, which looked at how ChatGPT-4o responded to questions about antidepressants, found that its answers were accurate and concise, but that psychiatrists’ responses tended to be clearer. Another recent study, on how commonly used LLMs respond to questions about suicide, found that chatbots aligned with clinical experts on very low-risk and very high-risk questions but were inconsistent on intermediate-risk ones.

[READ: When to See a Psychiatrist.]

The Limits of AI Diagnosis

These studies also reveal the crux of why using LLMs to answer mental health questions is flawed. Clinical recommendations, even for simple questions, depend on individualized context. A person’s medical and psychiatric history and the results of a mental status exam are nuanced. Expert clinicians are formally trained to elicit and assess these findings and to apply evidence-based practice in their decision-making, all of which is essential for effective and safe care.

[READ: What to Do During a Mental Health Crisis]

Risks and Ethical Concerns When Using AI for Mental Health

Another important concern when using AI to answer mental health questions relates to privacy. You can’t ensure that the data you input into an LLM will be deleted; it may be stored or used to further train the model.

HIPAA, the federal law that protects medical privacy, doesn’t cover the sensitive details you might share with a chatbot. The instant gratification that comes from LLMs’ validating responses also raises concerns about reliance, even to the point of dependence.

Building on this, some researchers have described another major risk, sometimes referred to as AI psychosis: cases of psychotic symptoms, such as delusions, triggered by prolonged or immersive use of generative AI. However, we need more research to understand this phenomenon and define it formally. Finally, LLMs can reproduce bias found in the data on which they’re trained, creating disparities in their responses to mental health questions.

[READ: Types of Therapy: Choosing the Right One for You.]

Do’s and Don’ts for Using ChatGPT and LLMs in Mental Health

These limitations highlight a practical truth: LLMs aren’t a substitute for professional care from a psychiatrist or other mental health provider. Other practical do’s and don’ts include:

Do’s for using AI safely

— Do leverage generative AI for general education about mental health topics

— Do pair information from generative AI with expert professional guidance

— Do monitor and set limits on regular usage of AI tools

— Do recognize the risks associated with inputting your personal data

Don’ts: When to avoid using AI for psychiatric care

— Don’t rely on generative AI for diagnosis, crisis management or for responses to questions that could impact your treatment plan

— Don’t look to AI tools as a substitute for personalized care

— Don’t ignore inconsistent or questionable responses from generative AI

If you have a question that could impact your treatment plan, the safest and most effective option is to talk to a licensed clinician. And when people ask me about the use of AI in mental health, I remind them that the best medical care keeps humanism at the center of healing.

I’m a Psychiatrist. Here’s What ChatGPT Can (and Can’t) Do for Your Mental Health originally appeared on usnews.com
