From falling in love with ChatGPT to deepfakes of deceased loved ones, artificial intelligence’s capacity for influence is vast, and its myriad applications are not yet fully charted. In truth, today’s AI users are pioneering a new and still swiftly developing technological landscape, arguably akin to the birth of social media in the early 2000s.

Yet even amid uncertainty about nascent generative AI’s full potential, people are already turning to it for major life advice. One of the most common ways people use generative AI in 2025, it turns out, is for therapy. But the technology isn’t ready yet.

How people use AI in 2025

As of January 2025, ChatGPT topped Visual Capitalist’s list of the most popular AI tools, with 4.7 billion monthly site visits. That dwarfed the next most popular service, Canva, by more than five to one.

When it comes to understanding AI use, how ChatGPT is being put to work this year is a good starting point. Sam Altman, CEO of ChatGPT maker OpenAI, recently offered some insight into how users in different age groups are making the most of the tool.

“Gross oversimplification, but like older people use ChatGPT as a Google replacement,” Altman said at Sequoia Capital’s AI Ascent event a few weeks ago, as transcribed by Fortune. “Maybe people in their 20s and 30s use it as like a life advisor, and then, like people in college use it as an operating system.”

Life advice is something many AI users are seeking these days. Marc Zao-Sanders, author and co-founder of Filtered.com, recently completed a qualitative study, featured in Harvard Business Review, on how people are using generative AI.

“Therapy/companionship” topped the list as the most common way people are using generative AI, followed by organizing one’s life and then seeking purpose in life. Taken together with Altman’s observations, it seems AI-generated life advice has become an incredibly powerful influence.

A Pew Research Center survey published last month reported that a “vast majority” of surveyed AI experts said people in the United States interact with AI several times a day, if not almost constantly. Around a third of surveyed U.S. adults said they had used a chatbot, such as ChatGPT, before.

Some tech innovators, including a team of Dartmouth researchers, are leaning into the trend.

Therabot, can you treat my anxiety?

Dartmouth researchers have completed a first-of-its-kind clinical trial of a generative AI-powered therapy chatbot. Therabot, delivered through a smartphone app, has been in development since 2019, and its recent trial showed promise.

Just over 100 patients, each experiencing major depressive disorder, generalized anxiety disorder or an eating disorder, participated in the experiment. According to senior study author Nicholas Jacobson, the improvement in each patient’s symptoms was comparable to traditional outpatient therapy.

“There is no replacement for in-person care, but there are nowhere near enough providers to go around,” he told the college. Even Dartmouth’s Therabot researchers, however, said generative AI simply isn’t yet ready to be anyone’s therapist.

“While these results are very promising, no generative AI agent is ready to operate fully autonomously in mental health where there is a very wide range of high-risk scenarios it might encounter,” first study author Michael Heinz told Dartmouth.

“We still need to better understand and quantify the risks associated with generative AI used in mental health contexts.”

Why is AI not ready to be anyone’s therapist?

Ben Bond, a Ph.D. candidate in digital psychiatry at RCSI University of Medicine and Health Sciences, researches how digital tools can be used to benefit or better understand mental health. Writing in The Conversation, Bond broke down how AI therapy tools like Therabot could pose significant risks.

Among those risks, Bond explained, are AI “hallucinations,” known flaws in today’s chatbot services. From citing studies that don’t exist to giving flatly incorrect information, these hallucinations, he said, could be dangerous for people seeking mental health treatment.

“Imagine a chatbot misinterpreting a prompt and validating someone’s plan to self-harm, or offering advice that unintentionally reinforces harmful behaviour,” Bond wrote. “While the studies on Therabot and ChatGPT included safeguards — such as clinical oversight and professional input during development — many commercial AI mental health tools do not offer the same protections.”

According to Michael Best, Ph.D., a psychologist and contributor to Psychology Today, there are other concerns to consider, too.

“Privacy is another pressing concern,” he wrote. “In a traditional setting, confidentiality is protected by professional codes and legal frameworks. But with AI, especially when it’s cloud-based or connected to larger systems, data security becomes far more complex.

“The very vulnerability that makes therapy effective also makes users more susceptible to harm if their data is breached. Just imagine pouring your heart out to what feels like a safe space, only to later find that your words have become part of a data set used for purposes you never agreed to.”

Best added that bias is another significant concern, one that could lead AI therapists to give bad advice.

“AI systems learn from the data they’re trained on, which often reflect societal biases,” he wrote. “If these systems are being used to deliver therapeutic interventions, there’s a risk that they might unintentionally reinforce stereotypes or offer less accurate support to marginalized communities.

“It’s a bit like a mirror that reflects the world not as it should be, but as it has been — skewed by history, inequality, and blind spots.”

Researchers are making progress in improving AI therapy services. In Dartmouth’s Therabot experiment, patients with depression experienced an average 51% reduction in symptoms, and those with anxiety saw an average 31% drop. Patients with eating disorders showed the smallest improvement, but their symptoms still fell by an average of 19%.

It’s possible there’s a future in which artificial intelligence can be trusted to treat mental health conditions, but, according to the experts, we’re just not there yet.
