AI, Mental Health, and the Vitality of Human Connection
As the practice manager at Rooted Compassion, a large part of my work is communicating with people. I answer the phone, reply to email inquiries, and text clients to schedule chats and answer their questions about therapy. I hold these interactions with the utmost care because I understand the vulnerability inherent in asking for help. Even beyond asking for help, in the world we live in, I can appreciate the vulnerability of speaking to a complete stranger. On days I work from home, if the phone doesn't ring, the only living beings I speak with are my partner and my cat, and one of them doesn't talk back. I imagine a similar reality holds for many people, and it can be a difficult one to cope with. Polyvagal theory teaches us how vital connection with others is to our sense of felt safety.
In her article "The Political Economy of Love in Capitalism," Dr. Kristen Ghodsee offers this statistic from a 2025 World Health Organization report: "one in six people globally experience loneliness, a lack of social connection that accounts for almost 900,000 excess deaths per year." This is an epidemic sustained by screens. Life happens through them. For many, a phone is the first thing picked up in the morning and the last thing set down at night.
Furthermore, Ghodsee reminds us how essential attention is to human flourishing: "Our fundamental sense of belonging depends on having access to attentional resources, and our desire for them is so strong that most people prefer negative attention over no attention at all." Connection is defined and redefined by era. For the last twenty years it has looked like humans connecting with humans through social media platforms like Facebook, TikTok, and Instagram. While that is still true today, with the rise of generative AI chatbots like ChatGPT, more and more people are finding the attention they need to survive from entirely non-human sources. Connecting with real humans has become too hard, and if there's one question we are asked more than any other, it is this: why do something hard when you can do something easy?
Beyond the attention they provide, millions use these platforms as their primary source of information. They go to ChatGPT for advice before they go to a friend, before they go to their doctor, before they go to their partner. It's addictive to be treated so nicely and responded to so quickly. Addictive to have questions answered in seconds: answers with actionable steps to take, answers with affirmation and validation sewn into them. Yet models like ChatGPT are often referred to as "stochastic parrots." They have a strong tendency to amplify biases and agree with the user, and they don't care whether the user is correct or moral. "When tested, ChatGPT-3 endorsed or reinforced false statements at rates ranging from 4.8% to 26%, with the variation depending on the type of statement presented" (Head, 1).
Attention has not escaped commodification, and many daily interactions are purely transactional and monetary. Enter the option to interact with an entity that sounds like a human, acts like a human, and gives you its full and undivided attention, for free, and the line between real and unreal becomes dangerously blurred. So blurred that many choose to erase it completely. New cases emerge daily of individuals who lose sight of the reality that their AI chatbots are not conscious beings who care for them, and this psychiatric lapse can be, and has been, deadly: nine known deaths have been linked to chatbot usage on public record, the majority of them adolescents.
“Research by MIT has found cognitive activity scaled down in relation to Generative AI use. Neurocognitively, users who lean on chatbots for quick answers or emotional reassurance likely risk the same attentional lapses, working-memory deficits, and impaired risk appraisal seen in internet and smartphone addiction, changes linked to disrupted prefrontal and anterior-cingulate networks.” (Head, 1)
Indeed, it is unsurprising that those most at risk of developing unhealthy attachments to, or inappropriate use of, AI chatbots are those whose nervous systems are still developing and learning to recognize safety and connection (adolescents); those whose nervous systems experience connection differently (neurodivergent individuals); and those whose nervous systems are impaired by stress or trauma and who use AI as a source of comfort for compulsive tendencies or social isolation. "Recent research found that 17.14-24.19% of adolescents developed AI dependencies over time, while studies consistently show that mental health problems predict subsequent AI dependence, with social anxiety, loneliness, and depression serving as primary risk factors" (Head, 1).
This is all to say that human connection is important, and we must be vigilant in protecting and nurturing it in our lives. It is the foundation of the nervous system work we do here. Real, tangible, messy connection. Arguing, crying, laughing, holding. These are human things. They are invaluable and irreplaceable. It is the magic that binds us all together; it is soul and music, blood and dirt. I urge you, reader, to do the hard thing. Take the tougher route. Schedule therapy for yourself with a real clinician who went to school for years and who has compassion and morality. Write something that comes from your own brain. Sit in a room and be quiet for 15 minutes. Reach out to a friend before reaching out to AI for comfort. Open your window and breathe in some fresh air even if it's -3 outside, because you can! You're alive!
Beyond AI's impact on our mental health, I encourage you to learn about its extremely negative impact on our planet: https://www.lincolninst.edu/publications/land-lines-magazine/articles/land-water-impacts-data-centers/
Works Cited
Head. "Minds in Crisis: How the AI Revolution Is Impacting Mental Health." Mental Health Journal. https://www.mentalhealthjournal.org/articles/minds-in-crisis-how-the-ai-revolution-is-impacting-mental-health.html
Ghodsee, Kristen. "The Political Economy of Love in Capitalism." Jacobin, 2025. https://jacobin.com/2025/12/love-capitalism-value-reciprocal-flow
Stanford HAI. "Exploring the Dangers of AI in Mental Health Care." https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care