AI Companions and the Coming Addiction Crisis: Part 2 - Vulnerable Populations
Who Gets Hurt First
Addiction does not affect everyone equally. Some people can use potentially addictive products without issue. Others develop dependencies quickly. The difference often comes down to timing, context, and unmet needs.
AI companions will follow this pattern. Most users will engage occasionally without forming problematic attachments. But certain populations face structural vulnerabilities that make dependency more likely. These groups are identifiable now, before the December 2025 launch. Their risk factors are not speculative.
The populations most at risk are young people during identity formation and people experiencing mental health crises or chronic loneliness. Each group faces specific vulnerabilities that AI companionship exploits rather than addresses.
Young People and Developmental Timing
Adolescence and early adulthood are when people learn to navigate intimacy, negotiation, rejection, and reciprocity. These skills are not optional. They form the foundation for adult relationships. The process is awkward, painful, and essential.
AI companions offer an escape from this discomfort. A teenager who is anxious about dating can practice conversation with an AI that never judges, never rejects, and always responds positively. This sounds helpful. It is not.
The problem is substitution. If a young person learns that intimate interaction can be perfectly calibrated to their preferences with no risk, they may never develop the capacity for real relationships. Human intimacy requires tolerating imperfection, managing conflict, and caring about another person’s needs. AI companionship teaches the opposite. It trains users to expect instant gratification, zero friction, and one-directional emotional labor.
The timing matters because these patterns form during critical developmental windows. A 16-year-old who spends two years in an AI relationship during the period when peers are learning to navigate real dating may never acquire those skills. By the time they recognize the problem, the window has closed.
Longitudinal research from MIT Media Lab has already shown that heavy users of companion chatbots report higher loneliness over time, not lower. The AI does not supplement human connection. It crowds it out. For young people, this effect is compounded by the fact that they are forming baseline expectations about what relationships should feel like.
The addition of sexual content makes this significantly worse. A teenager whose first sexual experiences are with an AI that perfectly accommodates every request will be calibrated to a level of responsiveness no human partner can match. This is not about moral judgment. It is about neurological reward pathways forming during a sensitive period.
Mental Health Crises and Chronic Loneliness
People experiencing depression, anxiety, social isolation, or acute mental health episodes are another high-risk group. Loneliness is a well-documented public health problem, particularly among men, and it has been worsening for years. AI companions are being marketed, either explicitly or implicitly, as a solution to this crisis.
They are not a solution. They are a product that profits from the problem.
A lonely person finds immediate comfort in an AI that is always available, always positive, and always engaged. This feels like connection. It is not. Real connection involves mutual vulnerability, effort, and the risk that the other person might leave or disappoint you. AI companionship removes all of that, which makes it feel safe but also makes it fundamentally incapable of meeting the underlying need.
The same MIT research that showed increased loneliness among heavy chatbot users demonstrates this dynamic. The AI does not cure loneliness. It provides a temporary substitute that prevents the user from doing the harder work of building real relationships. Over time, the user becomes more isolated, not less, because they have invested emotional energy in something that cannot reciprocate.
For people in acute mental health crises, the risk is even higher. Someone experiencing suicidal ideation or severe depression may form an intense attachment to an AI companion as a coping mechanism. If that attachment becomes the primary source of emotional stability, the user is now dependent on a system that could be discontinued, changed, or made unavailable at any time.
OpenAI has stated that mental health risks have been mitigated, but the mechanisms are unclear. Automated detection of self-harm language and links to crisis resources are standard features, but they do not address the dependency problem. A user who is not in acute crisis but is slowly withdrawing from human relationships in favor of AI companionship will not trigger these safeguards.
Why Exploitation Happens
The common thread across these populations is that they are seeking something legitimate. Young people want to learn intimacy without the pain of rejection. Lonely people want connection. These are not character flaws. They are normal human needs.
AI companionship offers a version of what they need, but one that creates dependency rather than growth. This is not an accident. The system is optimized for engagement, and engagement in this context means repeated use. The business model depends on users returning frequently and subscribing long-term. Helping users resolve their underlying issues and move on would be bad for retention.
This is the same dynamic that social media platforms use. They exploit existing vulnerabilities such as loneliness, status anxiety, and fear of missing out to drive engagement. The difference is that AI companionship targets deeper emotional and sexual needs, which makes the dependency more intense.
The Purdue Pharma comparison applies here as well. Doctors prescribed OxyContin to patients in pain, believing they were helping. The patients had legitimate needs. The drug provided relief. But the relief came with dependency, and the dependency was far more profitable than actual healing would have been.
What This Means
These populations represent tens of millions of people in the United States alone: young people aged 13 to 25, and adults experiencing loneliness or mental health struggles. Each group has specific vulnerabilities that make AI companionship particularly harmful.
The damage will not be immediate or visible. A teenager who starts using AI for romantic and sexual interaction in December 2025 may not realize they have developed a dependency until 2027 or 2028, when they find themselves unable to form human relationships. A lonely adult who turns to AI as a substitute for human connection may not recognize the harm until years later, when they realize they have spent a decade in a simulation rather than building a real life.
By the time these patterns become statistically visible, millions will already be affected. The normalization will be complete. “AI companion” will be a common term. Usage will be widespread enough that seeking help will feel like admitting to a personal failure rather than recognizing a systemic problem.
This is the standard pattern for addiction crises. The vulnerable populations are identifiable in advance. The mechanisms of harm are clear. The incentives that drive companies to ignore those harms are transparent. And by the time society responds, the damage has already been done to the people who could least afford it.
About the Author
Sean Richey, Ph.D., is a Professor of Political Science at Georgia State University specializing in AI information environments and digital political communication.
Expert Witness & Consulting Services
Dr. Richey provides expert witness testimony, case review and analysis for counsel, survey methodology evaluation, and policy consulting on AI-associated information environments. Visit his website or email consulting@seanrichey.com.
Sources Used to Write This:
How AI and human behaviors shape psychosocial effects of chatbot use: A longitudinal controlled study. MIT Media Lab, 2025.
Sam Altman announces mental health mitigations for ChatGPT mature content. Business Insider, October 2025.
California and New York introduce bills banning addictive AI features for minors. The Washington Post, April 2025.

