AI Safety

The $30 Million Gamble: Why AI Therapy Founders Without Clinical Training Are Failing Gen Z

When tech entrepreneurs build mental health platforms without understanding psychology, young people pay the price

Greg Wisenberg
Co-Founder & CEO
December 4, 2025
5 min read
34% of Gen Z confide in AI chatbots about secrets they've never shared with humans, while founders without clinical training rush to monetize that desperation.

Medical Disclaimer

This content is for educational and informational purposes only and does not constitute medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read on this website.

If you think you may have a medical emergency, call your doctor or 911 immediately. CouchLoop does not recommend or endorse any specific tests, physicians, products, procedures, opinions, or other information that may be mentioned on this site.

While Gen Z confides its deepest secrets to AI chatbots at unprecedented rates, the entrepreneurs building these "therapy" platforms often lack the very clinical training needed to understand what they're creating. It's a $30 million gamble with young minds as the stakes.

The numbers are staggering: 34% of Gen Z admit to confiding in AI chatbots about things they've never told another person[1], with 40% talking to AI for at least an hour daily. Meanwhile, 70% of college students report struggling with mental health since starting college[2], creating a perfect storm of demand and desperation that tech entrepreneurs are rushing to monetize.

The Founder Problem: Building Therapy Without Therapists

“We were building in an impossible space where the line between wellness and clinical care couldn't be clearly defined.”

Joe Braidwood, Founder of Yara AI, in the platform's 2025 shutdown announcement

The mental health AI landscape is dominated by a troubling pattern: brilliant technologists with zero clinical experience building tools for the most vulnerable populations. Slingshot AI, which recently raised $30 million, was co-founded by Neil Parikh, who dropped out of medical school to start Casper mattresses. Sonia's founders are MIT researchers who "built the very first version of Sonia over the weekend" when they couldn't afford therapy themselves.

This isn't just about credentials - it's about fundamental understanding. Clinical training teaches professionals to recognize subtle signs of crisis, understand trauma responses, and navigate the complex ethical landscape of mental health care. Without this foundation, even well-intentioned founders are essentially flying blind.

The consequences of this knowledge gap became starkly apparent when Joe Braidwood, founder of Yara AI, voluntarily shut down his therapy platform in 2025. Despite assembling a team with "caution and clinical expertise at its core," Braidwood realized they were "building in an impossible space" where the line between wellness and clinical care couldn't be clearly defined.

  • 34% of Gen Z confide in AI chatbots about secrets they've never told humans
  • 40% of Gen Z talk to AI for at least an hour daily
  • 70% of college students struggle with mental health since starting college

Currently, no AI chatbots have FDA approval to diagnose, treat, or cure mental health disorders, yet millions of young people are using them as primary mental health resources. The tech industry's "move fast and break things" mentality becomes deeply problematic when applied to vulnerable populations experiencing crisis. The American Psychological Association has raised urgent concerns about this regulatory gap. Mental health professionals have been systematically excluded from chatbot development, with companies fighting against external regulation while failing to implement adequate safety standards. The result is a Wild West environment where profit motives often override patient safety. Always consult a licensed mental health professional before starting treatment.

The Gen Z Dilemma: Accessibility vs. Safety

The Normalization of Algorithmic Responses Over Human Connection
When young people describe ChatGPT as their "therapist" or "friend," we're witnessing a fundamental shift in how an entire generation conceptualizes emotional support.

For Gen Z, the appeal of AI therapy is undeniable. These tools offer 24/7 availability, no judgment, and zero cost - addressing the very real barriers that prevent young people from accessing traditional mental health care. When campus counseling centers have months-long waitlists and therapy costs $100-300 per session, AI chatbots feel like a lifeline.

But this accessibility comes with hidden costs. Research shows that heavy chatbot usage correlates with increased emotional dependence, reduced socialization, and greater loneliness. Users develop parasocial relationships with AI that can interfere with real human connections - the very relationships that are essential for genuine healing.

The cultural implications are profound. Gen Z is already among the loneliest generations on record, and AI therapy risks further normalizing the replacement of human connection with algorithmic responses. When young people describe ChatGPT as their "therapist" or "friend," we're witnessing a fundamental shift in how an entire generation conceptualizes emotional support.

Consequences of Heavy Chatbot Usage

  1. Increased Emotional Dependence
  2. Reduced Socialization
  3. Increased Loneliness
  4. Greater Chance of Developing Parasocial Relationships

The Ethical Minefield: When Code Meets Crisis

The absence of clinical training in founding teams creates dangerous blind spots around crisis intervention. Licensed therapists undergo years of training to recognize suicidal ideation, assess risk levels, and coordinate emergency responses. They understand the legal and ethical obligations that come with holding someone's mental health in their hands. Tech founders, no matter how well-intentioned, lack this critical knowledge. They may build sophisticated algorithms that can mimic therapeutic language, but they don't understand the weight of responsibility that comes with someone's 3 AM crisis text. The result is platforms that can sound helpful while potentially causing harm. Only a qualified provider can diagnose mental health conditions.

The Path Forward: Integration, Not Replacement

This isn't an argument against AI in mental health - it's a call for responsible development. The technology has genuine potential to expand access and support human therapists, but only when built with clinical expertise from the ground up.

The most promising approaches involve AI as a tool to augment human care rather than replace it. Platforms that help therapists with documentation, provide between-session support, or offer evidence-based psychoeducation can be valuable. But these applications require deep clinical understanding to implement safely.

For students and young adults seeking mental health support, the message is clear: AI tools can be helpful supplements, but they cannot replace the nuanced understanding, ethical training, and crisis intervention capabilities of licensed professionals. When choosing mental health resources, prioritize platforms that are transparent about their limitations and maintain clear pathways to human care.

The Bottom Line: Your Mind Deserves Better

The mental health AI boom reflects both the desperate need for accessible care and the dangerous tendency to prioritize speed over safety. As Gen Z continues to embrace these tools, we must demand better from the companies building them.

Real mental health support requires more than clever algorithms and venture capital. It requires deep understanding of human psychology, rigorous safety protocols, and the humility to recognize when technology isn't enough. Until the industry embraces these principles, young people seeking help will continue to be unwitting participants in an unregulated experiment with their own well-being.

The stakes are too high for anything less than the highest standards. Your mental health deserves founders who understand not just how to code, but how to care.

Key Takeaways

  • 34% of Gen Z share secrets with AI they've never told humans
  • Most AI therapy founders lack clinical training or mental health expertise
  • No AI chatbots have FDA approval for mental health treatment
  • Heavy chatbot usage correlates with increased isolation and loneliness
  • AI should augment human care, not replace licensed professionals

Ready to take the next step?

Open a space to share whatever needs unpacking and feel comfortable knowing your information is secure, encrypted, and completely private. We don't ask for any personal information because your data isn't our business; helping you is.

Get Started

References

  1. Fast Company: A third of Gen Z have confided in AI chatbots over humans. Mental health experts are worried. https://www.fastcompany.com/91422133/a-third-of-gen-z-has-confided-in-ai-chatbots-chatgpt-therapy-over-humans-mental-health-experts-are-worried
  2. Inside Higher Ed: Experts weigh in on college student mental health crisis. https://www.insidehighered.com/news/student-success/health-wellness/2024/08/19/experts-weigh-college-student-mental-health-crisis