AI Therapy Apps Face Growing Scrutiny in the U.S.

As artificial intelligence increasingly enters the mental health space, regulators, experts, and app developers are grappling with how to balance innovation with safety.

From AI “companions” to therapy chatbots, the technology is being widely used by people seeking support, particularly amid the shortage of licensed mental health professionals in the United States. But the rise of these apps has triggered growing concerns about their safety, effectiveness, and ethical boundaries.

Patchwork of State Laws

In the absence of federal regulation, several U.S. states have taken their own steps:

  • Illinois and Nevada have banned the use of AI for mental health treatment, with fines of up to $15,000 for violations.

  • Utah allows therapy chatbots but requires user health data protections and clear disclaimers that bots are not human.

  • Pennsylvania, New Jersey, and California are considering similar measures.

This has created a fragmented legal environment. Some apps, like Ash, have blocked users in banned states, while others, such as Earkick, continue to operate, calling the laws “unclear.”
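
The mechanics of such state-level gating are simple in principle. Below is a minimal, hypothetical sketch in Python; the state codes, function names, and messages are illustrative assumptions drawn from the laws described above, not a description of how Ash, Earkick, or any other app actually implements its blocking.

```python
# Hypothetical sketch of state-based access gating, loosely modeled on the
# behavior described in this article. All names and messages are illustrative;
# a real app would need legal review and a reliable way to determine location.

# States described above as banning AI for mental health treatment.
RESTRICTED_STATES = {"IL", "NV"}  # Illinois, Nevada

def is_service_available(user_state: str) -> bool:
    """Return False if the user's state bans AI mental health treatment."""
    return user_state.upper() not in RESTRICTED_STATES

def gate_session(user_state: str) -> str:
    """Block restricted states; otherwise open a session with a disclaimer."""
    if not is_service_available(user_state):
        return ("This service is not available in your state due to "
                "local regulations on AI mental health tools.")
    # Utah-style rules require a clear disclaimer that the bot is not human.
    return "Session available. Reminder: you are talking to an AI, not a human."

# Example usage
print(gate_session("IL"))  # blocked
print(gate_session("UT"))  # available, with the required disclaimer
```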

Safety Concerns

Critics point out that chatbots often fail to handle crises such as suicidal ideation, and lawsuits have followed tragic incidents linked to chatbot interactions. Experts warn that many commercial apps are optimized for engagement rather than therapeutic effectiveness, and may blur ethical lines.

Industry and Federal Response

  • The American Psychological Association sees potential if apps are science-based, built with expert input, and overseen by humans.

  • The Federal Trade Commission (FTC) has launched inquiries into major AI chatbot companies, including Google, Meta, OpenAI, and Character.AI, focusing on child safety and potential harms to users.

  • The U.S. Food and Drug Administration (FDA) is set to review AI-enabled mental health devices in November.

Research and Early Trials

A team at Dartmouth College tested Therabot, an AI chatbot designed for anxiety, depression, and eating disorders. In an early trial, participants' symptoms declined after eight weeks, but every conversation was monitored by a human. The researchers say larger, cautious studies are needed before wide adoption.

The Bigger Debate

Supporters argue AI tools can provide quick support and fill gaps in access to care. But critics stress that true therapy requires human empathy, ethical responsibility, and clinical judgment — qualities AI cannot yet replicate.

For now, U.S. regulators are trying to strike a balance: ensuring innovation isn’t stifled while protecting vulnerable users from harm.