
ChatGPT's Use as a Therapist Carries No Legal Confidentiality, Warns Sam Altman

OpenAI CEO Sam Altman warns that using ChatGPT for therapy carries no legal confidentiality, according to TechCrunch's July 25, 2025 report. Altman's remarks underscore the risks of employing ChatGPT in sensitive applications, such as therapy, given the absence of legal protections for those conversations.

ChatGPT conversations lack the legal protection of therapeutic confidentiality when the chatbot is used as a therapist, according to Sam Altman's warning.

AI companies deploying chatbots in sensitive fields like therapy face significant legal and regulatory exposure because such conversations carry no guaranteed confidentiality. Sam Altman, CEO of OpenAI, highlighted the issue in a warning on July 25, 2025: there is no legal confidentiality when ChatGPT is used for sensitive applications such as therapy.

Privacy and Data Protection Laws

In jurisdictions such as the EU, the AI Act (effective 2026) sets a risk-based regulatory framework that likely categorizes AI applications in therapy as high-risk due to their sensitive nature. Such high-risk AI systems must comply with strict transparency, robustness, and data governance rules. Moreover, the GDPR’s longstanding privacy protections apply strongly in therapy contexts, where personal health information is involved, meaning companies must ensure that data collected via chatbots is secured and confidential. Failure to maintain confidentiality could lead to substantial penalties, including fines up to 7% of global revenue under the EU AI Act.

U.S. Regulatory Environment

The U.S. currently lacks a single overarching AI regulation; instead, AI use is governed by a patchwork of federal and state laws, including privacy protections, sector-specific rules, and consumer protection laws. Agencies like the FTC enforce against unfair or deceptive practices, including issues arising from a lack of confidentiality or misinformation by AI tools. California's AI Transparency Act (effective 2026) mainly targets generative multimedia AI, but it reflects a growing regulatory trend toward transparency and user notification that may in the future extend to therapeutic AI chatbots.

Malpractice and Professional Standards

For AI chatbots used in therapy, the absence of confidentiality could create malpractice risk for deploying companies or affiliated professionals if patient information is disclosed or misused. Therapy is typically regulated by professional confidentiality laws. Using AI chatbots without adequate privacy safeguards can violate those standards, potentially leading to legal claims.

Liability and Accountability

The EU AI Act places more of the regulatory burden on developers, while U.S. rules tend to place responsibility on AI deployers. Companies providing therapy AI chatbots may need to demonstrate transparency about data usage, provide clear disclaimers about confidentiality limits, and implement robust security measures to reduce liability risks.

The U.S. government’s AI Action Plan encourages deregulation but also increased investment and monitoring of AI risks, suggesting firms must remain vigilant about evolving rules. Globally, the EU’s comprehensive AI regulation might serve as a de facto global standard influencing other countries’ approaches, including obligations related to confidentiality and risk mitigation in sensitive AI uses.

In summary, AI companies deploying chatbots for therapy must navigate stringent privacy laws, emerging AI-specific regulations (notably the EU AI Act), and ethical duties around confidentiality. The lack of assured confidentiality poses substantial legal risks, making transparency, compliance, and robust data protection essential to mitigate liability and regulatory penalties.

The July 25, 2025 developments underscore the urgency for policymakers to develop comprehensive regulations around AI privacy and data security. Businesses that proactively address confidentiality and privacy challenges, and build trust with users, will be best positioned to weather regulatory uncertainty and responsibly harness the transformative potential of AI while safeguarding user privacy.

The full implications of Altman's comments will take time to unfold as the legal and ethical frameworks around AI evolve. The warning comes amid an ongoing legal battle with The New York Times, in which OpenAI is fighting a court order that would compel it to retain chat logs from hundreds of millions of ChatGPT users worldwide, itself a substantial data privacy challenge. The July 25, 2025 announcement from OpenAI's CEO is likely to accelerate the push for comprehensive AI regulations, particularly around data privacy and security. The TechCrunch report serves as a caution for organizations implementing AI in sensitive contexts, highlighting the legal and reputational risks of relying on AI for confidential interactions. The lack of legal confidentiality could also hamper enterprise adoption of AI in other sensitive domains such as finance, law, and human resources.

[1] TechCrunch. (2025, July 25). Sam Altman warns: There’s no legal confidentiality when using ChatGPT as a therapist. [Online]. Available: https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no-legal-confidentiality-when-using-chatgpt-as-a-therapist/

[2] California Legislative Information. (2025). AB 2287: California Artificial Intelligence Transparency Act. [Online]. Available: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260AB2287

[3] White House. (2021, May 7). Executive Order on Promoting Competition in the American Economy. [Online]. Available: https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/07/executive-order-on-promoting-competition-in-the-american-economy/

[4] European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). [Online]. Available: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12529-Artificial-Intelligence-Act_en

  1. Businesses in health-and-wellness or mental health sectors, particularly startups deploying AI chatbots for therapy, need to be aware of legal and regulatory implications due to the lack of guaranteed confidentiality.
  2. In jurisdictions like the EU, AI applications in sensitive fields like therapy can be categorized as high-risk under the AI Act (effective 2026), requiring strict compliance with transparency, robustness, and data governance rules.
  3. In the US, the lack of a single overarching AI regulation means AI use is regulated through a patchwork of federal and state laws, necessitating vigilance about evolving regulations, especially in the context of therapeutic AI chatbots.
  4. Companies providing therapy AI chatbots could face malpractice risks if patient information is disclosed or misused, violating professional confidentiality laws.
  5. The EU AI Act and U.S. regulations place more burden on developers and deployers, respectively, pushing companies to demonstrate transparency, disclaim confidentiality limits, and enact robust security measures to reduce liability risks.
  6. Policymakers must proactively address the challenges of confidentiality and privacy to foster responsible AI use, enabling businesses to safeguard user privacy and trust while maintaining regulatory compliance.
  7. The argument for comprehensive AI regulations has been strengthened by Sam Altman's warning about the lack of legal confidentiality when using AI chatbots for sensitive applications like therapy, particularly amidst the ongoing legal battle between OpenAI and The New York Times.
  8. The regulatory uncertainty around AI privacy and security, highlighted by the court order OpenAI is resisting and the urgent need for comprehensive regulations, creates a significant barrier for enterprise adoption of AI in sensitive domains like finance, law, and human resources.
