OpenAI CEO Sam Altman has issued a significant warning to users of ChatGPT, urging them to be cautious when sharing personal or sensitive information with the AI chatbot. His comments, made during an appearance on comedian Theo Von’s podcast This Past Weekend, spotlight a critical gap in legal protections for users’ data: conversations with ChatGPT do not enjoy the same confidentiality as those with a doctor, therapist, or lawyer.
Altman acknowledged that many people—particularly young users—treat ChatGPT like a therapist or life coach, discussing personal issues such as relationship problems. However, he stressed that AI chats currently lack the legal privilege that protects communications in professional human settings. Doctor-patient and attorney-client conversations, for instance, are safeguarded under specific confidentiality laws, but no such legal framework yet exists for AI platforms. As a result, if courts demand access to these conversations, companies like OpenAI may be legally obligated to provide them.
This concern is particularly relevant given OpenAI’s current legal battle with The New York Times. In June, the newspaper and other plaintiffs sought a court order requiring OpenAI to retain all user conversations—including deleted ones—as part of a broader copyright lawsuit. OpenAI has pushed back, calling the request an “overreach” and arguing it could set a dangerous precedent where courts control how user data is stored and accessed, possibly opening the door to further demands from law enforcement and legal institutions.
Presently, OpenAI states that deleted conversations from ChatGPT Free, Plus, and Pro users are removed from its systems within 30 days unless retention is required for legal or security purposes. Still, unlike end-to-end encrypted services such as WhatsApp, OpenAI's staff can access user conversations. This access is used to improve AI models and detect misuse, but it raises serious concerns about digital privacy, especially in an era when data rights and surveillance are under increasing scrutiny.
For example, after the U.S. Supreme Court overturned Roe v. Wade, many women deleted or stopped using period-tracking apps out of fear their personal data might be accessed or misused. Altman's comments are likely to resonate with individuals who use AI tools like ChatGPT to discuss emotional challenges or private matters, often assuming the AI is a safe, judgment-free listener. But without clear legal protections, those conversations are not as secure as many might assume.
Altman emphasized that the legal system has yet to catch up with the growing use of AI for highly personal interaction. He believes users should demand clarity regarding how their data is handled before engaging deeply with platforms like ChatGPT. His recommendation is particularly important for users who treat AI as a confidant or self-help assistant.
In closing, Altman’s remarks serve as a reminder that while ChatGPT may appear to be a friendly and helpful companion, legally it is not treated like a therapist or counselor. Until laws are updated to reflect the unique nature of human-AI communication, users should remain cautious about what they choose to share with chatbots.