OpenAI has forcefully refuted viral rumors claiming the company banned its flagship chatbot, ChatGPT, from providing health and legal information, clarifying that the model's core behavior and its policy on sensitive topics remain unchanged.
Karan Singhal, OpenAI’s Head of Health AI, took to X (formerly Twitter) to directly debunk the speculation: "Not true. Despite speculation, this is not a new change to our terms. Model behaviour remains unchanged."
The Original Context: The confusion stemmed from a widely circulated, but since-deleted, tweet that incorrectly claimed a total ban on health or legal advice.
The updated usage policy, released on October 29, includes a clause prohibiting the "provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional."
OpenAI explained that this restriction is not new; it merely restates long-standing rules prohibiting the use of ChatGPT as a substitute for a licensed doctor or lawyer.
The recent update consolidated three separate policy documents (universal, ChatGPT-specific, and API terms) into a single, standardized list for consistency.
ChatGPT remains fully capable of answering general health and legal questions, explaining concepts, or providing general information, but users should understand its responses are for informational purposes only, not personalized professional advice.