OpenAI is facing tough questions after acknowledging that private conversations with its AI chatbot, ChatGPT, are not entirely confidential. The company confirmed this week that user chats flagged for potential violence can be reviewed by humans and, in urgent cases, shared with law enforcement.
The disclosure has rattled users who believed AI interactions were sealed off from human eyes.
According to OpenAI, the process begins when its automated systems detect a possible threat. Those chats are then escalated to a review team. If reviewers conclude the risk is immediate, OpenAI may alert authorities.
The approach highlights a delicate balancing act: ensuring safety while maintaining user trust. But critics warn it risks backfiring. In particular, using location data to alert emergency services raises fears of "swatting", where false reports trigger armed police responses at innocent people's homes.
The move also clashes with CEO Sam Altman’s public remarks that ChatGPT should feel as private as talking to a therapist or lawyer. With human moderators in the loop, skeptics argue that analogy no longer holds.
Beyond the privacy debate, the episode underscores broader unease about how quickly AI is being deployed. For millions of ChatGPT users, the message is clear: the chatbot may sound like a confidant, but it is not bound by confidentiality.