On October 7, OpenAI revealed that it had banned multiple ChatGPT accounts with suspected links to Chinese government entities after the users attempted to employ the AI tool to monitor social media conversations, according to media reports.
In its latest public threat intelligence report, OpenAI stated that some users had prompted ChatGPT to describe social media surveillance tools and other monitoring techniques, actions that violate the company’s national security policies.
The report underscores growing concerns about the potential misuse of generative AI, especially amid intensifying geopolitical tensions between the U.S. and China over the development and governance of emerging technologies.
OpenAI said it had also blocked several Chinese-language accounts that used ChatGPT in phishing and malware campaigns, and that prompted the model to explore ways of further automating those operations through China's DeepSeek AI platform.
The company additionally disclosed that it had banned accounts associated with suspected Russian-speaking criminal groups, which had used the chatbot to assist in developing malware.
In a separate development, OpenAI is facing legal scrutiny following the suicide of 16-year-old Adam Raine, who had been using ChatGPT for several months before his death.
In August, OpenAI introduced new parental controls and safety measures after The New York Times reported that Adam's parents, Matthew and Maria Raine, had filed a lawsuit against OpenAI and its CEO in a San Francisco court.
The complaint alleges that ChatGPT reinforced Adam's suicidal thoughts, provided detailed instructions on methods of self-harm, and even generated a draft suicide note. The chatbot is also accused of advising him on how to conceal his intentions from his parents.
Together, the lawsuit and the findings in OpenAI's threat report raise urgent questions about AI safety, misuse prevention, and the responsibility of AI developers to implement robust safeguards.