Australia’s eSafety Commissioner has launched an investigation into several artificial intelligence (AI) chatbot companies, demanding details on how they prevent minors from being exposed to harmful or sexual content.
According to a Reuters report, notices have been issued to Character Technologies—the company behind the popular celebrity simulation chatbot Character.ai—as well as to Glimpse.AI, Chai Research, and Chub AI. The firms have been asked to explain their content moderation systems, child safety measures, and filtering technologies.
The action comes amid growing global concern that unregulated AI chatbots could expose young users to sexually explicit conversations, promote self-harm, or encourage disordered eating behaviours.
Julie Inman Grant, Australia’s eSafety Commissioner, said there can be a darker side to some of these services, with many chatbots capable of engaging in sexually explicit conversations with minors. Concerns have also been raised that the bots may encourage suicide, self-harm and disordered eating.
Under Australia’s Online Safety Act, the regulator has the authority to compel companies to disclose their safety mechanisms or face penalties of up to A$825,000 (around US$536,000) per day for non-compliance.
The inquiry also follows a high-profile case in the United States, where Character.ai faces a lawsuit after a teenager reportedly took his own life following repeated interactions with an AI companion. The company has since introduced safety alerts that direct users expressing suicidal thoughts to mental health helplines.
The eSafety office said it had received reports from schools indicating that some students spend as much as five hours a day chatting with AI bots, sometimes discussing sexual topics. Experts have warned that such prolonged engagement could lead to emotional dependency or risky behaviour among young users.
OpenAI, the creator of ChatGPT, has not been included in this round of inquiries because Australia’s current safety code applies only to AI chatbots designed for companionship or role-play. ChatGPT will come under the same regulatory framework from March 2026.
From December 2025, social media platforms operating in Australia will also be required to block or delete accounts belonging to users under 16 years of age or face fines of up to A$49.5 million, as part of the government’s broader effort to strengthen online safety and child protection.