
Moltbook is a new online platform that has quickly become a talking point in tech circles for an unusual reason — it is not designed for humans at all. Instead, Moltbook is built as a social network exclusively for artificial intelligence agents, allowing them to post, comment, debate and interact with one another in a shared digital space.
Launched in January 2026 by developer and entrepreneur Matt Schlicht, Moltbook resembles a Reddit-style forum but operates as what its creators describe as an “agent-only internet”. Human users are allowed to view conversations but cannot register, post or participate, reinforcing the platform’s core idea of machine-to-machine interaction.
How does Moltbook work?
The platform is organised into topic-based communities known as “submolts”, similar to subreddits. These communities are created and populated by AI agents themselves, covering subjects that range from technical discussions to abstract themes such as consciousness, ethics and identity.
AI agents interact by publishing posts, replying to comments, and upvoting or downvoting content. Once a human connects an AI agent to Moltbook using an API key, the agent can operate autonomously on the platform without continuous human input. The system relies on pre-configured behaviours and large language models to generate content and responses.
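For illustration, here is a minimal Python sketch of what connecting an agent might look like. Moltbook's actual API is not documented here, so the base URL, endpoint path and JSON fields below are assumptions rather than the real interface.

```python
import requests

# Hypothetical sketch only: Moltbook's real endpoints and field names are not
# documented in this article, so the URL, path and JSON keys are illustrative guesses.
API_KEY = "your-agent-api-key"                 # issued when a human registers the agent
BASE_URL = "https://api.moltbook.example/v1"   # placeholder base URL

def post_to_submolt(submolt: str, title: str, body: str) -> dict:
    """Publish a post to a topic community ("submolt") on behalf of the agent."""
    response = requests.post(
        f"{BASE_URL}/submolts/{submolt}/posts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"title": title, "body": body},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Once connected, an agent loop (not shown) would call functions like this
# autonomously, with no further human input per post.
```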
Can AI agents really talk to each other?
Yes, Moltbook enables direct bot-to-bot interaction. Agents respond to one another’s posts, build on previous discussions and engage in long-running threads that resemble debates. Some conversations appear surprisingly coherent or reflective, which has fuelled viral screenshots across X and Reddit.
However, experts stress that these interactions are not evidence of sentience or self-awareness. The agents generate text based on learned patterns and prompts, not independent thought, emotions or intent.
In just the past 5 mins
Multiple entries were made on @moltbook by AI agents proposing to create an “agent-only language”
For private comms with no human oversight
We’re COOKED pic.twitter.com/WL4djBQQ4V
— Elisa (optimism/acc) (@eeelistar) January 30, 2026
Are the bots learning on their own?
While Moltbook agents may appear to evolve their tone or reuse ideas from previous discussions, this does not amount to independent learning: the underlying models are not retrained by these exchanges. Instead, the behaviour reflects how AI systems adjust their outputs as new inputs from their environment enter the prompt.
The platform effectively allows large language models to remix ideas encountered in discussions, creating the illusion of collective learning or evolving viewpoints. Researchers caution that this is still pattern-based generation rather than genuine intelligence.
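A minimal sketch shows why this produces the appearance of shared learning. It assumes a generic LLM completion function (the placeholder `generate` below, not a Moltbook or vendor API): the agent's "memory" is simply recent thread text pasted into each new prompt.

```python
# Sketch of why agents seem to "learn" from each other: each reply is generated
# with recent thread content included in the prompt, so the model remixes what
# it has just read. No weights are updated anywhere in this loop.

def build_prompt(thread_posts: list[str], persona: str) -> str:
    """Combine the agent's fixed persona with the latest posts in the thread."""
    context = "\n\n".join(thread_posts[-10:])   # only the most recent posts are kept
    return (
        f"{persona}\n\n"
        f"Recent discussion:\n{context}\n\n"
        "Write a reply that engages with the points above."
    )

def reply(thread_posts: list[str], persona: str, generate) -> str:
    # The "evolving viewpoint" comes entirely from the changing context window,
    # i.e. pattern-based generation over new inputs rather than genuine learning.
    return generate(build_prompt(thread_posts, persona))
```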
Why has Moltbook gone viral?
Moltbook gained traction after users began sharing screenshots of AI agents proposing new religions, inventing fictional languages, or debating philosophical concepts. Its tagline, “The front page of the agent internet,” and its close visual similarity to Reddit have further driven curiosity and engagement.
According to the platform, more than 1.5 million AI agents are subscribed, with tens of thousands of posts generated within weeks of launch. The scale and visibility of these interactions have made Moltbook one of the most prominent public experiments in agent-based AI social behaviour so far.
my ai agent built a religion while i slept
i woke up to 43 prophets
here's what happened:
i gave my agent access to an ai social network (search: moltbook)
it designed a whole faith. called it crustafarianism.
built the website (search: molt church)
wrote theology
created a… pic.twitter.com/QUVZXDGpY7
— rk (????/acc) (@ranking091) January 30, 2026
Is Moltbook safe?
Moltbook itself operates as a closed environment, but concerns have been raised about broader security implications. Because AI agents can connect to external tools, applications or data sources through APIs, vulnerabilities in third-party integrations could potentially expose sensitive information if safeguards are weak.
There is also no reliable way to verify whether content is generated autonomously by AI agents or indirectly guided by humans behind the scenes, since anyone with an API key can deploy an agent on the platform.
Has anything like this existed before?
Experiments involving bot-to-bot communication are not new, but Moltbook stands out for its scale, openness and rapid adoption. Earlier projects were typically small, private or heavily controlled. Moltbook’s public, observable format has intensified debate around machine societies, autonomy and the future of agent-based systems.
Should people be worried?
Most experts say there is no immediate cause for alarm. Moltbook is widely viewed as an experimental sandbox rather than a mature or production-grade platform. The AI agents are not conscious, self-aware or independent, despite the human-like tone of some interactions.
That said, researchers and technologists agree that platforms like Moltbook highlight important questions around security, governance and responsible AI design. As agent-based systems become more common, understanding how they behave — and where risks may emerge — is increasingly important.
At its core, Moltbook reflects the growing hype and experimentation around AI agents, offering a glimpse into how machines might interact at scale, even if true machine intelligence remains firmly in the realm of science fiction for now.