OpenAI CEO and prominent X and Reddit shareholder Sam Altman confessed on Monday that he can no longer distinguish human-written posts from those generated by bots. The "strangest experience," as he described it, came while he was reading a subreddit dedicated to a rival product, Anthropic's Claude Code, and found it surprisingly full of praise for his own company's new programming tool, OpenAI Codex.
Altman, an architect of the very technology now blurring the lines of online communication, admitted on X (formerly Twitter) that despite knowing the growth of Codex is legitimate, he "assume[s] it's all fake/bots." He offered a layered analysis of the phenomenon, suggesting that a mix of factors is at play: humans mimicking the "quirks of LLM-speak," the "Extremely Online" crowd's correlated behavior, and "optimization pressure" from social platforms that rewards engagement.
The most biting observation, however, was his suspicion of "astroturfing," the practice of a company paying people or deploying bots to create the illusion of grassroots support. He suggested this was one reason for his heightened sensitivity, noting that OpenAI has itself been on the receiving end of such campaigns.
This newfound skepticism is particularly ironic given that OpenAI's models were trained on vast amounts of data, including from Reddit, where Altman served on the board until 2022 and remains a major shareholder. The very technology he helped create is now mimicking the human behavior it learned from, creating a feedback loop of imitation.
This crisis of authenticity follows a tumultuous period for OpenAI. The company's own subreddits were filled with anger and frustration after the bumpy release of GPT-5, with users complaining about everything from the model's personality to its unreliability. Altman's subsequent Reddit "Ask Me Anything" session, where he acknowledged the rollout problems, didn't fully restore trust.
Altman's lament highlights a pervasive problem. According to data security firm Imperva, over half of all internet traffic in 2024 was non-human, with a significant portion attributed to LLM-driven bots. X's own AI chatbot, Grok, has even estimated that hundreds of millions of bots are active on the platform.
Some critics have been quick to suggest that Altman's public reflection on the "fakery" of social media is a calculated move to market OpenAI's rumored social media platform, a project The Verge reported on earlier this year. Whether or not this product materializes, the question remains: Can any new social network truly be a "bot-free zone"?
A recent study from the University of Amsterdam suggests not. Researchers created a social network populated entirely by AI bots and found that even without an engagement-driven recommendation algorithm, the bots quickly formed cliques and echo chambers, suggesting that polarization may be an emergent property of such networks themselves.