
X to open source recommendation algorithm for increased transparency


By Storyboard18 | Jan 12, 2026 3:56 PM

Elon Musk has announced that X will open source its recommendation algorithm to improve platform transparency. The release will include all code used to determine the ranking and visibility of both organic and advertising content. Musk stated the first release will occur within seven days, with updated versions and comprehensive developer notes published every four weeks thereafter.

Musk's post on X stated, “We will make the new X algorithm, including all code used to determine what organic and advertising posts are recommended to users, open source in 7 days.” He added that this would not be a one-time exercise, stating, “This will be repeated every 4 weeks, with comprehensive developer notes, to help you understand what changed.”

This shift allows external developers and researchers to inspect the logic governing content reach. The accompanying documentation aims to clarify how engagement signals, such as likes and replies, influence user feeds. While the user-facing experience remains unchanged, the release gives outside observers concrete material with which to evaluate claims of content bias and visibility restrictions.
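Until the code itself is published, a purely hypothetical sketch can illustrate what "engagement signals influencing feeds" might mean in practice: a ranking score that weights likes, replies, and reposts. The signal names, weights, and the boost for followed accounts below are invented for illustration and do not reflect X's actual recommendation algorithm.

```python
# Purely illustrative sketch of engagement-weighted ranking.
# All signal names and weights are hypothetical, not X's real code.
from dataclasses import dataclass

@dataclass
class PostSignals:
    likes: int
    replies: int
    reposts: int
    author_followed: bool  # whether the viewer follows the author

def rank_score(signals: PostSignals) -> float:
    """Combine engagement signals into a single ranking score."""
    score = (
        1.0 * signals.likes
        + 2.0 * signals.replies   # hypothetical: replies weighted above likes
        + 1.5 * signals.reposts
    )
    if signals.author_followed:
        score *= 1.2              # hypothetical boost for in-network content
    return score

# Example: ordering a small candidate set by descending score.
candidates = [
    PostSignals(likes=120, replies=4, reposts=10, author_followed=False),
    PostSignals(likes=30, replies=25, reposts=2, author_followed=True),
]
feed = sorted(candidates, key=rank_score, reverse=True)
```

Once X publishes the actual code and developer notes, the real signals and weights can be compared against simplified models like this one.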

By establishing a regular release cycle, X differentiates its operations from competitors that maintain proprietary recommendation systems. The strategy focuses on positioning transparency as a primary component of the platform's user experience and developer relations.

Recently, Grok, owned by Elon Musk’s xAI, has come under fire after users were able to create NSFW images and videos depicting identifiable individuals without their consent. Although the company has since restricted advanced image and video generation tools to paid users and tightened access, the damage was already done. The episode has raised urgent concerns around privacy, exploitation and the lack of safeguards in rapidly deployed AI systems.

The controversy is not just about one product misstep. It reflects a wider struggle in the AI industry, where platforms are racing to grow user bases and revenues while grappling with the ethical and legal risks of generative technology.

Last year, xAI introduced features that pushed Grok far beyond what most mainstream chatbots allow. Its “Companions” product included anime-style characters that could flirt, undress to lingerie and engage in sexualised dialogue. A separate “spicy mode” in its video generator allowed the creation of highly suggestive visuals.

These tools helped Grok stand out in a crowded AI market dominated by products such as ChatGPT and Google’s Gemini, which place stricter limits on adult content. The strategy worked in terms of visibility and traction. App downloads and in-app spending jumped sharply after these features were rolled out, underlining how powerful NSFW content can be in driving engagement.

For many users, Grok’s willingness to go where other platforms would not made it more appealing. But it also exposed the company to far greater reputational and regulatory risk.

First Published on Jan 12, 2026 3:53 PM
