The world of Artificial Intelligence has only begun to affect human lives. In times like these, staying up-to-date with the AI world is of utmost importance. Storyboard18 brings you the top AI news of the day.
AI helps develop paint formula to keep buildings up to 20°C cooler
Artificial intelligence–engineered paint could help mitigate the urban heat island effect that leaves cities sweltering and reduce air-conditioning costs, according to scientists who say machine learning is revolutionising materials design for everything from electric motors to carbon capture.
A team of materials researchers has used AI to create new paint formulas that can keep buildings 5°C to 20°C cooler than standard paints do when exposed to midday sun. The coatings could also be applied to cars, trains, electrical equipment and other surfaces that need better heat management in a warming world.
Using machine learning, scientists from universities in the US, China, Singapore, and Sweden optimised the formulations to reflect more of the sun’s rays while maximising their ability to emit heat, according to a peer-reviewed study published in the journal Nature.
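The study's actual models, datasets and chemistry are not detailed here; as a purely illustrative sketch of the general approach it describes (using a learned model to screen candidate formulations for high solar reflectance and high thermal emittance), a toy search might look like the following, with every name and number hypothetical:

```python
# Purely illustrative: a toy random search over hypothetical paint formulations,
# scored by a made-up surrogate for solar reflectance and thermal emittance.
# This is NOT the method, data, or chemistry from the Nature study.
import random

def surrogate_score(pigment_frac, binder_frac, filler_frac):
    """Hypothetical stand-in for a trained ML model that predicts how well a
    formulation reflects sunlight and emits heat (higher is better)."""
    reflectance = 0.6 + 0.35 * pigment_frac - 0.1 * binder_frac
    emittance = 0.7 + 0.25 * filler_frac
    return min(reflectance, 1.0) + min(emittance, 1.0)

best = None
for _ in range(10_000):
    # Sample a random composition whose fractions sum to 1.
    a, b = sorted((random.random(), random.random()))
    pigment, binder, filler = a, b - a, 1 - b
    score = surrogate_score(pigment, binder, filler)
    if best is None or score > best[0]:
        best = (score, pigment, binder, filler)

print(f"best score {best[0]:.3f} at pigment={best[1]:.2f}, "
      f"binder={best[2]:.2f}, filler={best[3]:.2f}")
```

In practice the surrogate would be a model trained on measured optical properties, and the search would be far more sophisticated than random sampling; the point is only that a learned scorer lets many candidate formulations be screened faster than trial-and-error synthesis.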
This work is the latest example of AI’s potential to accelerate discovery well beyond the limits of traditional trial-and-error methods. Just last year, British firm MatNex used AI to develop a new kind of permanent magnet for electric vehicle motors that avoids rare earth metals, whose mining is highly carbon-intensive.
People are taking doses of psychedelics — and using AI as a “trip-sitter”
Artificial intelligence—already strange and mind-bending in its own right—is now being used by some people in a startling new role: as a psychedelic "trip-sitter" to guide them through hallucinogenic experiences.
According to MIT Technology Review, tech-savvy drug users are turning to everything from standard tools like ChatGPT to custom-built chatbots with names like "TripSitAI" or, less subtly, "The Shaman". It’s the latest twist in a troubling trend where people unable to access real therapy or expert guidance are turning to AI as a substitute.
Earlier this year, Harvard Business Review noted that one of the most popular uses of AI was for therapy. The reasons are clear: insurance companies have often squeezed mental health providers so tightly that many go out-of-network just to stay afloat, leaving lower-income clients with few affordable options.
If regular talk therapy is expensive and hard to get, psychedelic therapy is even more inaccessible. Tech Review points out that in Oregon, where psilocybin therapy is legal, a single session with a licensed guide can cost between $1,500 and $3,200. Faced with those costs, it’s no surprise some are turning to AI for a much cheaper—but potentially riskier—alternative.
In an interview with Tech Review, a man named Peter described what he considered a transformative experience in 2023, when he took a very large dose of eight grams of psilocybin mushrooms with AI as his guide. Not only did ChatGPT curate a calming playlist for him, but it also offered soothing words and reassurance—much like a human trip-sitter would.
Meesho open-sources BharatMLStack to accelerate AI innovation in startups
In a significant move to democratize access to AI infrastructure, Meesho has open-sourced major components of its internal machine learning platform, BharatMLStack, on GitHub. The release includes its feature store, control plane, orchestration UI, and SDKs, making Meesho one of the first large-scale e-commerce players in India to publicly share its proprietary AI development tools.
Built over the past two years, BharatMLStack powers Meesho’s daily operations by processing massive volumes of data. During the financial year 2024–25, the platform handled an average of ~1.91 petabytes of data per day. At its peak, Meesho’s machine learning systems achieved 66.90 trillion feature retrievals (data signals used for real-time predictions) and 3.12 trillion inferences (real-time predictions generated by ML models).
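Meesho has not published usage details in this summary, but to make those two numbers concrete, here is a rough, hypothetical sketch of what a single feature retrieval followed by a single inference looks like in general; the names and the toy model are illustrative and are not BharatMLStack's actual SDK:

```python
# Hypothetical sketch of the two operations counted above: a feature retrieval
# (looking up precomputed signals for an entity) followed by an inference
# (a model turning those signals into a prediction). Names and structures are
# illustrative only, not BharatMLStack's real API.

FEATURE_STORE = {
    "user:42": {"orders_last_30d": 3, "avg_cart_value": 540.0},
}

def retrieve_features(entity_id: str) -> dict:
    """One 'feature retrieval': fetch real-time signals for an entity."""
    return FEATURE_STORE.get(entity_id, {})

def predict_reorder_propensity(features: dict) -> float:
    """One 'inference': a toy model scoring how likely the user is to reorder."""
    return min(1.0, 0.1 * features.get("orders_last_30d", 0)
               + 0.0005 * features.get("avg_cart_value", 0.0))

features = retrieve_features("user:42")
print(f"reorder propensity: {predict_reorder_propensity(features):.2f}")
```

A production feature store replaces the in-memory dictionary with a low-latency serving layer, which is what allows lookups and predictions to run at the trillion-scale volumes Meesho reports.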
With this open-source initiative, Meesho aims to lower barriers for startups and developers in India, helping them harness advanced AI infrastructure without the heavy costs or complexity of building from scratch.
X to use AI for drafting community fact-checks
Social media platform X has announced a major shift in how its Community Notes—used to fact-check or add context to misleading posts—will be created. The company will now use artificial intelligence to generate the first drafts of these fact-checks, replacing the previously all-human authoring process, according to The Guardian.
Under the new system, large language models (LLMs) will create initial versions of the notes, which must still be reviewed and approved by human contributors before being published.
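The Guardian report does not describe X's implementation; a minimal sketch of the general draft-then-review pattern (an LLM proposes a note, which is published only after enough human contributors approve it) might look like the following, with the LLM call stubbed out and every threshold invented for illustration:

```python
# Minimal sketch of a draft-then-review workflow, assuming a stubbed LLM and a
# simple approval threshold; not X's actual Community Notes pipeline.
from dataclasses import dataclass

@dataclass
class Note:
    post_id: str
    draft: str
    approvals: int = 0
    rejections: int = 0

def llm_draft_note(post_text: str) -> str:
    """Stand-in for an LLM call that drafts context for a misleading post."""
    return f"Context: the claim '{post_text[:40]}...' lacks supporting evidence."

def record_review(note: Note, approved: bool) -> None:
    """One human contributor's verdict on the AI-drafted note."""
    if approved:
        note.approvals += 1
    else:
        note.rejections += 1

def should_publish(note: Note, min_approvals: int = 3) -> bool:
    """Publish only once enough human reviewers have approved the draft."""
    return note.approvals >= min_approvals and note.approvals > note.rejections

note = Note(post_id="123", draft=llm_draft_note("Drinking seawater cures dehydration"))
for verdict in (True, True, False, True):   # simulated human reviews
    record_review(note, verdict)
print("publish" if should_publish(note) else "hold for more review")
```

The key property of the design, as reported, is that the model only produces candidates: nothing reaches readers without the human approval step.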
The move has drawn criticism from figures like former UK technology minister Damian Collins, who warned that relying on AI for such a sensitive role could fuel the spread of “lies and conspiracy theories.” He also accused X of effectively “leaving it to bots to edit the news.” The change underscores ongoing tensions between tech platforms, AI use, and the integrity of online information.
Businesses paying humans to fix errors in AI-generated content and code
A growing number of businesses are now paying human workers to correct mistakes in content and computer code produced by artificial intelligence, according to a BBC report.
Organizations that rushed to adopt generative AI are discovering that its output often includes factual errors, logical flaws, or nonsensical results, making significant human oversight essential.
This has given rise to new roles and “human-in-the-loop” workflows dedicated to editing, fact-checking, and debugging AI-generated material. The trend reveals a hidden labor cost in AI adoption and underscores the technology’s current limitations, despite its promise of automating creative and technical tasks.