
The “Nano Banana Saree” trend has gone viral on Instagram, with users flocking to Google’s Gemini AI to transform selfies into glamorous 90s-style Bollywood portraits. The edits, complete with chiffon sarees, glowing filters, and dramatic poses, have captured social media’s imagination. But a recent case has cast a shadow over the fun, raising fresh questions about AI and privacy.
Instagram user Jhalak Bhawani joined the trend by uploading a selfie in a green suit. Gemini reimagined her in a retro saree portrait, which she initially admired. But on closer inspection, she noticed something unsettling: the AI-generated image showed a mole on her left hand, a detail that matched real life but was not visible in the photo she had uploaded.
Shaken by the experience, she issued a warning to her followers: “Whatever you upload on social media or AI platforms, make sure you stay safe.”
Her post has since sparked debate about how AI tools can generate eerily accurate or intrusive details. Experts explain that systems like Gemini are trained on vast datasets and sometimes “hallucinate” features; when those hallucinations happen to match a person’s real traits, the results can feel invasive.
Google has equipped Gemini with safeguards like invisible watermarks and metadata to identify AI images, yet specialists caution that these steps do little to address broader privacy risks.