Google is bringing its generative AI capabilities directly into two of its most-used platforms — Search and Lens. The tech giant has begun rolling out an update powered by its Gemini Nano Banana model, allowing users to create, edit and transform images without leaving these apps.
The new functionality is part of Google’s broader effort to make AI creation a native, seamless experience across its ecosystem, combining creativity, search and productivity in one place. The feature runs on the Gemini 2.5 Flash Image model (nicknamed Nano Banana), designed for fast, high-quality visual generation.
What’s new in Google Search
Users accessing AI Mode in Google Search will now see a new “Create Images” option, represented by a banana emoji 🍌. Here’s how to use it:
Open Google Search and switch to AI Mode.
Tap the plus (+) icon in the bottom-left corner of the prompt bar.
Choose “Create Images” from the list, alongside Gallery and Camera options.
Type in a description of the image you want to visualise — or upload an existing photo for AI-based edits.
Once processed, you can download or share the results directly.
Each generated image includes a Gemini spark watermark in the bottom-right corner — Google’s visual signature that marks content as AI-generated, for provenance and transparency.
What’s new in Google Lens
Google Lens is also getting a major AI upgrade. A new “Create” tab, marked with the same banana icon, now appears within the app. It expands Lens’s functionality beyond object recognition and search to include creative image generation.
How to use it:
Open the Lens app and select the Create tab.
The front-facing camera will open by default — you can take a selfie or capture a live image.
Describe how you want to modify it: change the background, apply an artistic effect, or reimagine the style.
The AI instantly processes and displays the results.
The redesigned interface now features wider layouts, repositioned labels, and additional filters for a more intuitive editing experience.
Availability and rollout
The feature is currently available for Android users in the United States through the AI Mode Search Lab. Google plans to expand global access in phases, with multilingual support to follow.
This rollout highlights Google’s strategy to embed generative AI across everyday tools, creating a connected environment where Search, Lens and Gemini work in sync.
By integrating the Gemini Nano Banana model into Search and Lens, Google is effectively merging creativity with utility. Users can now generate and modify visuals on the go — without relying on separate design tools or external apps. It’s a move that not only simplifies digital creation but also positions Google as a frontrunner in responsible, accessible AI experiences built directly into its core products.