Google's approach to protect users from the risks of AI generated media

Google is partnering with the government to continue the dialogue on AI. The partnership includes Google’s upcoming engagement on the Global Partnership on Artificial Intelligence (GPAI) Summit.

By Storyboard18 | Dec 19, 2023 9:34 AM
(Representational image by Jeffrey Ho via Unsplash)

For more than two decades, Google has worked with machine learning and AI to make its products more helpful. In India, AI has allowed Google to enable language translations at scale, do precise flood forecasts and foster improved agricultural productivity.


Google is advancing AI responsibly by striking a balance between maximizing its positive impact and addressing its potential risks. Google is anticipating and testing for a wide range of safety and security risks, including the rise of new forms of AI-generated, photo-realistic synthetic audio and video known as "synthetic media". While this technology has useful applications - for instance, opening new possibilities for people affected by speech or reading impairments, and new creative ground for artists and movie studios around the world - it raises concerns when used in disinformation campaigns and for other malicious purposes, such as deep fakes. The potential to spread false narratives and manipulated content can have serious negative implications.

Providing additional context for generative AI outputs

Google is helping users identify AI-generated content and empowering people to know when they're interacting with AI-generated media. This is why we've added "About this result" to generative AI in Google Search to help people evaluate the information they find in the experience. We also introduced new ways to help people double-check the responses they see in Google Bard by grounding them in Google Search.

Context is equally important with images, and we're committed to making sure every image generated through our products carries metadata labeling and embedded watermarking with SynthID. SynthID is currently being released to a limited number of Vertex AI customers using Imagen, one of our latest text-to-image models, which creates photorealistic images from input text.
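SynthID's actual watermarking algorithm is not public; as a purely illustrative toy, the general idea of an imperceptible, machine-detectable signal can be sketched by hiding a known bit pattern in the least-significant bits of pixel values (a classic LSB scheme, far simpler and far less robust than a production watermark):

```python
# Toy LSB watermarking sketch - NOT SynthID's method, just an
# illustration of an imperceptible but machine-detectable signal.
WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed(pixels):
    """Overwrite the least-significant bit of the first 8 pixels."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & ~1) | bit  # pixel value changes by at most 1
    return out

def detect(pixels):
    """Return True if the signature bits are present."""
    return [p & 1 for p in pixels[:len(WATERMARK)]] == WATERMARK

original = [200, 201, 202, 203, 204, 205, 206, 207, 50, 51]
marked = embed(original)
print(detect(marked))    # True
print(detect(original))  # False for these pixel values
```

Because each pixel changes by at most one intensity level, the mark is invisible to a viewer but trivially checkable by software - the property that makes watermarking useful for labeling AI-generated images.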

In the coming months, YouTube will require creators to disclose realistic altered or synthetic content, including content made with AI tools, and Google will inform viewers about such content through labels in the description panel and video player.

Implementing guardrails and safeguards to address AI misuse

In the coming months, on YouTube, Google aims to make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process.

Google has a prohibited use policy for new AI releases outlining the harmful, inappropriate, misleading or illegal content we do not allow, based on early identification of harms during the research, development, and ethics review process for our products.

Furthermore, Google has recently updated its election advertising policies to require advertisers to disclose when their election ads include material that’s been digitally altered or generated. This will help provide additional context to people seeing election advertising on its platforms.

Google also has long-standing policies, across our products and services, that are applicable to content created by generative AI. For instance, as part of its misrepresentation policy for Google Ads, Google prohibits the use of manipulated media, deep fakes and other forms of doctored content meant to deceive, defraud, or mislead users.

The policies for Search features like Knowledge Panels and Featured Snippets prohibit audio, video, or image content that's been manipulated to deceive, defraud, or mislead. And on Google Play, apps that generate content using AI have always had to comply with all Google Play Developer Policies - this includes prohibiting and preventing the generation of restricted content and content that enables deceptive behavior.

Combating deep fakes and AI-generated misinformation

On YouTube, Google uses a combination of people and machine-learning technology to enforce its Community Guidelines, with reviewers operating around the world. AI classifiers help detect potentially violative content at scale, and reviewers work to confirm whether content has actually crossed policy lines. AI is helping to continuously increase both the speed and accuracy of these content-moderation systems.

Google has also invested USD 1 million in grants to the Indian Institute of Technology, Madras, to establish a first-of-its-kind multidisciplinary center for Responsible AI. The center will foster a collective effort - involving not just researchers but domain experts, developers, community members, policy makers and more - in getting AI right and localizing it to the Indian context.
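The two-stage moderation flow described above - automated classifiers triaging content at scale, with humans making the final policy call - can be sketched as follows. The thresholds, labels, and function names here are invented for illustration and are not real YouTube values:

```python
# Hypothetical sketch of classifier-plus-human-review triage.
# All thresholds and labels are assumptions, not real YouTube values.
REVIEW_THRESHOLD = 0.7   # assumed cutoff for routing to a human
REMOVE_THRESHOLD = 0.95  # assumed cutoff for priority handling

def triage(item_id: str, classifier_score: float) -> str:
    """Route one item based on its automated classifier score."""
    if classifier_score >= REMOVE_THRESHOLD:
        return f"{item_id}: priority queue for human confirmation"
    if classifier_score >= REVIEW_THRESHOLD:
        return f"{item_id}: queued for human review"
    return f"{item_id}: no action"

print(triage("vid123", 0.98))  # vid123: priority queue for human confirmation
print(triage("vid456", 0.80))  # vid456: queued for human review
print(triage("vid789", 0.10))  # vid789: no action
```

The design point is that the classifier never removes content on its own in this sketch; it only decides how urgently a human reviewer sees the item, which matches the article's description of reviewers confirming whether content actually crossed policy lines.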

First Published on Dec 19, 2023 9:29 AM
