While YouTube treats election content no differently from any other content it works to keep safe, the video sharing and social media platform is taking a thoughtful approach to monitoring this election season.
Speaking on balancing AI with credibility and content moderation, especially in an election year, Timothy Katz, Director and Global Head of Responsibility at YouTube, said, “AI is integral to various facets of our operations at YouTube. We have been using AI in areas such as content recommendations and content moderation, working in tandem with our human reviewers for a long time. While AI is foundational to YouTube, we acknowledge a significant shift with the rise of generative AI, lowering the barriers to content creation. Responsible AI usage is crucial.”
“In an election year, especially with the decreased barriers to content creation, our focus is on a thoughtful approach to monitoring. It's crucial to note that our regular content policies and community guidelines apply universally to all uploaded content,” he added.
According to Katz, when it comes to identifying content, YouTube employs two essential strategies.
“While the sheer scale of our operations necessitates leveraging machine learning and AI for the swift identification and action against content violations, there are instances where human review becomes imperative. With thousands of human reviewers meticulously assessing content, we ensure adherence to our policies and discern whether the content is violative,” he said.
Katz also explained the significance of this approach.
“Consider a scenario during an election where a video containing misinformation about the voting process is uploaded. Through the implemented process, we swiftly remove such misleading content. However, if a news organisation were to cover this occurrence, providing contextual information, we would permit the content to remain on our platform. This nuanced approach reflects our commitment to addressing misinformation while preserving the space for thoughtful reporting and user information,” Katz explained.
As per the Google Transparency Report, between April and June 2023, YouTube removed 2,072,210 videos in India for content violation.
Katz also spoke about the “four Rs” framework (remove, raise, reduce, reward) that the platform follows to ensure the quality and credibility of content on YouTube.
“Every uploaded video undergoes scrutiny, beginning with a check against our content policies. Non-compliant content is promptly removed, while compliant content progresses through the framework. For topics of significance, such as news, health, and finance, we prioritise raising up high-quality, credible sources during information-seeking moments. This focus ensures that our users receive recommendations and content from the most reliable sources, especially in sensitive contexts like elections,” Katz said.
The third component, reduce, addresses content that, while not explicitly violating policies, may lack high quality. In such cases, YouTube’s aim is to minimise the promotion and recommendation of borderline content.
Lastly, the platform aims to reward trusted partners and creators with ads, helping them build financially sustainable businesses.