AI boosts online political discourse quality, study finds, but caveats remain

AI models have faced scrutiny for inherent biases, political and even racial, and for operating as 'black boxes' whose internal processes are untraceable.

By Storyboard18 | Jul 28, 2025 8:47 AM

Can artificial intelligence make online political debates more civil? New research suggests it can. A study published in the journal Science Advances found that AI-generated, polite, and evidence-based responses significantly improved the quality of online political conversations and increased individuals' openness to differing viewpoints, though it didn't alter their core political beliefs.

Researchers used a large language model (LLM), an AI system trained on vast amounts of text data, to respond to political posts from participants in the US and UK. When the AI provided polite, fact-based counterarguments, it nearly doubled the chances of a high-quality online conversation and substantially boosted participants' receptiveness to alternative perspectives.

"An evidence-based counterargument (relative to an emotion-based response) increases the probability of eliciting a high-quality response by six percentage points, indicating willingness to compromise by five percentage points, and being respectful by nine percentage points," the authors noted in their study.

Gregory Eady, an associate professor of political science and data science at the University of Copenhagen, Denmark, highlighted the potential for LLMs to offer "light-touch suggestions" such as alerting users to disrespectful tones in their posts. He suggested AI could also be integrated into school curricula to teach best practices for discussing contentious topics.

The study, involving nearly 3,000 participants (Republicans and Democrats in the US; Conservative and Labour supporters in the UK), had them create social media-style posts on political issues. A ChatGPT-powered "fictitious social media user" then tailored counterarguments based on their positions, to which participants responded.

While promising, experts caution against over-reliance on AI for regulating online discourse. Hansika Kapoor, a researcher at Mumbai's Monk Prayogshala, an independent not-for-profit academic research institute, emphasized that context, culture, and timing are crucial. She noted that the study itself used human raters to evaluate responses.

AI models have faced scrutiny for inherent biases, political and even racial, and for operating as 'black boxes' whose internal processes are untraceable. Eady expressed apprehension about "using LLMs to regulate online political discussions in more heavy-handed ways."

The researchers also acknowledged that the two-party systems in the US and UK simplified addressing partisan texts. Eady warned that the ability of LLMs to moderate discussions "might also vary substantially across cultures and languages, such as in India," where numerous political affiliations exist.

Kapoor echoed this, stating that applying this strategy in India would likely require "some trial-and-error" due to the nation's diverse political landscape and varied issues, including "food politics."

In related findings, another study, published in the journal Humanities and Social Sciences Communications, indicated that dark personality traits (like psychopathy and narcissism), a fear of missing out (FoMO), and cognitive ability can influence online political engagement. This research, based on data from the US and seven Asian countries including China, Indonesia, and Malaysia, suggested that individuals with both high psychopathy and low cognitive ability are the most active in online political engagement.

