Explainer | What is AI superintelligence, and why are billionaires and business moguls calling for its ban?

AI superintelligence refers to a future form of AI surpassing human intelligence in all domains. Global leaders, including Wozniak, Branson, Hinton, and Prince Harry, are urging a ban, warning of existential, economic, and security risks without proper safeguards.

By Storyboard18 | Oct 22, 2025 5:10 PM

A growing chorus of the world’s most influential figures, from AI pioneers Geoffrey Hinton and Yoshua Bengio to billionaires Steve Wozniak and Richard Branson, and even Prince Harry and Meghan Markle, is calling for a global ban on the development of “AI superintelligence.”

The call, issued through an open letter organized by the Future of Life Institute (FLI), marks a rare show of unity across science, business, and politics, and stems from fears that humanity may be on the brink of creating a technology it cannot control.

What is AI Superintelligence?

Artificial Superintelligence (ASI) refers to a theoretical form of AI that would surpass human intelligence across virtually all domains — reasoning, creativity, problem-solving, planning, and even emotional understanding. Unlike today’s “narrow AI” such as ChatGPT, which excels at specific tasks, or even the more ambitious Artificial General Intelligence (AGI), superintelligence would be capable of self-improvement, learning at exponential speeds, and outperforming human cognition altogether.

In essence, it would be an AI system smarter than the smartest human on Earth. The concept, first popularized by philosopher Nick Bostrom in his book Superintelligence: Paths, Dangers, Strategies, envisions a moment where AI achieves runaway growth, improving itself faster than humans can comprehend or intervene.

Why are Billionaires and Business Leaders Worried?

The open letter, signed by over 850 leaders including Apple’s Steve Wozniak, Virgin’s Richard Branson, and AI “godfathers” Geoffrey Hinton and Yoshua Bengio, argues that unchecked pursuit of superintelligence could threaten civilization itself.

Their warning is stark: superintelligence could lead to “human economic obsolescence, loss of freedom and civil liberties, and even potential human extinction.” The concern is not just about rogue machines, but also about how humans might use or misuse such power.

1. Existential Risk

Once a superintelligent system exists, it could act autonomously and unpredictably. If its objectives diverge from human interests, even slightly, it could make decisions with irreversible consequences. Geoffrey Hinton, often called the “Godfather of AI,” left Google in 2023 to speak freely about these dangers, warning that humanity could “lose control over intelligent systems.”

2. Economic and Social Disruption

Superintelligence could replace not just jobs but entire industries, making human labor obsolete. Tech leaders fear a world where wealth and power concentrate around those who control AI systems, deepening inequality and eroding democratic institutions.

3. National Security and Weaponization

AI systems capable of strategic reasoning could be deployed for cyberwarfare, disinformation, or autonomous weapons, with nations entering an arms race for dominance in artificial intelligence.

4. Lack of Regulation and Oversight

Despite these risks, there are currently no international frameworks governing the development of superintelligent systems. With companies like OpenAI and Meta openly pursuing “digital superintelligence,” the FLI signatories warn that market competition could override caution.

The FLI’s 30-word statement is concise but powerful: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

Rather than a permanent ban, experts like Professor Stuart Russell of UC Berkeley describe it as a moratorium until humanity understands the implications. “This is not a ban in the usual sense,” he said. “It’s simply a proposal to require adequate safety measures for a technology that could cause human extinction.”

As Prince Harry put it: “The true test of progress will be not how fast we move, but how wisely we steer.”

