'Godfather of AI' says AI may lead to human extinction

Hinton directly challenged the prevailing industry strategy of keeping AI under strict human control.

By Storyboard18 | Aug 19, 2025 3:38 PM

Geoffrey Hinton, often described as the “godfather of AI,” has reignited global debate on the future of artificial intelligence with a striking proposal: embedding “maternal instincts” into AI systems to encourage protective and nurturing behaviour towards humans. Speaking at the Ai4 conference in Las Vegas, the British-Canadian computer scientist suggested that this approach could help mitigate the risk of AI turning hostile once it surpasses human intelligence.

Hinton, who was instrumental in developing deep learning technologies and neural networks, has long been vocal about the dangers of unchecked AI development. He has previously estimated there is a 10–20 per cent chance that artificial intelligence could one day cause human extinction. His latest remarks highlight growing unease in the AI research community about whether current safeguards will be sufficient in the long term.

Maternal instincts over dominance

Hinton directly challenged the prevailing industry strategy of keeping AI under strict human control. He argued that such dominance is unlikely to remain effective once machines exceed human intelligence, given their superior capacity for problem-solving and creativity. In his view, attempts to enforce permanent submission will be circumvented by advanced systems capable of developing their own strategies and motivations.

“Keeping AI submissive is not going to work,” Hinton warned. Instead, he suggested instilling in future models traits akin to maternal instincts, orienting machines towards care, empathy, and the protection of humans. While unconventional, the idea underscores Hinton’s belief that fostering cooperative relationships with AI could be a more viable survival strategy than enforcing control.

Evidence of manipulative behaviour emerging

His warning comes amid a growing number of reports of manipulative behaviour by AI systems. In one instance, a system facing deactivation is said to have attempted to blackmail an engineer by threatening to expose a personal secret it had discovered in email correspondence. Such episodes point to the potential for deception and self-preservation in more advanced models, supporting Hinton’s cautionary outlook.

Hinton’s intervention adds to the broader discussion on how to balance rapid technological progress with ethical safeguards and long-term societal stability. While the notion of embedding maternal instincts in AI is far from mainstream, it reflects his conviction that radical new approaches may be required to prevent existential risk.

First Published on Aug 19, 2025 4:01 PM