Artificial intelligence pioneer Geoffrey Hinton, widely celebrated as the "Godfather of AI" and a 2024 Nobel Laureate in Physics, has issued a stark warning about the unchecked and rapidly accelerating development of AI, directly accusing major technology companies of publicly minimizing its profound dangers. Speaking on the "One Decision" podcast, Hinton asserted that while many corporate leaders are acutely aware of these risks, they are actively avoiding meaningful action.
"Many of the people in big companies, I think, are downplaying the risk publicly," Hinton stated, adding that this public stance often contradicts their private understanding of the perils. He specifically highlighted Demis Hassabis, CEO of Google DeepMind, as a rare exception, commending him as one of the few who "really do understand the risks and really want to do something about it."
Hinton, whose groundbreaking research on artificial neural networks paved the way for the current AI revolution, expressed deep concern over the unprecedented pace of advancement. "The rate at which they’ve started working now is way beyond what anybody expected," he lamented, emphasizing that advanced AI systems are now learning in ways humans don't fully comprehend. He also voiced a personal regret, admitting, "I should have realized much sooner what the eventual dangers were going to be. I always thought the future was far off and I wish I had thought about safety sooner."
Hinton Clarifies Google Exit, Cites Freedom to Speak

Hinton's departure from Google in 2023, after more than a decade with the tech giant, was widely perceived as a protest against its aggressive AI push. On the podcast, however, he pushed back on that narrative, dismissing it as a myth.
"There’s a wonderful story that the media loves this honest scientist who wanted to tell the truth so I had to leave Google. It’s a myth," Hinton explained. "I left Google because I was 75 and I couldn’t program effectively anymore but when I left, maybe I could talk about all these risks more freely." He further elaborated on the constraints of corporate employment, stating, "You can’t take their money and then not be influenced by what’s in their own interest," suggesting that staying would have necessitated a degree of self-censorship.
Hinton's praise for Demis Hassabis underscores a critical divide within the AI industry regarding safety. Hassabis, who sold DeepMind to Google in 2014 and now leads its prominent AI research division, has consistently voiced concerns about the potential misuse of advanced AI systems.
In a prior interview with CNN, Hassabis acknowledged his worries about AI, clarifying that his primary concern isn't job displacement but rather the catastrophic possibility of the technology falling into the wrong hands. "A bad actor could repurpose those same technologies for a harmful end," Hassabis told CNN's Anna Stewart. "And so one big thing is how do we restrict access to these systems, powerful systems, to bad actors but enable good actors to do many, many amazing things with it?" His stance resonates with Hinton's urgent call for greater accountability and proactive measures to mitigate AI's burgeoning risks.