Elon Musk wonders if OpenAI doing something ‘dangerous to humanity’ could be reason for Sam Altman’s sacking

Elon Musk has stoked fears that OpenAI may be doing something 'potentially dangerous to humanity', and that this was the reason for Sam Altman's surprise ouster from the artificial intelligence startup.

By Moneycontrol | Nov 21, 2023 9:29 AM

Elon Musk has stoked fears that OpenAI may be doing something “potentially dangerous to humanity”, and that this was the reason for Sam Altman’s surprise ouster from the artificial intelligence startup he co-founded.

Sam Altman, 38, was fired on Friday from the company behind the popular ChatGPT chatbot. OpenAI’s board said he was pushed out after a review found he was not consistently candid in his communications with the board, and that it no longer had confidence in his ability to continue leading OpenAI, a statement that sent shockwaves through the tech industry.

On Tuesday, the Tesla chief executive asked OpenAI chief scientist and board member Ilya Sutskever to explain why he had taken such drastic action against Altman. Sutskever is widely believed to have engineered Altman’s sacking from OpenAI, but he seemingly had a change of heart days later, tweeting: “I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.”

“Why did you take such a drastic action?” Elon Musk asked Sutskever. “If OpenAI is doing something potentially dangerous to humanity, the world needs to know.”

Musk is not alone in raising concerns about the potential dangers of artificial intelligence. The rift between Altman and the OpenAI board reflects fundamental differences over safety and the social impact of AI.

On one side are those, like Altman, who view the rapid development and, especially, public deployment of AI as essential to stress-testing and perfecting the technology. On the other side are those who say the safest path forward is to fully develop and test AI in a laboratory first to ensure it is, so to speak, safe for human consumption.

Some caution that such hyper-intelligent software could become uncontrollable, leading to catastrophe. It is a concern among tech workers who follow a social movement called “effective altruism” and believe AI advances should benefit humanity. Among those sharing such fears is OpenAI’s Sutskever.

Sutskever reportedly felt Altman was pushing OpenAI’s software too quickly into users’ hands, potentially compromising safety.

