Sam Altman steps down from safety committee as OpenAI faces scrutiny

Former researchers have accused Altman of prioritizing OpenAI's corporate interests over genuine AI regulation.

By Storyboard18 | Sep 17, 2024 9:59 AM

OpenAI has announced that its CEO, Sam Altman, is stepping down from the Safety and Security Committee, an internal group established to oversee critical safety decisions for the company's projects. The committee will now function as an independent board oversight group, chaired by Carnegie Mellon professor Zico Kolter. Other members include Quora CEO Adam D'Angelo, retired U.S. Army General Paul Nakasone, and former Sony EVP Nicole Seligman, all of whom are also on OpenAI's board of directors, as per reports.

OpenAI's Safety and Security Committee, which recently conducted a safety review of the company's latest AI model, o1, will continue to operate independently following Altman's departure. The committee will receive regular updates from OpenAI's safety and security teams and retains the authority to delay model releases until safety concerns are fully addressed, as per reports.

The committee will additionally be briefed on technical assessments of current and future AI models and on ongoing post-release monitoring. The company is also implementing a new safety and security framework with specific success criteria for launching models.

Sam Altman's decision to step down from the Safety and Security Committee follows concerns raised by five U.S. senators in a letter to him this summer. Additionally, many OpenAI staff members who previously focused on AI safety have left the company, and former researchers have accused Altman of prioritizing OpenAI's corporate interests over genuine AI regulation.

In an op-ed for The Economist, former OpenAI board members Helen Toner and Tasha McCauley expressed concerns about the company's ability to hold itself accountable, citing the potential influence of profit incentives. With rumors of a new funding round that could value OpenAI at over $150 billion, the company's profit motives may be further amplified. To secure this funding, OpenAI might abandon its hybrid nonprofit structure, potentially compromising its commitment to developing artificial general intelligence that benefits all of humanity.
