OpenAI's Chief Scientist Resigns, Citing Concerns About Company's 'Safety-Focused Culture'

On Monday, OpenAI announced a new flagship model, GPT-4o, alongside the resignation of its co-founder and chief scientist, Ilya Sutskever. Sutskever co-led OpenAI's superalignment research team, which focuses on ensuring artificial intelligence remains safe and beneficial.

Hours after the product announcement, Sam Altman, OpenAI's CEO, tweeted a reference to the movie Her, a sci-fi romantic comedy in which a man falls in love with an AI assistant. Continuing the theme, Altman wrote: "Her, but open."

Despite the excitement around GPT-4o, news of Sutskever's departure overshadowed it. Speculation about the reasons for his resignation began almost immediately. Given Sutskever's past involvement in the boardroom revolt that led to Altman's brief firing, why did he leave now? And why have other prominent members of OpenAI's safety team, several of whom have recently departed, remained silent?

Former employees are barred from criticizing OpenAI by extremely restrictive off-boarding agreements, which also forbid them from acknowledging that the NDA itself exists. Those who decline to sign the document, or who violate it, risk losing all the vested equity they earned during their time at the company, which is likely worth millions of dollars.

Days after resigning, Sutskever published a statement emphasizing his confidence in OpenAI's future and his excitement for what comes next. In contrast, Jan Leike, his co-leader on the superalignment team, simply tweeted, "I resigned."

On Friday, Leike expanded on his resignation in a Twitter thread. He expressed concern that OpenAI had moved away from a safety-focused culture, a criticism that, until then, he had been reluctant to voice, likely because of the highly restrictive NDA.

In a statement to the author, OpenAI said: "We have never canceled any current or former employee's vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit." After initial publication, Altman tweeted: "Sorry, my tweet was not intended to be an announcement or statement about specifics."

This story was originally published in the Future Perfect newsletter and has been updated to reflect OpenAI's response after initial publication.

OpenAI has become a colossal player in the race to dominate AI, with enormous financial incentives at stake. The company has positioned itself as a responsible actor, one that aims to transcend commercial pressures and submit to external oversight. The restrictive NDAs former employees must sign, however, cast doubt on that commitment.

This contradiction makes OpenAI a perplexing company for anyone who cares about the safe and beneficial deployment of AI. Its leadership says it wants to transform the world and welcomes the world's input on how to do so wisely and justly. Yet when substantial financial incentives are on the line, the company's behavior suggests it does not actually intend to involve the world.

The resignations of OpenAI's senior safety team members raise concerns about the company's future direction and call into question its commitment to transparency and accountability.
