Resignations at OpenAI's Superalignment Team Raise Concerns Over Company's AI Development Focus
ICARO Media Group
Several members of OpenAI's Superalignment team, responsible for governing and steering "superintelligent" AI systems, have resigned this week, citing grievances that include being denied access to computing resources the company had promised them. Among the departures is co-lead Jan Leike, a former DeepMind researcher who played a significant role in the development of ChatGPT, GPT-4, and InstructGPT.
Leike publicly stated his reasons for resigning, expressing disagreements with OpenAI leadership about the company's core priorities. He emphasized the importance of dedicating more resources towards preparing for the next generations of AI models, focusing on areas such as security, monitoring, preparedness, safety, adversarial robustness, alignment, confidentiality, and societal impact. Leike expressed concern that OpenAI was not on a trajectory to adequately address these critical challenges.
The Superalignment team, co-led by Leike and OpenAI co-founder and chief scientist Ilya Sutskever, aimed to solve the core technical challenges of controlling superintelligent AI within four years. Staffed by scientists and engineers from OpenAI's former alignment division along with researchers from other teams across the company, the group conducted safety research intended to benefit both OpenAI's own models and those built by other labs. It also ran a grant program that funded outside researchers working on alignment.
Despite publishing safety research and awarding substantial grants, the Superalignment team found itself having to fight for the upfront investments it had been promised as product launches came to dominate the bandwidth of OpenAI's leadership. Leike lamented that safety culture and processes had taken a backseat to the pursuit of "shiny products."
The departure of Sutskever, who had served as a crucial bridge between the Superalignment team and other divisions within OpenAI, compounded the team's challenges. Sutskever's earlier conflict with CEO Sam Altman had roiled the company, leading to Altman's temporary removal and the subsequent resignation of several board members.
Following the resignations of Leike and Sutskever, John Schulman, another OpenAI co-founder, has taken over the work previously handled by the Superalignment team. The team will no longer exist as a separate entity, however; its responsibilities are instead being dispersed across various divisions within the company.
The restructuring has raised concerns that OpenAI's AI development will be less safety-focused than originally intended. With dedicated team members gone and safety work spread across the organization, the fear is that critical safety considerations will not receive the attention they need as OpenAI pursues further AI advancement.
OpenAI has yet to comment on the matter, particularly on the computing resources promised to and withheld from the Superalignment team. The company's future direction on "superintelligent" AI remains uncertain in light of the recent departures and organizational changes.