Former OpenAI Board Member Accuses CEO Sam Altman of Repeated Dishonesty
ICARO Media Group
In a bombshell interview, former OpenAI non-profit board member Helen Toner has accused CEO Sam Altman of a pattern of dishonest behavior that eroded trust within the organization. Toner, an AI policy expert at Georgetown University, served on the board until resigning last year after her role in a failed attempt to oust Altman as CEO.
Toner rejected speculation that technological advancements were the cause of the coup, instead blaming Altman's repeated dishonesty for the breakdown in trust. She claimed that Altman withheld information, misrepresented what was happening at the company, and in some cases outright lied to the board, making it nearly impossible for members to fulfill their roles effectively. According to Toner, even the launch of ChatGPT, which caused a frenzy in the AI community when it debuted in November 2022, was kept secret from the board.
Altman allegedly brushed aside the board's concerns with convenient excuses, prolonging its inaction. Toner said this steadily destroyed the board's ability to trust Altman, ultimately leading to the decision to fire him. OpenAI did not provide a comment on Toner's accusations when approached by Fortune.
Tensions escalated further when Toner co-published a paper that praised Anthropic's approach to AI safety over OpenAI's. Altman allegedly responded by lying to other board members in an attempt to force Toner off the board, further damaging trust and reinforcing the case for his dismissal.
These accusations from Toner are not an isolated incident. Former senior AI safety researcher Jan Leike and Hollywood actress Scarlett Johansson have also raised concerns about Altman's behavior. Leike criticized management for failing to deliver promised resources to his AI guardrails team and subsequently joined Anthropic. Johansson accused OpenAI of using a voice closely resembling hers for ChatGPT's voice assistant without her consent, a claim the company disputes.
The departure of key AI safety staff members, including Leike, prompted OpenAI to disband the team entirely, consolidating control under Altman and his allies. Whether a management team focused on maximizing financial results can credibly oversee the implementation of AI guardrails remains a matter of debate.
Furthermore, it was revealed that an unusual nondisparagement clause, which could void an employee's vested equity, played a significant role in discouraging employees from speaking out. Former OpenAI safety researcher Daniel Kokotajlo voluntarily sacrificed his equity rather than be bound by the exit agreement, a move that Altman later acknowledged stemmed from an oversight.
Toner's recent remarks follow an op-ed she co-wrote with former OpenAI director Tasha McCauley in The Economist, in which they argued against self-regulation in the AI industry, contending that AI companies cannot withstand profit incentives. The evidence, according to Toner and McCauley, shows that no AI company can be trusted to effectively govern itself.
The allegations made by Toner, Leike, and Johansson raise significant concerns about the leadership and transparency within OpenAI. As the fallout from these revelations continues, the future of the organization and its approach to AI governance remain uncertain.