OpenAI forms a new safety team led by CEO Sam Altman and announces it’s testing a new AI model (maybe GPT-5)

OpenAI Assembles New Safety Team

OpenAI has taken a concrete step to strengthen the safety and security of its work. Following the dissolution of its superalignment team, the company has formed a new safety team to steer its handling of these critical concerns and to ensure that its models meet essential safety and security benchmarks.

The new team's formation follows the departure of several key figures from OpenAI, including members of the former superalignment team. Those exits came amid disagreements over product direction, specifically the balance between developing cutting-edge AI models and implementing robust safety measures. The new team is led by CEO Sam Altman and includes board members Adam D'Angelo and Nicole Seligman.

The team is tasked with evaluating and refining OpenAI's safety and security processes and will present its findings and recommendations to the board, with the aim of embedding safety considerations in the company's day-to-day operations.

Alongside these organizational shifts, OpenAI has confirmed that it is testing a new AI model. Details remain under wraps, prompting speculation in the tech community about the advances the model, possibly GPT-5, could bring.

A Shift in Focus?

The reorganization comes at a pivotal moment, shortly after the launch of GPT-4o, a faster model with native multimodal capabilities across text, audio, and images. The transition has been marked by the exit of influential leaders, including Jan Leike, who co-led the superalignment team. Leike, who joined OpenAI to pursue research on steering and controlling AI systems, left after clashing with the company's priorities, arguing that safety had taken a back seat to product development.

Leike's departure was not an isolated case but part of a wider wave of exits, suggesting broader unease among some employees about the company's direction and underscoring the tension between shipping breakthrough products and cultivating a robust safety culture.

Nonetheless, the formation of the new safety team signals OpenAI's intent to recalibrate and place renewed emphasis on safety protocols. The move reflects a maturing industry that increasingly recognizes the importance of safety and ethics in AI development.

Jan Leike, for his part, has found a new home for the superalignment mission: he has joined Anthropic, where he will continue work on scalable oversight and alignment research, a sign of ongoing efforts across the industry to keep AI aligned with human values and societal needs.
