
OpenAI’s new safety board is mostly OpenAI executives

OpenAI has created a committee to assess AI safety risks after several staff resigned in protest.


ChatGPT developer OpenAI has announced the formation of a “Safety and Security Committee” to address AI’s long-term risks after the two scientists previously responsible for that effort resigned.

The move is unlikely to mollify critics, though: TechCrunch reported that the new committee is a who’s who of company insiders.

In May, co-founder and chief scientist Ilya Sutskever left OpenAI, shortly followed by safety researcher Jan Leike—both of whom helped lead a “superalignment” team responsible for creating ethical and practical safeguards in AI products. OpenAI reportedly disbanded that team, and Leike tweeted shortly after leaving the firm that its “safety culture and processes have taken a backseat to shiny products.” Other employees in policy or governance roles have left in recent months, according to Wired, and Quartz separately tallied a number of other departures among safety staff.

According to TechCrunch, the new committee includes OpenAI CEO Sam Altman, as well as three board members (Bret Taylor, Adam D’Angelo, and Nicole Seligman). It also includes chief scientist Jakub Pachocki and a number of other OpenAI executives who lead its teams on alignment science, preparedness, safety systems, and security.

OpenAI also said it would include third-party “safety, security, and technical experts” to assist the committee, though TechCrunch reported it has only disclosed the involvement of cybersecurity expert Rob Joyce and former Department of Justice official John Carlin.

OpenAI “hasn’t detailed the size or makeup of this outside expert group—nor has it shed light on the limits of the group’s power and influence over the committee,” TechCrunch reported.

OpenAI didn’t immediately respond to IT Brew’s request for comment.

According to the company announcement, the committee will have 90 days “to evaluate and further develop OpenAI’s processes and safeguards” while the company continues to train the next generation of its AI models. OpenAI also said the group will share its findings with the board, after which it will “publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

“We welcome a robust debate at this important moment,” the post stated.
