
OpenAI creates Safety and Security Committee

After a tumultuous week marked by the departures of Ilya Sutskever and other key members of its superalignment team, which was responsible for ensuring that AI is developed safely, OpenAI announced a new Safety and Security Committee.

Members are:

Aleksander Madry (Head of Preparedness), the Cadence Design Systems Professor of Computing in the MIT EECS Department, a member of CSAIL and the MIT Center for Deployable Machine Learning, and a Faculty Co-Lead of the MIT AI Policy Forum.

Lilian Weng (Head of Safety Systems)

John Schulman (Head of Alignment Science), co-founder of OpenAI and co-leader of “the post-training team, where we fine-tune the models that get deployed in ChatGPT and the OpenAI API.”

Matt Knight (Head of Security), and

Jakub Pachocki (Chief Scientist).

“Additionally, OpenAI will retain and consult with other safety, security, and technical experts to support this work, including former cybersecurity officials,” Rob Joyce, who advises OpenAI on security, and John Carlin, a partner at Paul Weiss, co-chair of Paul Weiss’s Cybersecurity & Data Protection practice, and chair of its National Security practice.
