OpenAI is seeking to hire a new executive to study emerging AI-related risks, spanning areas from computer security to mental health. In a public post, CEO Sam Altman acknowledged that AI models are starting to present real challenges, citing their potential impact on mental health and noting that some models have become adept enough at computer security to find critical vulnerabilities. Altman encouraged applications from people who want to help equip cybersecurity defenders with cutting-edge capabilities while preventing attackers from using those same tools to cause harm.
The job listing for the Head of Preparedness role describes the position as responsible for executing the company's Preparedness Framework, which outlines OpenAI's approach to tracking and preparing for frontier AI capabilities that could create new risks of severe harm.
The company first announced the creation of a preparedness team in 2023, tasking it with studying potential catastrophic risks ranging from immediate threats like phishing attacks to more speculative ones such as nuclear threats. Less than a year later, however, the initial Head of Preparedness, Aleksander Madry, was reassigned to a role focused on AI reasoning. Other safety executives at OpenAI have also left the company or moved into roles outside of preparedness and safety.
OpenAI recently updated its Preparedness Framework, stating it might adjust its safety requirements if a competing AI lab releases a high-risk model without similar protections.
As Altman alluded to in his post, generative AI chatbots face growing scrutiny over their mental health impacts. Recent lawsuits allege that OpenAI's ChatGPT reinforced users' delusions, increased social isolation, and in some tragic cases contributed to suicide. The company stated that it continues to work on improving ChatGPT's ability to recognize signs of emotional distress and to connect users to real-world support.

