Artificial intelligence is advancing quickly, and concerns about its safety are growing alongside it. Amid ongoing debate over how to manage these risks, OpenAI has announced a new Safety Fellowship Programme aimed at supporting researchers working to reduce risks linked to advanced AI models. The initiative focuses on independent research in areas such as system reliability, misuse prevention, and ethical AI development.
What is the OpenAI Safety Fellowship?
The OpenAI Safety Fellowship is a research-focused programme for individuals interested in understanding how AI systems behave and how associated risks can be managed. The fellowship will run from September 14, 2026, to February 5, 2027, allowing selected participants to work on independent AI safety projects.
Participants are expected to produce a substantial research output, such as a paper, dataset, or benchmark. The work is conducted using external tools like open-source models and public APIs rather than internal systems. This ensures a practical approach while maintaining research independence.
The fellowship involves studying AI behaviour under different conditions to identify weaknesses. The aim is to improve monitoring, evaluation, and control systems so that AI models remain predictable and aligned with expected outcomes. This includes developing tools to test responses, reduce harmful outputs, and ensure consistent performance. Fellows also receive funding, compute resources, and technical guidance.
Who can apply for the fellowship?
The programme is open to researchers, engineers, and professionals from technical backgrounds interested in AI safety. Applicants should have strong programming skills, particularly in Python, along with problem-solving ability.
A PhD or prior machine learning experience is not mandatory. Candidates from fields such as mathematics, physics, computer science, and cybersecurity can apply if they have strong analytical skills. The fellowship is suited to individuals looking to move into full-time AI safety research, particularly those comfortable working in fast-paced, uncertain environments.
How to apply and why it matters
Applications for the fellowship are currently open and will close on May 3. Interested candidates need to complete an online application form, after which OpenAI will review submissions and notify selected candidates by July 25.
The programme reflects a growing focus on AI safety as systems become more advanced. By supporting independent researchers, OpenAI aims to encourage collaboration and develop solutions that improve the reliability and safe use of artificial intelligence technologies.
