OpenAI, the artificial intelligence research laboratory, has announced the formation of a new team called Preparedness, which will be responsible for assessing and guarding against the catastrophic risks posed by advanced AI.
“To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge,” the company announced in a blog post.
The team will be led by Aleksander Madry, a world-renowned AI expert who joined OpenAI in May 2023. Madry is the director of the Center for Deployable Machine Learning at MIT, and he has published extensively on the safety and security of AI systems.
In the same blog post, OpenAI said that the company is “committed to developing AI in a safe and beneficial way.” At the same time, it acknowledged that advanced AI carries real risks, which it says it takes seriously.
To reduce the potential dangers as AI models advance, OpenAI is forming the Preparedness team, which will closely link capability assessment, evaluations, and internal testing for cutting-edge models, from those the company develops in the near future to those with capabilities approaching Artificial General Intelligence (AGI). The team will help track, evaluate, forecast and protect against catastrophic risks spanning multiple categories, including individualised persuasion; cybersecurity; chemical, biological, radiological and nuclear (CBRN) threats; and autonomous replication and adaptation (ARA).
The Preparedness team’s mission also includes developing and maintaining a Risk-Informed Development Policy (RDP), a set of guidelines and strategies outlining how OpenAI evaluates and monitors advanced AI models, along with protective measures and a structure for accountability throughout the development process.