Even after facing warnings from the US government, OpenAI appears to be pushing past its own safety guardrails. As reported by the Washington Post, three anonymous OpenAI employees claimed that the company rushed through safety testing of GPT-4o.
The report further highlighted that, in order to meet the GPT-4o launch deadline, the model did not go through the company's new testing protocol.
The loopholes
The new testing protocol was designed to “prevent the AI system from causing catastrophic harm”, but was reportedly sped through to meet a May launch date set by OpenAI’s leaders. OpenAI even threw a launch party before testing was complete.
The new protocols were designed to ensure that AI models do not provide harmful information, such as how to build chemical, biological, radiological, and nuclear (CBRN) weapons, or assist in carrying out cyberattacks.
Reportedly, three anonymous OpenAI employees wrote a letter stating that OpenAI had failed to uphold its own safety safeguards. One employee said, “OpenAI planned the launch after-party prior to knowing if it was safe to launch.”
The letter calls for government intervention and regulatory mechanisms, as well as strong whistleblower protections from employers. In addition, two of the three ‘godfathers of AI’, Geoffrey Hinton and Yoshua Bengio, have endorsed the open letter.
However, in response to the allegations, OpenAI spokesperson Lindsey Held said that the company ‘didn’t cut corners’ on its safety process. She further claimed that OpenAI had stationed human evaluators in different cities, ready to run tests, a process that ‘costs hundreds of thousands of dollars.’
The safety road ahead
According to sources, this is not the first time OpenAI has been accused of rushing safety and security protocols. Last month, several former and current staff members of OpenAI and Google DeepMind signed an open letter expressing concerns over the lack of oversight in building new AI systems that could pose major risks.
Former OpenAI employee William Saunders also shared safety concerns in an interview with Alex Kantrowitz, comparing OpenAI’s current trajectory to that of the Titanic, a ship that itself emerged from a competitive race between companies to keep building bigger vessels.
In May, OpenAI announced the creation of a new Safety and Security Committee, tasked with evaluating and further developing the AI firm’s processes and safeguards on “critical safety and security decisions for OpenAI projects and operations.” The latest allegations, however, suggest the committee has fallen short.