Proximity of global AI regulation: Assessing AI safety measures


Organisations need to adopt best practices that can safeguard them from AI-related risks

By Sanjay Kukreja

While Artificial Intelligence (AI) was merely a buzzword a few years ago, a spate of AI technologies is now transforming industries and reshaping the future of work. AI has been instrumental in improving quality control standards, introducing more accurate forecasting techniques, facilitating personalised consumer experiences, and even creating highly realistic content through Generative AI (GenAI). However, despite the huge benefits offered by AI, there are real risks and unintended consequences that internet users and organisations need to contend with. These include the use of AI platforms to violate privacy rights, spread misinformation, commit fraud and cause other forms of disruption that can severely undermine societal structures.

Stanford University's Center for Research on Foundation Models launched a new index that tracks the transparency of 10 major AI companies, including OpenAI, Google, and Anthropic. The researchers graded each company's flagship model on whether its developers publicly disclosed 100 different pieces of information, such as what data it was trained on, the wages paid to the data and content-moderation workers involved in its development, and when the model should not be used. One point was awarded for each disclosure. Among the 10 companies, the highest-scoring barely exceeded 50 of the 100 possible points, and the average was 37. Every company, in other words, gets a resounding F.

For organisations adopting AI, the challenge therefore lies in fostering innovation while also incorporating safety measures into their systems. In this context, let us look at design principles that can help companies achieve this balance, even as governments across the globe inch closer towards setting up a common AI regulatory framework.
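To make the scoring mechanics concrete, here is a minimal Python sketch of a one-point-per-disclosure transparency score; the indicator names and disclosure values are hypothetical placeholders, not the actual index data.

```python
# Minimal sketch of a disclosure-based transparency score: one point per
# publicly disclosed indicator, out of a fixed set of 100 indicators.
# Indicator names and disclosure values are hypothetical examples, not the
# actual Stanford Foundation Model Transparency Index data.

INDICATORS = [f"indicator_{i:03d}" for i in range(100)]  # placeholder names

def transparency_score(disclosures: dict[str, bool]) -> int:
    """Count how many of the 100 indicators a company publicly disclosed."""
    return sum(1 for name in INDICATORS if disclosures.get(name, False))

# Hypothetical example: a company that disclosed the first 37 indicators.
example = {name: (i < 37) for i, name in enumerate(INDICATORS)}
print(transparency_score(example))  # -> 37, mirroring the reported average
```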
Building safety into AI systems with rigorous testing
As with any revolutionary technology, AI-powered tools can be misused in the hands of bad actors, which makes it necessary for developers to integrate safety features into AI systems. This involves conducting rigorous testing on AI models and subjecting them to safety evaluations before releasing them to the public. It isn't unusual for firms to spend as much as six months at the testing stage, ironing out flaws in the output model and improving its behaviour through reinforcement learning from human feedback. Additionally, it is very important to incorporate age verification controls in AI tools, mainly to protect children from consuming or generating hateful, violent or adult content. Ensuring that AI models do not respond to or create unlawful content is thus extremely important, both from a safety and a societal perspective. Supplementing these efforts with a robust system that monitors, reviews and reports abuse to the concerned authorities is highly recommended.
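As a rough illustration of what such a monitoring layer might look like, the sketch below assumes a hypothetical pre-response safety gate that refuses flagged prompt categories, enforces an age check, and logs refusals for human review; the category labels and function names are illustrative assumptions, not any vendor's actual API.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("safety_gate")

# Hypothetical categories that this sketch refuses outright; a real system
# would rely on a trained content classifier rather than hand-written labels.
BLOCKED_CATEGORIES = {"hateful", "violent", "adult", "unlawful"}
AGE_RESTRICTED = "age_restricted"

@dataclass
class Request:
    user_id: str
    age_verified: bool
    category: str  # assumed to come from an upstream moderation classifier

def safety_gate(req: Request) -> bool:
    """Return True if the request may reach the model, False if it is refused."""
    if req.category in BLOCKED_CATEGORIES:
        # Refusals are logged so they can be reviewed and, where required,
        # reported to the concerned authorities.
        log.warning("Refused %s request from user %s", req.category, req.user_id)
        return False
    if req.category == AGE_RESTRICTED and not req.age_verified:
        log.info("Blocked age-restricted request from unverified user %s", req.user_id)
        return False
    return True

print(safety_gate(Request("u1", True, "benign")))    # True
print(safety_gate(Request("u2", False, "violent")))  # False
```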
Learning from real-world use to improve internal safeguards
While developers make every effort to mitigate safety risks before deployment, there is a high possibility that users may discover a loophole that allows them to misuse an AI tool or platform. To counter this, it is imperative that organisations learn from real-world use and constantly upgrade their systems with additional safeguards. In the same vein, it is advisable to launch with a select group of users and then gradually release the AI tool to a broader audience. Every instance of misuse should be analysed, and companies should strive to build mitigations that prevent a recurrence in the future. Learning from real-world use can thus help develop a robust internal policy framework that lets users explore the benefits of AI technology without falling prey to the risky behaviour that is prevalent across the internet.
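One way to operationalise such a gradual release is a deterministic staged rollout, in which each phase admits a larger share of users. The phase names, percentages and hashing scheme below are illustrative assumptions, a minimal sketch rather than a prescribed method.

```python
import hashlib

def rollout_fraction(phase: str) -> float:
    """Hypothetical rollout phases mapped to the share of users granted access."""
    return {"trusted_testers": 0.01, "limited_beta": 0.10, "general": 1.00}[phase]

def has_access(user_id: str, phase: str) -> bool:
    """Deterministically bucket a user into [0, 1) and compare with the phase's share."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000 / 10_000
    return bucket < rollout_fraction(phase)

# Hypothetical usage: the same user keeps (or gains) access as the rollout widens.
for phase in ("trusted_testers", "limited_beta", "general"):
    print(phase, has_access("user-42", phase))
```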
Greater private sector collaboration to promote safe and ethical AI
When designing AI systems, it is essential to consider different cultural norms as well as existing legislation in order to safeguard users from potential harm. At the same time, any regulation or internal safeguard should maintain the utmost transparency, respect human rights and promote sustainability. To balance these divergent requirements, some players have adopted a risk-based approach, wherein compliance obligations are commensurate with the level of risk. Others have adopted sector-specific rules while keeping their underlying safety measures sector-agnostic. Recognising that each approach addresses different risks and challenges, it is important that policymakers collaborate more extensively with private sector participants to achieve the central vision of safe AI deployment.
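A risk-based approach can be made concrete with a simple tiering scheme in which compliance obligations scale with assessed risk. The tiers and obligations below are hypothetical illustrations, not a restatement of any specific law or internal policy.

```python
from enum import Enum

class Risk(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Hypothetical mapping of risk tiers to compliance obligations.
OBLIGATIONS = {
    Risk.MINIMAL: ["voluntary code of conduct"],
    Risk.LIMITED: ["transparency notice to users"],
    Risk.HIGH: ["pre-deployment risk assessment", "human oversight", "audit logging"],
    Risk.UNACCEPTABLE: ["do not deploy"],
}

def obligations_for(use_case_risk: Risk) -> list[str]:
    """Compliance obligations grow with the assessed risk of the use case."""
    return OBLIGATIONS[use_case_risk]

print(obligations_for(Risk.HIGH))
```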
First global agreement on the need for AI regulation
With more than 100 attendees, ranging from the world's leading tech companies to key government officials, convening for the world's first AI Safety Summit at Bletchley Park on 1-2 November 2023, there is finally an official consensus on the need for AI regulation. The summit resulted in the signing of the Bletchley Declaration, the world's first global agreement on AI regulation, a day after the US announced an executive order setting out how it plans to regulate AI. Similarly, the G7 group of countries underscored the importance of regulating AI through a joint statement, hinting that a global collaborative effort is in the offing. While it remains to be seen how soon a common regulatory framework can be adopted, it is amply clear that organisations ought to take the lead and implement the kind of safeguards already demonstrated by leading private players in the AI technology domain.
Responsible AI/Explainable AI
The swift pace of generative AI adoption underscores the urgent need for each organisation to have a robust regime in place for responsible AI compliance. This framework should incorporate mechanisms at the design stage for evaluating potential risks associated with the use of generative AI, and should also facilitate the integration of responsible AI practices throughout the entire organisation.
The principles guiding an organisation's approach to responsible AI should be delineated and championed from the highest levels of leadership, then translated into an efficient governance structure that handles risk management and ensures compliance with internal principles and policies as well as applicable laws and regulations.
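As a minimal sketch of what a design-stage evaluation mechanism might look like, the hypothetical risk-review record below requires every identified risk to have a mitigation and a governance sign-off before a generative AI use case proceeds; all field names and criteria are illustrative assumptions, not a prescribed governance standard.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical design-stage risk review record for a generative AI use case.
@dataclass
class GenAIRiskReview:
    use_case: str
    risks_identified: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    approved_by: Optional[str] = None  # sign-off from the governance function

    def ready_for_build(self) -> bool:
        """Proceed only if every identified risk has a mitigation and sign-off exists."""
        return (
            len(self.mitigations) >= len(self.risks_identified)
            and self.approved_by is not None
        )

review = GenAIRiskReview(
    use_case="customer-support summarisation",
    risks_identified=["PII leakage", "hallucinated commitments"],
    mitigations=["PII redaction before prompting", "human review of outbound replies"],
    approved_by="Responsible AI Committee",
)
print(review.ready_for_build())  # True
```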

The author is global head of technology, eClerx



This article was first uploaded on January 7, 2024, at 6:00 pm.