By Samir Kumar Mishra
The world is in the middle of an unprecedented era of AI innovation, and organisations today must be prepared to defend against the growing safety and security risks the technology can introduce. This demands a fundamental reimagining of AI safety and security: enterprises need a common layer of protection that covers every user and every application across the business.
We are already in a multi-model, multi-cloud world, and companies will soon begin deploying AI agents and applications at scale. Yet many are still grappling with how to deploy AI in a way that is both effective and responsible. Without proper safeguards, AI models are susceptible to attacks such as prompt injection and data poisoning, and can produce undesired or harmful outputs.
Organisations must adopt a security-first mindset, ensuring that AI systems are built on a foundation of trust and resilience. By embedding security from the outset, fostering collaboration, and adhering to ethical frameworks, enterprises can develop AI systems that are not only fast and intelligent but also secure and trustworthy. This means rigorous validation to discover vulnerabilities, continuous testing to catch weaknesses as systems evolve, and guardrails that protect the company and its customers.
As AI adoption accelerates, ensuring its security cannot be the responsibility of a single organisation or sector. The complexity of AI-driven systems demands a collaborative approach in which enterprises, researchers, and policymakers work together to establish standardised security frameworks and share critical knowledge. Companies can do this through:
Clear governance models: Organisations must implement and operationalise well-defined AI governance frameworks that prioritise safety, transparency, fairness, and accountability. This typically includes compliance with leading AI security standards and regulations. Integrating these principles from the initial design phase ensures responsible deployment across all systems.
Continuous evaluation: GenAI models are non-deterministic, meaning the same prompt can produce different outputs, so they should be continuously evaluated for susceptibility to safety and security risks. Regular testing and independent assessments help organisations detect vulnerabilities, validate security measures, and maintain compliance with evolving regulations.
Enhanced collaboration: Knowledge sharing remains essential to combating emerging threats. Enterprises should engage in industry forums, research partnerships, and cross-sector initiatives to exchange insights on security practices.
Workforce training and awareness: Organisations should invest in ongoing training programmes that equip developers, engineers, and leaders with the skills to identify bias and implement security best practices.
As AI continues to evolve, so do the risks associated with it. A proactive approach to AI defence that embeds security at every stage, fosters collaboration, and upholds ethical standards is essential to ensuring safe and responsible AI growth.
The writer is director, Security Business, Cisco India & Saarc.
Disclaimer: Views expressed are personal and do not reflect the official position or policy of FinancialExpress.com.