Gartner predicts that guardian agents, AI technologies designed to ensure secure and trustworthy interactions with AI, will account for 10% to 15% of the agentic AI market by 2030, supporting task assistance, monitoring, and autonomous decision-making.
Guardian agents are AI-based technologies developed to support and secure trustworthy interactions with AI systems. They function both as assistants, helping with content review, monitoring, and analysis, and as increasingly semi-autonomous or fully autonomous agents capable of formulating and executing action plans. They can also redirect or block actions that stray from predefined goals.
As the use of agentic AI grows, Gartner highlights the urgent need for strong safeguards. In a poll of 147 CIOs and IT function leaders conducted during a Gartner webinar on May 19, 2025, 24% of respondents said they had already deployed a few AI agents, while another 4% had deployed more than a dozen.
At the same time, 50% of respondents said they are still researching and experimenting with the technology. A further 17% said they have not yet worked with AI agents but plan to implement them by the end of 2026. Gartner noted that this expanding adoption of AI agents brings with it an urgent demand for automated trust, risk, and security controls, accelerating the need for guardian agents.
“Agentic AI will lead to unwanted outcomes if it is not controlled with the right guardrails,” said Avivah Litan, VP Distinguished Analyst at Gartner. “Guardian agents leverage a broad spectrum of agentic AI capabilities and AI-based, deterministic evaluations to oversee and manage the full range of agent capabilities, balancing runtime decision making with risk management.”
The risks posed by these agents increase as their abilities and reach grow. According to the same poll, 52% of 125 respondents said their AI agents are—or will be—primarily used in internal administrative functions like IT, HR, and accounting. Another 23% are deploying them for external customer-facing roles.
With the rapid expansion of AI agent use cases, the range of threats has grown accordingly. Gartner highlighted several examples of emerging threats: credential hijacking and data theft resulting from unauthorised control; agents interacting with fake or malicious websites, leading to compromised actions; and internal flaws or external triggers causing agents to behave unpredictably, which can result in operational disruptions and reputational harm.
“The rapid acceleration and increasing agency of AI agents necessitates a shift beyond traditional human oversight,” said Litan. “As enterprises move towards complex multi-agent systems that communicate at breakneck speed, humans cannot keep up with the potential for errors and malicious activities. This escalating threat landscape underscores the urgent need for guardian agents, which provide automated oversight, control, and security for AI applications and agents.”
To protect AI interactions, Gartner recommends that CIOs and AI and security leaders focus on three main functions of guardian agents: reviewers, which assess AI-generated content for accuracy and acceptable use; monitors, which track actions taken by AI and agents for further review; and protectors, which use automated tools to block or modify AI actions during operations. A simplified sketch of how these three roles might interact follows.
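To make the three roles concrete, here is a minimal, purely illustrative Python sketch of how a guardian layer might mediate an agent's proposed action. Everything in it, including the GuardianPipeline and AgentAction names and the keyword-based review rule, is a hypothetical simplification and not drawn from Gartner's research.

```python
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"


@dataclass
class AgentAction:
    """A proposed action from a worker agent (hypothetical shape)."""
    agent_id: str
    description: str
    content: str


@dataclass
class GuardianPipeline:
    """Toy guardian layer combining the three roles Gartner describes.

    Reviewer:  assesses generated content before it is released.
    Monitor:   records every action and verdict for later review.
    Protector: enforces the verdict at runtime, blocking if needed.
    The blocked-terms rule is an illustrative placeholder, not a real policy.
    """
    audit_log: list = field(default_factory=list)
    blocked_terms: tuple = ("credentials", "wire transfer")

    def review(self, action: AgentAction) -> Verdict:
        # Reviewer role: flag content containing disallowed terms.
        if any(term in action.content.lower() for term in self.blocked_terms):
            return Verdict.BLOCK
        return Verdict.ALLOW

    def monitor(self, action: AgentAction, verdict: Verdict) -> None:
        # Monitor role: keep an auditable trail for further review.
        self.audit_log.append((action.agent_id, action.description, verdict))

    def protect(self, action: AgentAction):
        # Protector role: apply the review verdict before execution.
        verdict = self.review(action)
        self.monitor(action, verdict)
        return None if verdict is Verdict.BLOCK else action


if __name__ == "__main__":
    guardian = GuardianPipeline()
    proposed = AgentAction("agent-42", "draft customer email",
                           "Please confirm your credentials at this link...")
    if guardian.protect(proposed) is None:
        print("Action blocked; audit trail:", guardian.audit_log)
```

In practice, each role would more likely be a separate agent or service with its own models and policies rather than three methods on one class; the sketch only shows how review, monitoring, and protection chain together around a proposed action.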
