By Anil Nair

For many who track technology directions, Gartner’s top strategic technology trends are a signpost. Not surprisingly, the very first trend it lists for 2025 is agentic artificial intelligence (AI). In Gartner’s words, “Agentic AI has the potential to perform as a highly competent teammate by providing insights from derivative events that are often not visible to human teammates.”

To elaborate: with the ubiquitous use of generative AI, the first level was single-turn querying, in which natural language processing (NLP) produced a reply to each prompt. Now, we are seeing the shift to the next level: solving more complex problems autonomously. This is agentic AI, and it involves deep, iterative reasoning across multiple steps.

For instance, a major Indian bank has announced its intent to use agentic AI for customer service automation. Going beyond straightforward question answering, the system could check outstanding balances, recommend which accounts to pay off, and then complete transactions based on the client’s response. Or consider autonomous fraud detection: an unusual login or atypical behaviour triggers an instant customer alert, or quick remediation such as freezing the account. This could involve behavioural biometrics and predictive forecasts that leverage financial crime databases.

Agentic AI in logistics could involve tracking inventory, predicting stock levels, and automating replenishment, precluding over- or under-stocking. At full scale, it could include optimised routing, identification of potential disruptions, proactive solutions, smart warehousing, and instant customer updates.
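To make the replenishment logic concrete, here is a toy Python sketch, not drawn from any real system, that forecasts demand with a simple moving average and reorders only when stock on hand would not cover lead-time demand plus a safety buffer. The function names and figures are illustrative assumptions.

```python
# Toy replenishment check: forecast demand with a moving average and
# reorder when stock on hand would not cover lead-time demand plus a
# safety buffer. Names, numbers, and rules are illustrative only.

def forecast_daily_demand(recent_sales: list[float]) -> float:
    return sum(recent_sales) / len(recent_sales)   # simple moving average

def reorder_quantity(on_hand: float, recent_sales: list[float],
                     lead_time_days: int, safety_stock: float) -> float:
    demand = forecast_daily_demand(recent_sales) * lead_time_days
    reorder_point = demand + safety_stock          # cover lead time plus buffer
    if on_hand > reorder_point:
        return 0.0                                 # no order: avoids over-stocking
    return reorder_point - on_hand                 # top up: avoids under-stocking

print(reorder_quantity(on_hand=120, recent_sales=[30, 35, 25, 40],
                       lead_time_days=5, safety_stock=50))
# forecast 32.5/day * 5 days + 50 buffer = 212.5, so order 92.5 units
```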

In healthcare, agentic AI could match patient needs and preferences with the availability of medical experts, schedule appointments smartly to minimise wait times, retrieve and analyse reports, monitor vitals like heart rate or blood sugar, alert healthcare providers in time to avert emergencies, and process claims. Doctors can use agentic AI to analyse vast amounts of medical and patient data, cull critical information, capture clinical notes, and create custom treatment plans, greatly enhancing efficiency.

The underlying process involves gathering relevant data from a variety of sources, including sensors, the internet, or databases. A large language model then coordinates specialised models to generate content or recommendations, followed by execution using external tools via APIs (application programming interfaces). Throughout, the system acts within predefined guardrails while learning continuously through a feedback loop.
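A minimal sketch can make that loop concrete. In the hypothetical Python below, a hard-coded “reasoner” stands in for the large language model and two stub functions stand in for external tool APIs; every name here is an illustrative assumption, not a real framework.

```python
# A self-contained sketch of the agentic loop described above: gather
# context, let a "reasoner" choose an action, execute it within
# guardrails, and feed the outcome back in. All names are placeholders.

ALLOWED_TOOLS = {"check_balance", "send_alert"}          # predefined guardrails

def check_balance(account: str) -> str:
    return f"balance for {account}: 1,250.00"            # stand-in for a bank API

def send_alert(account: str) -> str:
    return f"alert sent to owner of {account}"           # stand-in for a notifier

TOOLS = {"check_balance": check_balance, "send_alert": send_alert}

def reason(goal: str, history: list) -> dict:
    """Stand-in for the LLM: pick the next step from what has happened so far."""
    if not history:
        return {"tool": "check_balance", "arg": "acct-42", "final": False}
    return {"tool": "send_alert", "arg": "acct-42", "final": True}

def agentic_loop(goal: str, max_steps: int = 5) -> list:
    history = []                                         # the feedback loop
    for _ in range(max_steps):
        step = reason(goal, history)
        if step["tool"] not in ALLOWED_TOOLS:            # act only within guardrails
            history.append("blocked: " + step["tool"])
            continue
        result = TOOLS[step["tool"]](step["arg"])        # execute via a tool API
        history.append(result)
        if step["final"]:                                # agent decides goal is met
            break
    return history

print(agentic_loop("review acct-42 for unusual activity"))
```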

The foundational elements of agentic AI are agents that retrieve data from past tickets, agents that aggregate data from varied sources, workflow agents that execute across applications by calling the right APIs in the right sequence to ensure flawless fulfilment, and agents that assist users directly. The strength of the system lies in the orchestration of these diverse virtual agents, incorporating external ones seamlessly as necessary, enabling innovation and vastly superior outcomes.
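To illustrate the orchestration idea, the hypothetical sketch below chains three placeholder agents in a fixed sequence, each specialised in one task; a real orchestrator would route dynamically and pull in external agents as needed.

```python
# Hypothetical orchestration sketch: each "agent" is a callable with one
# specialisation, and the orchestrator passes each agent's output to the
# next. All names are illustrative placeholders, not a real product.

from typing import Callable

def ticket_agent(query: str) -> str:
    return f"past tickets matching '{query}'"          # retrieves historical cases

def aggregator_agent(data: str) -> str:
    return f"merged view of [{data}] with CRM and billing records"

def workflow_agent(plan: str) -> str:
    return f"executed refund workflow using: {plan}"   # calls APIs in sequence

def orchestrate(query: str, agents: list[Callable[[str], str]]) -> str:
    result = query
    for agent in agents:                               # fixed pipeline for clarity;
        result = agent(result)                         # real systems route dynamically
    return result

print(orchestrate("duplicate charge on invoice 881",
                  [ticket_agent, aggregator_agent, workflow_agent]))
```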

Not long ago, hyper-automation was trending. It combines technologies like robotic process automation and workflow automation, which are excellent for repetitive, compliance-driven, rule-based tasks but struggle with context-dependent, unstructured, evolving scenarios. That is exactly where agentic AI fits in: a transformative layer on top of hyper-automation that brings adaptive decisioning into play.

The demand for such solutions is accompanied by the need for professionals. Leading consulting firms estimate there are currently fewer than 100,000 agentic AI professionals, while the need is projected to be double that number by 2026. This includes agentic AI developers, AI framework architects, solution engineers, and system performance testers, not to mention new titles for emerging needs.

While AI autonomy creates immense opportunity for efficiency and better outcomes, risks around control and accountability cannot be ignored.

When you look at AI accountability, especially of the agentic kind, the user’s role obviously ends with the prompt. AI developers must embed safeguards and ethical principles, and provide for audit oversight. Deploying organisations, which may well be considered ultimately responsible, must set clear boundary conditions, monitor skilfully, and intervene intelligently and speedily.

Laws and regulations often cannot keep pace with technology and the subtle changes it brings into play. India has laws such as the Information Technology Act, 2000, and the Digital Personal Data Protection Act, 2023, which will eventually need to be complemented by AI-specific legislation that directly addresses bias, discrimination, privacy, misinformation, accountability, and liability. Only then will regulation drive growth and progress, and prevent the weaponisation of a potent technology in a borderless world. Evidently, as AI becomes smarter and more autonomous, humans must remain stewards of its power.

Things promise to get even more interesting if and when AI is recognised as a legal entity with its own rights and responsibilities.

The writer is founder, ThinkStreet.

Disclaimer: Views expressed are personal and do not reflect the official position or policy of FinancialExpress.com. Reproducing this content without permission is prohibited.