By Siddharth Pai, Technology consultant and venture capitalist
In the span of a few years, what was once considered a novel productivity aid for programmers has evolved into something far more consequential—a generative artificial intelligence (GenAI) system capable of producing real, production-grade code. The leaders of some of the world’s largest software companies have publicly acknowledged this transformation. Microsoft’s CEO has noted that 20% to 30% of the code in certain parts of the company’s repositories is now generated by AI tools. Google’s leadership has shared similar estimates, placing AI-generated code at over a quarter of its new codebase. Meta has gone even further, suggesting that within a relatively short time, generative systems could be responsible for half of its coding output, particularly in areas connected to its language model development. Industry observers and CTOs now forecast that by the end of this decade, a significant majority of the code written inside large engineering organisations may originate from, or be heavily shaped by, AI systems.
These figures are not just technical trivia. They represent a tectonic shift in how software is written and by whom. In place of painstakingly crafted logic by human developers, much of today’s code is now co-authored by AI. But this shift is not a simple story of automation. It introduces an entirely new layer of engineering complexity—one that demands oversight, correction, and contextual intelligence. As GenAI takes over more of the initial coding effort, a new class of coders is emerging—engineers whose primary job is not to write code from scratch, but to vet, validate, and refine what the AI has produced.
To understand why this role is necessary, one must appreciate the difference between producing code and ensuring its correctness. A defining principle of reliable software is determinism—the property that a program, when given the same inputs under the same conditions, will always produce the same outputs. Deterministic behaviour is foundational to trust in computing. It enables engineers to debug systematically, allows systems to be tested exhaustively, and supports compliance regimes in regulated industries. Determinism ensures that software systems are predictable and transparent, qualities that are critical when the software is responsible for financial transactions, healthcare data, or mission-critical infrastructure.
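Determinism is easy to see in miniature. The sketch below uses a hypothetical `settle_transaction` function (an illustration, not drawn from any real system): because it is deterministic, a thousand identical calls must agree, which is precisely what makes exhaustive testing and systematic debugging possible.

```python
# Minimal illustration of determinism: the same inputs always produce
# the same output. `settle_transaction` is a hypothetical example.

def settle_transaction(balance_cents: int, amount_cents: int) -> int:
    """Deterministically apply a debit, rejecting invalid requests."""
    if amount_cents < 0:
        raise ValueError("amount must be non-negative")
    if amount_cents > balance_cents:
        raise ValueError("insufficient funds")
    return balance_cents - amount_cents

# Repeated calls with identical inputs are guaranteed to agree,
# so the set of observed results collapses to a single value.
results = {settle_transaction(10_000, 2_500) for _ in range(1_000)}
assert results == {7_500}
```

It is this repeatability that regulators and auditors implicitly rely on: a failure seen once can be reproduced on demand.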
GenAI, however, operates on a different logic. It is probabilistic, not deterministic. These models generate code by learning from vast libraries of text and code, identifying statistical patterns, and predicting the likeliest next pieces of code given the prompt. The model does not reason its way to a program through deterministic logic, as a human programmer would; it completes patterns. The result is code that often works but carries no guarantees. Even small changes in prompts or random seeds can yield different code, and the models themselves have no internal concept of correctness. They cannot reason logically or understand program intent the way a human does. What emerges is a body of code that can look polished and syntactically valid, yet may contain semantic errors, edge-case vulnerabilities, performance bottlenecks, or security flaws. Worse, these flaws may not surface until the code is deployed in the real world, operating under unforeseen conditions.
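The contrast with the deterministic case can be sketched in a few lines. This is a toy, not a real model: a hypothetical sampler draws the "next token" from a fixed probability distribution, the way a language model samples completions. A given seed is reproducible, but across seeds the same prompt yields different code.

```python
import random

# Toy sketch of probabilistic generation (not a real model): a sampler
# draws the next completion from a learned-looking distribution, so the
# same prompt can produce different code on different runs.
NEXT_TOKEN_PROBS = {
    "return x + y": 0.6,
    "return x - y": 0.3,
    "return x * y": 0.1,
}

def sample_completion(seed: int) -> str:
    """Draw one completion for an unchanged prompt, given a seed."""
    rng = random.Random(seed)
    tokens, weights = zip(*NEXT_TOKEN_PROBS.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# The same seed reproduces the same draw...
assert sample_completion(0) == sample_completion(0)
# ...but across many seeds the outputs vary, prompt unchanged.
completions = {sample_completion(s) for s in range(50)}
```

Nothing in the sampler knows which of the three completions is *correct*; it only knows which is *likely*. That gap between likelihood and correctness is exactly what the new verification role exists to close.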
This is where the new coder enters the scene. Instead of writing every function by hand, this engineer works to ensure that what the AI generates actually does what it is supposed to do—and nothing else. This involves more than testing individual functions. It requires writing comprehensive test suites, performing code reviews with an eye for logical consistency, and integrating formal methods where needed. These methods might include model checking, symbolic execution, and even mathematical proofs, especially for systems where correctness must be guaranteed under all conditions. In many cases, this work will be more intellectually demanding than writing the original code, because it involves understanding both the output and the limitations of the generative model that produced it.
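One common pattern for this kind of vetting is an oracle test: compare the AI-generated function against a trusted reference implementation across many randomised inputs. The sketch below is illustrative; `ai_generated_median` stands in for model output, and Python's `statistics.median` plays the trusted oracle.

```python
import random
from statistics import median  # trusted reference implementation

# Sketch of the verifier's job: the hypothetical `ai_generated_median`
# came from a model; the engineer writes the oracle check, not the code.

def ai_generated_median(values):
    """Plausible-looking model output under review."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def check_against_oracle(trials: int = 500) -> None:
    """Fuzz the generated code against the oracle over random inputs."""
    rng = random.Random(42)
    for _ in range(trials):
        data = [rng.randint(-100, 100) for _ in range(rng.randint(1, 30))]
        # A mismatch raises AssertionError with the offending input.
        assert ai_generated_median(data) == median(data), data

check_against_oracle()
```

Oracle tests catch semantic divergence that a syntax check never would; for systems where correctness must hold under all conditions, they are supplemented by the formal methods mentioned above.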
Auditability is another major concern. In many industries, particularly those involving safety, regulation, or public accountability, code must be traceable. When a human writes code, the intent behind each decision can be captured in design documents, commit logs, or in-line comments. When code is generated by an AI system, that trail becomes harder to reconstruct. To maintain auditability, engineers will need to record the full context of code generation—what model was used, with what prompt, under what configuration, and with what subsequent modifications. Without this kind of provenance, debugging and liability attribution become nearly impossible, and regulatory compliance may be jeopardised.
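What such a provenance trail might contain can be sketched as a simple structured record. The field names below are illustrative, not a standard schema: the point is that every AI-produced change ships with the model, prompt, configuration, and resulting code pinned down well enough to reconstruct later.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hedged sketch of a generation provenance record. Field names are
# hypothetical; prompts and code are stored by hash so the record can
# be kept in logs without duplicating sensitive content verbatim.

def provenance_record(model: str, prompt: str, config: dict, code: str) -> dict:
    return {
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "generation_config": config,
        "code_sha256": hashlib.sha256(code.encode()).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(
    model="example-model-v1",            # hypothetical model name
    prompt="Write a function that ...",  # logged by hash, not verbatim
    config={"temperature": 0.2, "seed": 7},
    code="def f(x):\n    return x + 1\n",
)
print(json.dumps(record, indent=2))
```

With records like this attached to each commit, debugging and liability attribution become tractable: one can ask which model, under which configuration, produced the line that failed.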
The growing reliance on GenAI in coding also raises strategic implications for software engineering organisations. For firms that have historically built their business models on labour-intensive coding work, the automation of that labour poses an existential question. Indian IT services firms, in particular, are at a crossroads. The category of low-level, repetitive coding work—which once formed a significant part of their export revenue—is rapidly being absorbed by AI. Competing on volume or cost efficiency in this domain will become increasingly unsustainable.
But this does not mean obsolescence. On the contrary, Indian IT services firms have a timely opportunity to reposition themselves as specialists in AI code governance, reliability engineering, and software validation. By investing in capabilities around deterministic verification, prompt engineering, model auditing, and compliance tooling, they can deliver higher-value services that sit atop the AI-generated code layer. These firms can also build integrated platforms that combine GenAI with verification tools, enabling clients not just to generate code quickly, but to do so in a way that is provably correct and auditable.
The companies that succeed in this new environment will be those that recognise that code, by itself, is no longer the product. Trustworthy code—understandable, verifiable, repeatable—is the new deliverable. Indian engineering talent, with its scale, systems expertise, and ability to adapt quickly to global shifts, is well-positioned to play a leadership role in this transition. But it will require a shift in mindset—from code as craft to code as consequence, where the value lies not in how quickly software is written, but in how rigorously it is made to work.

