On a winter afternoon in North Campus, a senior professor at Delhi University (DU) flips through a batch of undergraduate essays by Hindi literature students on her desk. The writing is fluent, the arguments neatly arranged. Nothing is obviously amiss. And yet, she pauses. “Five years ago, my worry was plagiarism,” she says. “Now my worry is intellectual absence.”
The scale at which these changes are unfolding is vast. India’s school education system serves 24.8 crore students across 14.72 lakh schools, taught by 98 lakh teachers, as per government data. Higher education is no smaller. India now has 4.33 crore students enrolled, up from 3.42 crore in 2014-15, with the gross enrolment ratio for the 18-23 age group rising from 23.7% to 28.4%.
In absolute terms, this places India among the largest education systems in the world, one now absorbing artificial intelligence (AI) at scale. Under the National Education Policy 2020, AI is being positioned as a core component of future-ready learning, with an emphasis on innovation and digital literacy.
The Central Board of Secondary Education (CBSE) has already introduced AI as a subject in affiliated schools, first at Class 9 in the 2019-20 academic year, and later extended it to Class 11. At the just-concluded India AI Impact Summit 2026, Google CEO Sundar Pichai announced the launch of the Google AI Professional Certificate Program.
As part of its broader skilling push, Google will also support Atal Tinkering Labs, under the Atal Innovation Mission, to bring generative AI assistance to over 10,000 schools and 11 million students in India. The initiative aims to integrate robotics and coding into local curricula, embed Gemini into teacher workflows, and develop a “safely guardrailed AI assistant”.
Across Indian classrooms, from schools and colleges to law universities, medical campuses and counselling rooms, AI has quietly entered education. “I asked my students to prepare an assignment on a novel,” the DU professor says.
“Some read the suggested material, presented what they understood, and gave examples as required by the syllabus. But quite a few submitted vague outlines of the topic, exactly the kind of generic response AI tends to generate.” Others, she adds, were worse.
“Some wrote about things that aren’t even in the syllabus, entirely different stories altogether, which ChatGPT must have generated by confusing one story with another,” she says. “This is literature. One has to demonstrate understanding. At least you have to read the text before submitting.”
Polished, yet impersonal
Faculty members across disciplines describe a similar unease. Student submissions are often more polished than before. Arguments move smoothly from introduction to conclusion. When questioned orally, some students struggle to explain positions they submitted confidently on paper.
This is particularly visible in legal education, where reasoning is inseparable from facts. Utsav Gireesh, an advocate who recently judged a mock induction moot court competition at a law college in New Delhi, recalls a marked difference from earlier editions.
“This was the second edition of the moot court in the same college, and the third induction moot court competition that I was judging,” he says. “What stood out was a clear lack of engagement with the facts of the case.”
Participants, mostly first-year students, arrived armed with doctrines and principles that had little applicability to the dispute at hand.
“Some had prepared scripts with broad, generalised statements, the kind ChatGPT puts out,” Gireesh says. “Cross-questioning on just the facts showed an evident lack of application of mind.” A single pointed question, he recalls, was often enough to expose the fragility beneath the polished submissions.
Medical education, too, is adjusting, more cautiously, but no less fundamentally. Diagnostic tools, symptom checkers and AI-assisted imaging systems are now familiar to students long before they step into hospitals.
Young trainees consult AI readily, often alongside textbooks and lecture notes. “They are not being careless,” says Dr Amrith Venu, a faculty member at a government medical college in Lucknow. “They are being efficient.”
But efficiency, he cautions, has limits. “When AI becomes the first voice rather than a second opinion, we worry about erosion of confidence. A doctor who learns to trust a system more than their own judgement,” he adds, “is not a safe outcome.”
Some medical colleges have begun explicitly teaching what faculty describe as “AI scepticism”. “When students present AI-supported diagnoses, we ask them: where might the system fail? What can it not see? Why does your judgement still matter?” he says.
Meanwhile, from 2023 onwards, several nations have taken steps to limit AI use in classrooms. Sweden has led the shift by increasing funding for printed textbooks and prioritising handwriting and physical reading after linking heavy digitisation to declining comprehension levels.
Finland, while not restricting AI outright, has issued structured guidelines ensuring AI tools do not replace student thinking or assessment integrity. China tightly regulates AI-driven education platforms, aligning them with state curriculum standards. Italy, too, has imposed data protection safeguards and ethical-use norms following early concerns around generative AI tools.
In the US, regulation varies by district, with schools either restricting AI use in assignments or incorporating AI literacy programmes to ensure responsible use. These approaches reflect a global trend toward regulating AI in education.
Adoption & abdication
On campuses, responses to AI have been pragmatic rather than prohibitive. Toolika Wadhwa, professor in the Department of Education at Shyama Prasad Mukherji College, DU, says resistance is no longer realistic. “My students have been using AI, and we’ve found ways to work around it,” she says. “I encourage ethical use.”
Workshops focus on prompts, cross-checking and disclosure. “Personally, it helps me with routine administrative work,” she says. “It doesn’t always give authentic results, but it’s easier, I just cross-check.” In class, she draws boundaries. “If you’ve used AI to write this assignment, I’ll also use AI to mark it and you have to accept that score,” she tells students.
Her concern is not usage but passivity. “We get assignments that are plagiarised, or have references that aren’t explained,” she says. “AI has changed the nature of studying. It doesn’t challenge them.”
To counter this, she asks students to use AI in class, under supervision. “I tell them to use their phones right now, use prompts in a way you learn something, instead of copying blindly,” she says.
She often illustrates the hidden costs of casual AI use in unexpected ways. “During the Ghibli anime phase, I asked how many students generated images,” she says. “Then I told them how much water and electricity was consumed for ten seconds of fun, and how that affects climate.”
But at the school level, caution runs deeper. Sukanya Mukherjee, principal of Tulip English School (ICSE Board) in Malda, West Bengal, says teachers attend AI training sessions, but questions whether any of it has time to sink in.
“When we learn one thing, by the time we adopt it, the next thing comes,” she says. “It’s a cycle.” For younger children, she remains unconvinced AI belongs in classrooms. “Their brains are still developing,” she says.
“They lack patience. They are given phones from when they are three years old.” Parents sometimes use AI to help with homework. “But personally, I don’t believe it should be used. Children need to struggle,” she adds. “That’s how cognition grows.”
At the other end of academia, a doctoral researcher in history at Visva-Bharati University, who requested anonymity, offers an unfiltered account of how AI is actually being used inside universities.
“I do think you can use AI, but that obviously comes with a little bit of a caveat… I won’t deny the fact that even I used AI for my thesis but not for the writing of my paper. The trickiest part, which apparently no one is understanding, is that you have to cross-check it. People mistake AI for Google. On a very personal level, it’s not. I asked for ten books that the AI referenced. Only two existed,” he adds.
On undergraduate assessments, his verdict is blunt. “What students do is copy and paste the question into AI and ask it to write a 1,000- or 2,000-word essay.” The broader question is not whether AI should be used. “We are using it regardless.”
The danger lies in timing. “Ultimately, it will kill whatever critical thinking we develop between the ages of 18, 19 or 20.”
AI has a place, he concedes. “If you want to use it to understand a tough, jargon-heavy book, it really helps.” But as a substitute for thinking: “If you’re using it to write your papers for you,” he says, “then, for lack of a better word, it’s quite stupid.”
Cognitive downfall
If teachers are noticing surface-level changes, psychologists are concerned about deeper cognitive shifts. The speed of adoption helps explain why these concerns are emerging. When OpenAI released ChatGPT in November 2022, the tool reached one million users within five days and 100 million monthly users within two months.
By November 2025, the number had risen to an estimated 810 million users globally, many of them students using large language models to brainstorm, tutor themselves, generate assignments, and increasingly, to outsource thinking. The Brookings Institution’s Center for Universal Education in Washington DC undertook a year-long global “premortem” on generative AI in education, asking what risks the technology might pose to how children and young people learn.
At the top of its list was the potential impact on cognitive development. The report describes a feedback loop of dependence, in which students offload more of their thinking onto AI systems, gradually weakening the mental effort required for learning.
Cognitive offloading is not new (calculators automated arithmetic; computers reduced handwriting), but Brookings argues that AI has fast-tracked the process, particularly in education systems where learning has become transactional.
For Mehezabin Dordi, a clinical psychologist at Sir HN Reliance Foundation Hospital, the most significant impact of AI on education is not technological but cognitive. “Deep learning is inherently effortful,” she says. “It involves struggling with the problem, developing mental images, validating tentative ideas, and discarding what doesn’t work.”
Research in learning science, she explains, consistently shows that the method of arriving at an answer matters far more than the answer itself. “When AI is used as a shortcut, it bypasses the mental processes, such as reasoning, abstraction and validation, that give learning depth and durability,” she says.
She draws a parallel with the early introduction of calculators. “Calculators improved test scores, but when used too early, they weakened conceptual understanding,” she says.
“The tool itself wasn’t harmful. The harm came when it replaced thinking instead of supporting it.” AI, she warns, poses a similar risk, “amplified by its conversational fluency and the confidence with which it presents information”.
Memory is another quiet casualty. “Decades of research show that active recall and retrieval practice are central to durable learning,” Dordi says.
“Every time students outsource remembering to an external tool, they practise these processes less.” Over time, this undertraining can weaken long-term retention, reduce fluency with concepts, and impair the ability to build complex knowledge structures.
AI’s quick fixes reduce exposure to this productive discomfort. “When clarity is always immediate, tolerance for ambiguity and frustration reduces,” she says. “Clinically, this looks like low distress tolerance.”
“In students, this may show up as anxiety when AI isn’t available, or difficulty persisting through complex problems without instant feedback,” she says. The risks are most acute during childhood and adolescence. “These are critical years for developing sustained attention, working memory, reasoning and metacognitive skills,” she says.
According to experts, the prefrontal cortex, the brain region responsible for critical thinking, emotion regulation, and complex problem-solving, doesn’t fully develop until around age 25. This means middle schoolers and teenagers are particularly vulnerable to forming dependencies on external validation systems.
Solving problems 24/7
When it comes to edtech platforms, AI tools are becoming central. Platforms like Byju’s and Vedantu use AI chatbots to answer student queries 24/7, and are using AI tools to support both students and educators with quickly generated quizzes, instant feedback, and content whose difficulty adjusts to the learner.
AI can also help track student behaviour, learning speed and performance. Yet it cannot replace human encouragement or mentorship, and there is a risk to student data privacy if platforms do not follow strict security rules.
Edtech firms argue that the answer lies not in rejection but in redesign. Abhimanyu Saxena, co-founder of Scaler, an edtech platform, and InterviewBit, which helps software engineers prepare for interviews and practise coding, says AI has redefined what it means to learn on a platform.
“AI has changed both what we teach and how we teach at Scaler, far beyond speeding up content creation… through tools like Scaler Companion, learners receive 24×7, context-aware support, whether it’s guidance at the exact point they’re stuck, targeted quizzes, or ongoing mentoring on how to improve day by day,” he said.
“At the same time, we have strengthened trust and rigour by introducing proctoring and outcome-driven, hands-on evaluations. Performance is measured not by how well someone clears a test, but by how effectively they apply concepts in real-world scenarios,” he added.
At Scaler, assessments have shifted away from static submissions towards live evaluations: oral defences, whiteboard problem-solving, system design interviews. “Critical thinking has become the new coding. Learners must evaluate AI outputs, identify failure modes, and understand system implications,” he says.
According to Saxena, AI amplifies understanding but it cannot replace it. “We treat coding literacy like mathematical literacy: you may use calculators, but you must understand the underlying principles to use them effectively. Students learn fundamentals first, then gradually integrate AI tools. They must demonstrate they can solve problems manually before automating solutions. This ensures they can audit AI outputs, identify errors, and maintain systems responsibly,” he adds.
When it comes to adoption, AI has the potential to democratise learning.
“We are seeing AI significantly reduce information asymmetry by giving learners from non-elite backgrounds access to high-quality explanations, practice, and feedback that were once limited to top institutions. That said, AI also amplifies existing capabilities. Learners with strong foundational thinking and learning discipline tend to extract more value from these tools, while those without that grounding can struggle or become overly reliant on them,” says Saxena.
“The real equaliser,” he adds, “is not AI alone, but the learning framework around it.”
What emerges from classrooms is not a single verdict on AI, but a pattern. As the DU professor packs the assignment bundle and prepares for her next class, she reflects on what remains constant. “Education was never about only answers,” she says. “It was about learning how to live with questions, keep a thought lingering in your brain.” In the age of artificial intelligence, that may be the most human skill education has left to protect.
