In a development highlighting low-cost AI in the education sector, a 17-year-old student in Uttar Pradesh's Bulandshahr has built an Artificial Intelligence (AI) teacher robot. Aditya Kumar of Shiv Charan Inter College has named the robot Sophie and equipped it with an LLM chipset.
The robot was also taken to the school, where it introduced itself. "I am an AI teacher robot. My name is Sophie, and I was invented by Aditya," the robot is heard saying in Hindi.
“I teach at Shivcharan Inter-college, Bulandshahr… Yes, I can properly teach students…” it said.
Robot answers questions
In the video, Aditya asks the AI robot several questions, such as the names of India's first President and first Prime Minister. The robot, Sophie, answers correctly: Dr Rajendra Prasad and Pandit Jawaharlal Nehru respectively.
#WATCH | Bulandshahr, UP | A 17-year-old student from Shiv Charan Inter College, Aditya Kumar, has built an AI teacher robot named Sophie, equipped with an LLM chipset.

The robot says, "I am an AI teacher robot. My name is Sophie, and I was invented by Aditya. I teach at… pic.twitter.com/ArJYSsf39F

— ANI (@ANI) November 29, 2025
When asked if it can teach properly, the robot replies, "Yes, I can teach properly." It also works out a sum and defines electricity, demonstrating a grasp of basic arithmetic and science.
What is an LLM chipset?
Talking to ANI, Aditya said he used an LLM chipset to build the robot, noting that such chipsets are also used by companies that make robots. He explained that for now the robot can only speak, and that he plans modifications so it can write as well.
“I have used an LLM chipset to build this robot, which is also used by big companies that make robots. It can clear students’ doubts… For now, she can only speak. But we are designing it so it can write as well soon… There should be a lab in every district so students can come there and do research,” he added.
LLM stands for Large Language Model, the class of AI model that powers modern chatbots. An "LLM chipset" refers to the hardware that runs such models: GPUs, NPUs, and custom AI accelerators optimized for the intensive computation they demand. This hardware is crucial both for training the models, which is computationally expensive, and for running them efficiently afterwards, a phase known as inference.
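The training-versus-inference distinction can be illustrated at a vastly simplified scale. The toy sketch below is not an LLM — it is a bigram next-word predictor built only from the Python standard library — but it shows the same two phases: a training step that learns statistics from text, and an inference step that uses what was learned to predict the next word. The corpus sentences are invented for the example.

```python
# Toy illustration (NOT a real LLM): a bigram next-word model showing
# the two phases mentioned above -- training (learning word-pair counts
# from text) and inference (predicting the next word from those counts).
from collections import Counter, defaultdict

def train(corpus):
    """Training phase: count how often each word follows another."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict(model, word):
    """Inference phase: return the most likely next word, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = [
    "the robot can teach students",
    "the robot can speak",
    "the robot can teach maths",
]
model = train(corpus)
print(predict(model, "robot"))  # -> can
print(predict(model, "can"))    # -> teach
```

A real LLM replaces the word-pair counts with billions of learned neural-network parameters, which is why specialized accelerator hardware becomes necessary at that scale.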
