The next big thing
We all know about large language models (LLMs), which let computers generate and process vast amounts of text and respond to questions in natural language. But small language models (SLMs), which can produce similarly human-like language despite being trained on far smaller datasets, are now becoming all the rage. They are easier to train and deploy, more cost-effective and less demanding of computational power, which makes them well suited to specific tasks. Tech giants such as Microsoft, Google, Meta and Amazon are investing billions in general-purpose LLMs that can handle a wide variety of tasks, but such models cannot always be customised for specialised needs; for those, a smaller flavour of generative AI may be the better fit. Infosys chief technology officer Mohammed Rafee Tarafdar was recently quoted as saying that several small language models for India-specific needs have already been launched, and that with a growing developer base, many more GenAI applications will be built for Indian and global markets. Last year saw several launches of lightweight models, including Microsoft’s Phi family of SLMs, Google’s Gemma and a smaller variant of Meta’s Llama. Microsoft’s Sundar Srinivasan was also quoted as saying that while LLMs pushed the boundaries of accuracy across various AI tasks throughout 2024, SLMs drove mass adoption and the true democratisation of AI. Industry experts are certainly looking at SLMs as the next big thing in AI.
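To give a concrete, purely illustrative sense of why SLMs are seen as cheap to run, the short sketch below loads one of the lightweight models named above, Microsoft’s Phi-3-mini, through the open-source Hugging Face transformers library and generates a reply on ordinary hardware. The specific model ID, prompt and settings are assumptions chosen for illustration, not details from the column.

# A minimal, illustrative sketch of running a small language model locally.
# Assumptions: the Hugging Face `transformers` and `torch` packages are installed,
# and "microsoft/Phi-3-mini-4k-instruct" (about 3.8 billion parameters) serves as
# one example of an SLM; recent transformers versions support it natively, while
# older ones may need trust_remote_code=True.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# bfloat16 halves memory use; the model fits comfortably on a single consumer GPU or CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# A short, task-specific prompt of the kind SLMs are typically pitched at.
prompt = "Summarise in one sentence why small language models are cheaper to run."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=60)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))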
A godfather’s warning
Geoffrey Hinton, the British-Canadian computer scientist often referred to as the ‘godfather’ of AI, has warned that the technology could lead to human extinction within the next 30 years, putting the chance of that outcome at 10-20%. Hinton was awarded the Nobel Prize in Physics in 2024 for his foundational contributions to AI. In an interview with BBC Radio 4’s Today programme, in which he discussed his evolving views on AI’s potential risks, he was quoted as saying that “we’ve never had to deal with things more intelligent than ourselves before”. He believes the likelihood of AI causing harm has risen in recent times. “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of,” he said, conceding, as he has on previous occasions, that he wished he had thought about safety earlier, an allusion to his apprehension that AI could ramp up an arms race. Hinton has been speaking about the dangers of unregulated AI development since he resigned from Google in 2023. He made headlines when he left the company, warning that AI-powered machines could one day outsmart people and that bad actors could exploit the technology for destructive purposes.
Gemini vs OpenAI
The rivalry between Google’s Gemini and Microsoft-backed OpenAI has been escalating. According to reports, Google plans to sharpen its business focus on Gemini for customers in 2025 amid the rivalry with OpenAI. Its contractors are reportedly using an internal platform to compare Gemini’s outputs with those of other AI models, relying on Claude, the family of large language models developed by Anthropic, to improve the responses of Google’s own model. TechCrunch reported that the contractors are shown responses generated by Gemini and Claude for the same user prompt and must rate them within 30 minutes on factors such as truthfulness and verbosity. Meanwhile, Google recently unveiled the experimental Gemini 2.0 Flash model in competition with OpenAI, calling it in a blog post its “most capable model yet”. “With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant,” Google said. It also launched a feature called ‘Deep Research’, which uses “advanced reasoning and long context capabilities to act as a research assistant, exploring complex topics and compiling reports”, and is available to Gemini Advanced subscribers.
Meanwhile, OpenAI has unveiled its new o3 and o3-mini models, which are expected to be released later this year. According to reports, the new models are significantly more advanced than their predecessors and outperform them on several performance benchmarks.
AI in Mahakumbh
As AI becomes increasingly pervasive, why should the Mahakumbh be left behind? Keeping pace with the technology, the Uttar Pradesh government plans to use AI-enabled cameras, radio frequency identification (RFID) wristbands and mobile app tracking to keep count of pilgrims at the upcoming religious congregation. The Mahakumbh, which begins in Prayagraj on January 13 and is recognised by UNESCO as an Intangible Cultural Heritage of Humanity, is expected to draw approximately 450 million devotees. The government will also launch a dedicated website and app, an AI-powered chatbot in 11 languages, QR code-based passes for people and vehicles, and a multilingual digital lost-and-found centre for visitors.