Nvidia’s Alpamayo
Nvidia has unveiled Alpamayo, a family of open AI models, simulation tools and datasets aimed at pushing autonomous vehicles beyond pattern recognition towards reasoning. The models use chain-of-thought vision-language-action systems that can work through rare or ambiguous driving scenarios step by step, a capability the company argues is essential for trust and safety. Rather than running directly inside cars, Alpamayo is positioned as a large teacher model that developers can adapt and distil into their own autonomous driving stacks. The release underscores Nvidia’s strategy of shaping industry standards by offering open ecosystems rather than finished products. Jensen Huang, the company’s chief executive, framed the move as a turning point where physical machines begin to reason about the real world, with robotaxis among the earliest commercial beneficiaries.
Govt-run AI clinic
India has launched its first government-run artificial intelligence clinic at the Government Institute of Medical Sciences in Greater Noida, signalling a significant step in embedding AI into public healthcare. The clinic will use AI systems alongside genetic screening to analyse diagnostic data ranging from blood tests to imaging, including X-rays, CT scans and MRI reports. Hospital officials say the tools are designed to support doctors rather than replace existing clinical judgment, positioning AI as an assistive layer within the hospital workflow. The initiative reflects broader ambitions by Indian authorities to modernise public health infrastructure without relying solely on private providers. If scaled, such clinics could help address shortages of specialist expertise, particularly in early detection of cancer and chronic diseases, while raising new questions about data governance, accuracy and accountability in state-run AI systems.
Maduro frenzy
AI-generated images depicting the arrest and removal of Venezuelan president Nicolas Maduro spread rapidly on social media following statements by Donald Trump about a military strike. Fabricated photos and videos, some showing US agents escorting Maduro, circulated alongside genuine footage, blurring the line between fact and fiction. Vince Lago, the mayor of Coral Gables, Florida, posted one such fake photo, showing Maduro being escorted by DEA agents, to Instagram, saying that the Venezuelan president “is the leader of a narco-terrorist organisation threatening our country”. Lago’s post is still up as of this writing. Fact-checking group NewsGuard later identified multiple fake or misleading visuals that had collectively reached millions of users. The episode illustrates how generative AI is amplifying the speed and scale of political misinformation, particularly during crises, and underscores the challenge facing platforms and governments trying to contain viral falsehoods without curbing legitimate reporting.
AI helmet turns vigilante
A Bengaluru software engineer has drawn attention for repurposing artificial intelligence into a personal traffic enforcement tool. By embedding a camera and an AI agent into his helmet, the rider can automatically flag traffic violations in near real time and send evidence, including location and number-plate details, directly to police systems. “I was tired of stupid people on road, so I hacked my helmet into a traffic police device,” wrote the techie, Pankaj Tanwar, on X. The episode reflects both frustration with weak traffic discipline and the growing accessibility of AI tools for civic surveillance. While some have praised the initiative as a creative response to urban chaos, it highlights a grey zone between citizen innovation and vigilantism.
Alexa extends reach
Amazon has introduced Alexa.com, extending its voice assistant into a broader, task-oriented AI experience across web, mobile and voice interfaces. The revamped Alexa is designed not only to answer questions but also to complete tasks, from managing calendars and to-do lists to controlling smart homes and making reservations. For Amazon, the move is an attempt to reassert Alexa’s relevance as competition intensifies from large language model-based assistants.
China regulates AI companions
China’s cyberspace regulator has proposed draft rules to rein in AI ‘boyfriends’ and ‘girlfriends’, reflecting official unease about emotionally responsive chatbots. The Cyberspace Administration of China wants platforms to intervene when users show signs of self-harm or suicidal thoughts and to strengthen protections for minors. The draft bans chatbots from encouraging harmful behaviour, generating obscene or violent content, or simulating real personal relationships, such as posing as family members. While the rules encourage age-appropriate companionship services for the elderly, they draw a clear boundary around emotional manipulation. The proposals highlight Beijing’s approach to AI governance: encouraging useful applications while imposing strict controls on areas seen as socially or psychologically sensitive.
