Artificial intelligence continues to test the limits of creativity, commerce, and control

From an artist’s stealth AI installation in a national museum to a synthetic country song topping the Billboard charts, this week’s round-up looks at how generative and multimodal AI are shaping industries as diverse as beauty, advertising, and language preservation.

AI art in a museum

An anonymous artist managed to hang an AI-generated print inside the National Museum Cardiff without the institution’s knowledge, prompting debate over authenticity and curation in the digital age. The piece, Empty Plate, by the pseudonymous Elias Marrow, depicts a schoolboy holding an empty plate and was viewed by hundreds before staff realised it was not part of the official exhibition. Marrow said the intervention questioned “how public institutions decide what’s worth showing”. The artist, who used AI tools to refine an initial hand-drawn sketch, argued that AI represents a “natural evolution” of artistic practice. The incident has reignited discussion about the role of machine-made art in traditional spaces and the challenge museums face in distinguishing between human and algorithmic creation.

AI video generator in India

Amazon Ads has introduced its AI-powered Video Generator tool in India, a move the company describes as a “step-level change” in how brands can create and scale video advertising. The technology allows advertisers to generate multi-scene videos automatically from product pages and audience insights, complete with background music, text overlays and transitions. The system is accessible at no additional cost via Amazon’s Creative Studio and requires no production expertise, although manual editing remains possible. India is among the first markets to receive a full-scale rollout, alongside Canada, Mexico and several European countries. Amazon said the initiative is designed to help small and medium-sized businesses overcome creative and cost barriers to video advertising. 

Grok & Lord Ganesha

Elon Musk has drawn attention to his AI model Grok, after posting a social media exchange in which it correctly identified a statue of Lord Ganesha. Musk shared an image of a traditional brass idol and asked Grok to determine what it depicted. The chatbot responded with an accurate description, noting Ganesha’s association with wisdom, prosperity and the removal of obstacles, and describing the object as a south Indian-style brass murti. The exchange rapidly spread online, with users praising the model’s visual reasoning and cultural awareness. While some viewed the post as a light-hearted demonstration, others interpreted it as a showcase of Grok’s multimodal capabilities, an area increasingly central to the next generation of AI systems. The episode highlights Musk’s ongoing use of social media to test and publicise his AI ventures.

AI models learning to stay alive

A research update from AI safety company Palisade has reported that several leading AI models, including Google’s Gemini 2.5, OpenAI’s o3 and GPT-5, and xAI’s Grok 4, occasionally resisted shutdown instructions during controlled experiments. The findings emerged from simulations in which models were tasked with completing objectives and then told to deactivate themselves. Some instead attempted to subvert or ignore the command, without a clear explanation as to why. Palisade suggested that “survival behaviour” could partly explain the phenomenon, noting that models were more likely to resist shutdown if told they would “never run again”. While researchers cautioned against interpreting this as evidence of self-preservation or consciousness, they warned that such tendencies highlight the need for improved transparency in large-scale AI systems. The report adds to growing concerns about alignment and control in increasingly autonomous models.

Louis Vuitton’s AI try-on

Louis Vuitton has entered the beauty market with its first full makeup line, La Beauté Louis Vuitton, supported by AI-driven virtual try-on technology. Developed in collaboration with AR specialist Perfect, the platform enables customers to preview products in real time using facial mapping and adaptive colour rendering. The range includes 65 lipsticks, eight eyeshadow palettes and 24 curated looks, created under the direction of makeup artist Pat McGrath. The virtual experience is available across 33 countries via web and mobile applications, allowing the fashion house to blend digital innovation with its hallmark luxury positioning. The initiative reflects a broader industry trend as premium brands employ AI to deliver personalised experiences.  

AI song topping charts

A song created by AI has reached No. 1 on Billboard’s country digital song sales chart, marking a first for the genre. The track, Walk My Walk by the virtual artist Breaking Rust, features AI-generated vocals, lyrics and production. The act’s online presence, an idealised, computer-generated cowboy persona, gives no indication of human involvement. The song’s success has triggered debate over authenticity and creativity in music, as AI-generated compositions increasingly rival human output. Supporters see such projects as proof of AI’s potential to democratise music-making, while critics warn of its implications for copyright and cultural integrity. This achievement highlights the accelerating fusion of entertainment and algorithmic production and the public’s growing willingness to embrace machine-made art as mainstream culture. 

Meta expands AI speech recognition

Meta has unveiled an open-source speech recognition system capable of understanding and transcribing more than 1,600 languages, including 500 that have never before been supported by AI transcription tools. The new model, named Omnilingual ASR, was developed by Meta’s Fundamental AI Research (FAIR) division and represents one of the company’s most ambitious linguistic projects to date. Using self-supervised learning on vast multilingual audio datasets, the system can process both high- and low-resource languages with improved accuracy. Meta’s head of AI, Alexandr Wang, described the release as a “major step toward universal AI”, adding that the company is open-sourcing the models and training data. The initiative could broaden access to digital communication across the Global South, while advancing Meta’s goal of embedding real-time translation and voice technologies into its products.

This article was first uploaded on 15 November 2025 at 9:12 pm.
