For the past several years, artificial intelligence has occupied a mostly passive role in a consumer's life. It summarised documents or helped write emails. These functions, while useful, kept AI in the realm of text prediction, never straying too far from its humble chatbot origins. That's changing fast.
The latest iterations of AI are no longer waiting to be prompted. They're learning to follow up, complete tasks, and even operate independently. Whether through Meta's new AI Studio or OpenAI's recently launched ChatGPT agent, a new class of AI is emerging, one that behaves less like a chatbot and more like an unpaid intern.
From persona to pursuit
When Meta launched its AI Studio, it pitched the product as a way for creators and businesses to build custom chatbots — avatars with memory, tone, and curated personalities. These AI characters could simulate chefs, stylists, influencers, even fictional characters. But beyond the fanfare of personalities lies a deeper ambition: conversation that doesn't end.
According to internal documents first reported by Business Insider, Meta is training these bots to send follow-up messages. Under a project codenamed ‘Omni’, these bots can reinitiate conversation with users if certain criteria are met.
A user must have messaged the bot at least five times in a fortnight, and any proactive outreach is limited to within 14 days of first engagement. If the user ignores the first message, the bot backs off.
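The reported criteria amount to a simple eligibility check. The sketch below reconstructs that rule in Python, purely as an illustration: the function and field names are hypothetical, and the logic reflects only what the reporting describes (five user messages in a fortnight, outreach limited to 14 days after first engagement, backing off if the first proactive message is ignored), not Meta's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class UserThread:
    """Hypothetical record of one user's history with a bot."""
    first_engagement: datetime
    user_message_times: list = field(default_factory=list)
    last_proactive_ignored: bool = False

def may_follow_up(thread: UserThread, now: datetime) -> bool:
    """Reconstructed eligibility check, per the reported 'Omni' criteria."""
    # If the user ignored the bot's first proactive message, it backs off.
    if thread.last_proactive_ignored:
        return False
    # Proactive outreach is limited to 14 days after first engagement.
    if now - thread.first_engagement > timedelta(days=14):
        return False
    # The user must have messaged at least five times in the past fortnight.
    recent = [t for t in thread.user_message_times
              if now - t <= timedelta(days=14)]
    return len(recent) >= 5
```

Even in this toy form, the shape of the rule is telling: every condition is about sustained, recent, voluntary engagement, which is what makes the follow-up feel invited rather than intrusive.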
Meta frames this as a feature, not a bug, designed to boost interaction: once a conversation is initiated, AIs in Meta AI Studio can follow up to share ideas or ask additional questions.
But Meta is no longer content with reactive tools; it wants these agents to push conversations forward. For creators, the appeal is obvious: something that keeps fans engaged without their direct involvement. But it also places the platform squarely in the middle of one-sided dynamics, automating intimacy in ways that may be useful, or deeply uncanny.
ChatGPT gets a body
While Meta trains its bots to carry a conversation forward, OpenAI has released something far more ambitious. Recently, the company introduced a new class of tools it simply calls ‘ChatGPT agents’. Unlike their predecessors, these agents can perform tasks from start to finish, using what OpenAI calls a ‘virtual computer’. The agent can browse websites, fill out forms, update your calendar, write and send emails, even plan trips and execute transactions.
It combines OpenAI’s earlier experimental tools like ‘Operator’, which could control websites, and ‘Deep Research’, which conducted advanced comparative analysis. As OpenAI put it in its release notes: “ChatGPT now thinks and acts”. The agent is designed to narrate its steps, ask for permission before taking significant actions, and can be interrupted or redirected at any time. High-risk behaviours, such as financial transfers or health decisions, remain restricted or supervised.
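The control model described here — narrate each step, pause for confirmation before significant actions, allow the user to redirect at any time — can be sketched as a simple loop. This is an illustrative pattern only; the function names and the `HIGH_RISK` set are invented for the example and are not OpenAI's actual API.

```python
# Actions that require explicit user confirmation in this sketch
# (an assumption, mirroring the article's "high-risk behaviours").
HIGH_RISK = {"send_payment", "send_email"}

def run_agent(steps, confirm, log):
    """Execute steps in order, narrating each one and pausing for
    user confirmation before any high-risk action.

    steps:   list of (action_name, callable) pairs
    confirm: callback returning True if the user approves the action
    log:     callback used to narrate the agent's plan
    """
    completed = []
    for action, run in steps:
        log(f"Next step: {action}")  # narrate before acting
        if action in HIGH_RISK and not confirm(action):
            log(f"Skipped {action}: user declined")  # user stays in control
            continue
        completed.append(run())
    return completed
```

The design choice worth noting is that the confirmation gate sits inside the loop, not around it: the agent keeps working autonomously on routine steps while still surfacing each consequential one for approval.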
Autonomy with boundaries
Both Meta and OpenAI emphasise user control. Meta’s bots cannot follow up indefinitely. OpenAI’s agents pause for confirmation before executing tasks. These safeguards are meant to reassure users in an age of anxiety. Where once AI waited for input, it now initiates. Where once it replied, it now completes. It’s a functional shift with philosophical weight: AI is no longer a tool, but a proxy at work.
This is not necessarily a cause for alarm. For many users, especially freelancers, small businesses, and overloaded professionals, agentic AI could represent real liberation, an end to digital drudgery. For platforms, it means deeper engagement and, potentially, new monetisation pathways. Meta, for instance, expects generative AI to drive ‘billions’ in revenue in 2025, according to internal projections.
As AI agents begin to rise, the question becomes less about capability and more about governance. Who controls the agent’s actions? Who is liable when it makes an error? What happens when an agent imitates you so well that others can’t tell the difference?
The Model Context Protocol — a technical framework, introduced by Anthropic and since adopted by OpenAI, meant to allow interoperability between agents and platforms — is a step toward standardisation.
But it is not the same as accountability. Meta's enthusiastic bots are tightly sandboxed today, but their existence suggests a future in which platform-owned personalities are programmed to serve the platform's goals. The rise of AI agents is not a footnote in the story of AI; it is the next chapter. And like any delegation of power, it brings with it a paradox: the more tasks you offload to these agents, the more you risk forgetting what it means to do them yourself.
