OpenAI chief executive Sam Altman has confirmed that ChatGPT will soon allow adult-oriented content for verified users, marking a notable shift for one of the world’s most closely watched artificial intelligence platforms. The new feature, expected to roll out globally by the end of 2025, will enable users aged 18 and over to access erotica and other forms of mature content after passing an age-verification process.
Until now, OpenAI had maintained a blanket restriction on sexual material, citing the potential for misuse and reputational risk. The change reflects a broader evolution in how AI companies are adapting to user demand while navigating sensitive regulatory terrain. Altman has described the move as an attempt to “treat adults like adults”, while ensuring strong protections for minors through age-check systems and content monitoring. The development underscores a wider trend: generative AI tools are increasingly being used to create or simulate adult content, even as companies and regulators debate where the boundaries of consent, privacy and creativity should lie.
Across the internet, several start-ups have already built businesses around this intersection of technology and intimacy. Apps such as Replika, Nomi and Candy.ai allow users to chat with AI companions capable of romantic or sexual conversation. Others, including DreamGF and CrushOn.AI, combine large language models with image generators, producing customised avatars and roleplay experiences that mimic human interaction. Some apps offer subscription-based “uncensored modes” where adult material can be generated on demand.
In parallel, open-source communities have developed their own AI models capable of creating explicit imagery or video without moderation filters. Platforms like CivitAI host downloadable tools for “NSFW” (not safe for work) image generation, which users can run locally. The resulting ecosystem is vast and largely unregulated, ranging from artistic erotica to deepfake pornography. Industry studies estimate that AI-generated adult visuals already account for a growing share of synthetic media circulated online, with some companies racing to develop detection algorithms that can flag non-consensual material.
Unlike traditional pornography, AI-generated content can be produced without human actors, theoretically eliminating issues of exploitation. Yet the same technology enables the creation of realistic deepfakes using the likeness of real people without consent. Governments in Japan, South Korea and the European Union have already begun drafting or enforcing laws to restrict non-consensual deepfake pornography, citing privacy and defamation concerns.
In the US, several states are pursuing regulation through age-verification laws. Utah, Louisiana, and Texas have introduced requirements for adult websites to confirm user age through digital identity checks. The United Kingdom’s Online Safety Act has similar provisions, mandating that adult platforms deploy “effective age assurance” mechanisms. OpenAI’s upcoming verification framework, which is likely to use ID scanning and behavioural modelling, will be among the first large-scale tests of such technology within an AI context.
Other technology firms are also beginning to experiment in this space. Elon Musk’s xAI has announced plans to introduce a “spicy mode” for its chatbot, Grok, which will permit adult conversation and limited explicit content. Smaller start-ups, such as Unstable Diffusion, continue to develop image models that explicitly market their ability to generate NSFW material. Even virtual reality firms are incorporating AI to create personalised, interactive adult experiences, merging generative imagery with motion and voice synthesis.
For companies like OpenAI, the commercial logic is straightforward. Adult entertainment has historically accelerated adoption of new technologies, from VHS to online streaming and virtual reality, and AI is likely to follow that pattern. Personalised, private and on-demand experiences offer strong economic potential. However, the reputational risk remains high. Any failure to block underage access, prevent misuse of personal likenesses or stop the spread of non-consensual material could draw regulatory and public backlash.
As AI becomes part of everyday life, including emotional and intimate interactions, the question is no longer whether such content should exist, but how safely it can be delivered. A verified, regulated approach with explicit consent standards and transparent moderation may offer a more controlled path than the unregulated alternatives proliferating online.
OpenAI’s decision effectively brings adult-themed AI into the mainstream technology economy, where it can be scrutinised, monitored and taxed. It also signals a recognition that generative AI’s creative potential cannot be fully explored without addressing the themes of intimacy, desire and personal expression that are central to human communication.
Whether this approach succeeds will depend on how convincingly platforms can combine user freedom with safeguards. The rollout will be watched closely not just by regulators and rights groups, but also by rival AI companies assessing whether adult content, long considered taboo in mainstream tech, is now becoming another frontier for innovation.
