In a recent edition of Brandwagon, I wrote about AI companions and how, from offering late-night conversations to simulating emotional support, they are carving out a niche in the digital intimacy economy. Today, I dig deeper and ask a pertinent question: what actually happens to the data that users feed into them?
These platforms are powered by large language models (LLMs) trained on user interactions. That includes everything from a casual hello to intimate disclosures about relationships, trauma, or identity. While marketed as private, judgment-free spaces, AI companions are also data-hungry systems that often operate in legally grey areas when it comes to ownership, consent, and use of personal information.
Data as currency
“AI companion platforms often collect extensive personal data, including users’ emotional expressions and behaviours,” Lalit Kalra, Partner – Cybersecurity Consulting, EY India, told financialexpress.com. “While these interactions may feel intimate, they are governed by platform policies that may lack transparency. The data is frequently used for user profiling, targeted advertising, or model training, sometimes without clear user consent.”
Kalra warns that while some platforms claim to anonymise data, “emotional data is inherently identifiable. Users must understand that their disclosures could be stored indefinitely, repurposed, and monetised, often beyond the scope of the original interaction.”
“The user is essentially feeding the system with their most personal, and at times, intimate moments. Data is the oil that moves that avatar,” Sanjay Trehan, digital media advisor, said. “There is a real possibility of misuse, and consumers must exercise as much caution as they would in the real world.”
Replika, for instance, captures user messages, photos, videos, and sensitive details such as sexual orientation, religious beliefs, and health information. Although it claims not to use this data for advertising, it does license user content broadly, allowing it to be stored, modified, and reused within its services. Character.ai also collects detailed usage data, IP addresses, browsing habits, and device IDs, which can be shared with advertisers.
Legal safeguards fall short
While global data protection frameworks such as the EU’s GDPR, California’s CCPA, and India’s DPDPA provide guardrails, they struggle to keep pace with the complexity of AI systems.
“These laws provide a strong foundation around consent, minimisation, and user rights, but AI companions operate in grey zones,” Dikshant Dave, CEO of Zigment AI, commented. “Emotional data, conversational nuance, and inferred mental states aren’t always explicitly covered. These are areas where regulation hasn’t caught up with technological complexity.”
“Users rarely know if their emotional disclosures are being used to train future models, create targeted psychological profiles, or fuel recommendation engines. Clearer disclosures and user-controlled data deletion mechanisms are urgently needed,” he added.
Trust at risk
The reputational and regulatory risks for companies in this space are significant. “If users feel betrayed because their data is leaked, misused, or commercialised without clarity, trust collapses,” Dave highlighted. “This is especially critical for platforms dealing with emotionally vulnerable users.”
Kalra notes that current laws struggle with the traceability of AI data flows. “The nature of AI makes it challenging to trace how personal data is processed, raising concerns about consent and data minimisation. Companies face significant risks if they mishandle sensitive data, including legal penalties and public backlash.”
A recent lawsuit in the US, Garcia v. Character Technologies, has even raised questions about whether AI companions should be treated as “products” under liability law. A US federal judge’s preliminary ruling opened the door for platforms, and even AI model developers, to be held accountable for harm caused by outputs from such apps.
India’s cautious adoption curve
In India, adoption is growing, especially in wellness and entertainment. “Indian users are rapidly adopting AI-driven engagement tools,” Dave notes. “But culturally, emotional expression in digital spaces is still evolving, and trust remains a significant barrier. Brands that enter this space will need to demonstrate deep sensitivity to privacy, localisation, and psychological safety.”
“Section 3 of the DPDP Act applies to all processing of digital personal data within India, and to foreign entities processing data in connection with offering goods or services to data principals in India,” Siddarth Chandrashekhar, advocate, Bombay High Court, highlighted. Furthermore, under Indian law, any company or organisation that creates or runs an AI companion becomes a “Data Fiduciary,” meaning it is legally responsible for protecting users’ data. This includes ensuring accuracy, applying security safeguards, respecting user rights, and reporting any data breaches to both the authorities and affected users, he added.
AI companions today offer more than conversation; they offer companionship, memory, and even affection. But these interactions are also quietly fuelling product development pipelines and machine learning datasets. “There’s a risk that AI-generated empathy may lead to long-term emotional dependence,” Trehan warned. “While AI may offer short-term succour, it could reinforce isolation in the long term.” And beyond the psychological implications, Kalra pointed out that “platforms own the data, not the user. It may be accessed by internal teams, used to improve AI models, or shared with third-party advertisers.”
AI companions simulate empathy, and that simulation may well do users some good; but these systems are also built to learn from every such exchange, and to monetise it.