Two popular AI-powered Android apps on the Google Play Store have exposed more than 12TB of sensitive user data, comprising millions of personal files, due to critical security misconfigurations, cybersecurity researchers have revealed.
The breach, uncovered by Cybernews researchers, affected users across more than 25 countries, including the United States, Germany, France, China, and Brazil. The exposed information includes personal photos, videos, AI-generated media, and highly sensitive Know Your Customer (KYC) documents such as identity proofs, addresses, phone numbers, and other personally identifiable information, creating what experts describe as a “treasure trove” ripe for identity theft, fraud, and other cybercrimes.
AI apps leak data: Here is what was exposed
The primary app involved is Video AI Art Generator & Maker, developed by Codeway. Launched on June 13, 2023, it had amassed over 500,000 installs and more than 11,000 reviews before the issue surfaced. Researchers found that a misconfigured Google Cloud Storage bucket allowed unauthenticated public access to user-uploaded content, leaking:
– More than 1.5 million user images
– Over 385,000 videos
– Millions of AI-generated files
In total, 8.27 million files accumulated since launch were left exposed, amounting to over 12TB of data.
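Exposures like this are typically visible in the bucket's IAM policy itself. The sketch below is a generic illustration, not Codeway's actual configuration: it checks a policy document (the JSON shape returned by `gsutil iam get gs://<bucket>`) for roles granted to the special `allUsers` or `allAuthenticatedUsers` members, which is what makes a bucket world-readable.

```python
import json

# Members that make a Google Cloud Storage bucket publicly accessible.
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def public_bindings(policy: dict) -> list[str]:
    """Return the roles a bucket IAM policy grants to the public."""
    exposed = []
    for binding in policy.get("bindings", []):
        if PUBLIC_MEMBERS & set(binding.get("members", [])):
            exposed.append(binding["role"])
    return exposed

# Hypothetical policy resembling a misconfigured bucket; the JSON shape
# matches what `gsutil iam get` prints for a real bucket.
policy = json.loads("""
{
  "bindings": [
    {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
    {"role": "roles/storage.admin", "members": ["user:dev@example.com"]}
  ]
}
""")

print(public_bindings(policy))  # → ['roles/storage.objectViewer']
```

Any non-empty result means anyone on the internet can read the bucket's objects without authentication.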
A second app from the same developer, the identity verification app IDMerit, compounded the risk by exposing KYC-related data from users in at least 25 countries, with the US heavily represented. This included full names, addresses, dates of birth, national IDs, and contact details, totalling around a terabyte of sensitive records.
Caused by misconfigurations and hardcoded secrets
The exposures stemmed from two common yet dangerous practices:
Misconfigured cloud storage: Google Cloud buckets were set to allow public access without authentication.
Hardcoded secrets: Sensitive credentials (API keys, database access) embedded directly in app code, making them easily harvestable. Cybernews analysis of Android AI apps showed 72% contained at least one hardcoded secret, averaging 5.1 per app, with many tied to Google Cloud infrastructure.
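Hardcoded secrets are easy to harvest because many credential formats are recognisable by pattern alone. As a minimal sketch (not the tooling Cybernews used), the snippet below matches the well-known shape of a Google API key, `AIza` followed by 35 URL-safe characters, in text extracted from a decompiled app; the sample key is fabricated for illustration.

```python
import re

# Google API keys follow a well-known format: "AIza" plus 35 URL-safe characters.
GOOGLE_API_KEY = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_hardcoded_keys(text: str) -> list[str]:
    """Scan decompiled app code or resources for embedded Google API keys."""
    return GOOGLE_API_KEY.findall(text)

# Hypothetical strings.xml fragment as it might appear in a decompiled APK.
sample = '<string name="maps_key">AIzaSyA1234567890abcdefghijklmnopqrstuv</string>'
print(find_hardcoded_keys(sample))  # → ['AIzaSyA1234567890abcdefghijklmnopqrstuv']
```

Secret-scanning tools apply dozens of such patterns, which is why a key shipped inside an APK should be treated as public the moment the app is released.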
Response from developer and platform
Following disclosure:
– Codeway secured the affected buckets and data access for IDMerit by February 3.
– The Video AI Art Generator & Maker app was removed from public search results on the Google Play Store.
Google has not issued a specific statement on these incidents, but the cases underscore ongoing challenges in vetting AI apps for security.
What to do if you used these AI apps
The leaked data poses severe threats, including targeted scams, impersonation, financial fraud, and privacy violations. Researchers urge Android users to:
– Exercise extreme caution with AI photo/video editing or identity verification apps, especially from lesser-known developers.
– Check for the Google “Verified Developer” badge and review developer portfolios.
– Scrutinise app permissions before granting access to camera, storage, or documents.
– Avoid uploading sensitive identity documents unless absolutely necessary and from trusted sources.
As AI apps proliferate on app stores, this incident serves as a stark reminder of the need for robust security practices in development and stricter oversight to protect millions of users from unintended data exposures.
