Meta is facing another lawsuit, this time over its Ray-Ban Meta AI smart glasses. The parent company of Facebook and Instagram now faces a proposed class action accusing it of misleading consumers about the privacy of the popular devices. The suit claims that footage captured by the glasses, including highly intimate and sensitive moments, was transmitted to overseas contractors for review without users’ knowledge or consent.

The complaint was filed in the US District Court for the Northern District of California (San Francisco Division), and was brought by plaintiffs Gina Bartone of New Jersey and Mateo Canu of California. They are represented by the Clarkson Law Firm, known for pursuing privacy and consumer protection cases against major tech companies.

Core allegations against Meta: False advertising and privacy breaches

The lawsuit focuses on Meta’s marketing claims that the AI smart glasses are “designed for privacy, controlled by you” and “built for your privacy.” Plaintiffs argue that these statements created a reasonable expectation that recordings would remain private and user-controlled, with no indication that human reviewers could access them.

According to the complaint, footage from the glasses, including videos of people undressing, using the bathroom, engaging in sexual activity, viewing pornography, handling financial information like bank cards, and other private home moments, was sent to Meta’s servers and routed to subcontractors in Kenya for data labelling and AI training purposes. Workers reportedly described seeing “everything,” from living rooms to naked bodies and explicit encounters, often with users appearing unaware they were being recorded.

The suit alleges violations of privacy laws and false advertising, asserting that Meta failed to disclose the potential for human review of such sensitive content. This omission allegedly led users to capture intimate moments under false pretenses of privacy.

The case was sparked by an investigative report from Swedish newspapers Svenska Dagbladet (SvD) and Göteborgs-Posten, which revealed that contractors employed by Meta subcontractor Sama in Nairobi, Kenya, regularly reviewed user footage for AI improvement. Interviewees described exposure to disturbing and highly personal clips, including bathroom visits, nudity, footage recorded during intimate acts, and accidental captures of sensitive personal data.

Meta’s response to the fiasco

Meta spokesperson Christopher Sgro had earlier addressed the underlying report, stating, “Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device.” He added that human review occurs only when users share content with Meta AI, for purposes like improving the experience, a practice he described as common among tech companies. Meta says it filters data to protect privacy and blurs faces to prevent identification.

The company has not commented directly on the lawsuit itself, but the allegations add to ongoing scrutiny of wearable AI devices. The UK’s Information Commissioner’s Office has contacted Meta over the report, seeking details on compliance with data protection laws.