Meta is extending its Teen Accounts system, a suite of safety features aimed at protecting users under 18, to Facebook and Messenger, following its initial rollout on Instagram last September, the BBC reported. The expansion, which begins this week in the UK, US, Australia, and Canada, will automatically place teen users into stricter default settings, including limits on who can contact them and parental permission requirements for certain actions, such as going live or turning off image protections.

According to media reports, the move comes as major tech firms face mounting scrutiny over the safety of young users online. In the UK, the Online Safety Act now compels platforms to prevent children from encountering harmful or illegal content, adding legal pressure to existing public concern.

Meta claims the introduction of Teen Accounts on Instagram last year “fundamentally changed the experience” for young users. Since the launch, more than 54 million accounts globally have been converted into Teen Accounts, and the company says 97% of 13- to 15-year-olds have retained the built-in protections. These include private-by-default accounts, limited messaging, and automatic blurring of suspected nude images in direct messages. Younger teens (13–15) require a parent or guardian’s approval to modify these settings, while older teens (16–17) have more autonomy.

However, critics remain unconvinced about the system’s effectiveness. Andy Burrows, CEO of the Molly Rose Foundation, told the BBC it was “appalling” that Meta has yet to provide evidence of whether the safety tools reduce exposure to harmful content. “Eight months on, we still don’t know what content these measures target — or if they’re working,” he said. Matthew Sowemimo of the NSPCC echoed these concerns, stressing that safety settings alone are insufficient. “These changes must be paired with proactive efforts to stop harmful material from spreading in the first place,” he said.

Meta uses age verification tools such as video selfies, and starting this year it plans to use artificial intelligence to flag users suspected of lying about their age. Misrepresenting age is a common issue: Ofcom data from 2024 shows that nearly one in four 8–17-year-olds lie about their age online. Some teens who spoke to the BBC admitted it is still “very easy” to lie during sign-up, raising questions about how well Meta can enforce its teen protections.

Despite the skepticism, some experts view the rollout as progress. Drew Benvie, CEO of digital consultancy Battenhall, described the shift as a positive sign. “For once, big social media platforms are prioritising safety over engagement,” he told the BBC, though he warned that teens often find ways to bypass restrictions. Roblox and other platforms have recently expanded parental control options, reflecting growing industry momentum toward safeguarding younger users.

Professor Sonia Livingstone of the Digital Futures for Children centre welcomed the expansion but warned that platform responsibility shouldn’t stop at new features. “Meta must be accountable not just for its safety settings, but for the broader impact of its business model on young users,” she said. Campaigners argue that companies must not shift the burden of safety onto children and their families. “Tech firms must take responsibility — and regulators like Ofcom must hold them accountable when they don’t,” Sowemimo said. As Meta prepares for the full rollout, under-18s will soon begin receiving in-app notifications alerting them to the upcoming changes, designed to curb unwanted contact and exposure to explicit content.