Instagram is tightening its safeguards for teenage users by deploying artificial intelligence to spot accounts that may be misrepresenting their age. Parent company Meta announced that the platform will now actively review accounts suspected of being run by minors pretending to be adults, and will automatically switch them to teen accounts when necessary.

The move is part of a broader initiative to create safer, age-appropriate digital spaces for younger users. Instagram will use AI to detect behaviour and activity patterns that don’t match the age listed on a user’s profile. If an account is registered as an adult but its activity resembles that of a teenager, the system will step in and reclassify it.
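Meta has not published how this detection works, but conceptually it resembles a classifier that combines weak behavioural signals into a single score. The sketch below is purely illustrative: the signal names, weights, and threshold are assumptions for the sake of the example, not Meta’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Illustrative behavioural signals; Meta's real feature set is not public."""
    stated_age: int
    teen_creator_follow_ratio: float     # share of followed accounts popular with teens
    school_related_hashtags: int         # posts tagged with school or graduation terms
    age_revealing_comments: int          # e.g. birthday wishes like "happy 15th!"

def likely_underage(signals: AccountSignals, threshold: float = 0.6) -> bool:
    """Combine weak signals into one score and flag the account if it crosses
    a threshold. Weights and threshold here are invented for illustration."""
    if signals.stated_age < 18:
        return False  # already a teen account; nothing to reclassify
    score = 0.0
    score += 0.4 * signals.teen_creator_follow_ratio
    score += 0.3 * min(signals.school_related_hashtags / 5, 1.0)
    score += 0.3 * min(signals.age_revealing_comments / 3, 1.0)
    return score >= threshold

# An adult-listed profile with strongly teen-typical activity gets flagged:
flagged = likely_underage(AccountSignals(
    stated_age=21,
    teen_creator_follow_ratio=0.8,
    school_related_hashtags=6,
    age_revealing_comments=2,
))
print(flagged)  # True
```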

Accounts reclassified as teen profiles will face stricter privacy and safety settings. These include default private profiles, limits on who can send direct messages, and restrictions on viewing sensitive content such as violent videos or cosmetic procedure promotions.
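In practice, reclassification amounts to flipping a bundle of defaults at once, overriding whatever the user had chosen. A minimal sketch, assuming hypothetical setting names (Instagram’s real configuration keys are not public):

```python
TEEN_DEFAULTS = {
    # Hypothetical setting keys; Instagram's actual names are not public.
    "profile_visibility": "private",
    "direct_messages": "followers_only",
    "sensitive_content": "restricted",  # e.g. violent videos, cosmetic-procedure ads
}

def apply_teen_defaults(account_settings: dict) -> dict:
    """Return a copy of the account's settings with teen defaults enforced.

    Enforced keys override the user's own choices, matching the article's
    description of automatic reclassification; unrelated settings survive.
    """
    return {**account_settings, **TEEN_DEFAULTS}

adult_settings = {"profile_visibility": "public", "direct_messages": "anyone",
                  "sensitive_content": "allowed", "theme": "dark"}
print(apply_teen_defaults(adult_settings))
# {'profile_visibility': 'private', 'direct_messages': 'followers_only',
#  'sensitive_content': 'restricted', 'theme': 'dark'}
```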

In addition, Instagram continues to roll out digital wellbeing tools tailored for younger users. Teens receive nudges to take breaks after an hour of screen time, and a “sleep mode” feature silences notifications and activates auto-replies during nighttime hours.
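At its core, a sleep-mode feature is a time-window check that has to handle a window wrapping past midnight. The article does not specify the hours, so the 22:00–07:00 window below is an assumption, as is the notification-handling helper:

```python
from datetime import time

# Assumed sleep window; the article does not state the actual hours.
SLEEP_START = time(22, 0)
SLEEP_END = time(7, 0)

def in_sleep_mode(now: time, start: time = SLEEP_START, end: time = SLEEP_END) -> bool:
    """True if `now` falls inside a window that may span midnight."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # window wraps past midnight

def handle_notification(now: time, message: str) -> str:
    """Mute notifications and send an auto-reply during sleep hours."""
    if in_sleep_mode(now):
        return "muted; auto-reply sent"
    return f"delivered: {message}"

print(handle_notification(time(23, 30), "new comment"))  # muted; auto-reply sent
print(handle_notification(time(12, 0), "new comment"))   # delivered: new comment
```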

The update arrives as social media platforms face heightened scrutiny over how they protect young users. With state-level age verification laws in the U.S. running into legal challenges, Meta has argued that app stores, rather than individual platforms, should shoulder the responsibility of verifying users’ ages. To tackle the issue internally, Meta says it is relying on a combination of profile clues, behavioural patterns, and account activity to determine if a user might be underage.