Meta rolls out new online safety measures for teens and children

Webdesk | 24 Jul 2025
Meta has introduced a series of new safety updates across its platforms, including Instagram and Facebook, aimed at improving protection for teens and children online.
The updates focus on three core areas: enhancing direct messaging safeguards for teenagers, expanding nudity protection features, and strengthening protections for accounts primarily featuring children and managed by adults.
For teen users, Meta has added more contextual information to direct messages, such as account creation date and safety tips, along with a one-click option to block and report message senders.
In June alone, teen users blocked more than 1 million accounts and reported another 1 million after being shown a safety notice by the platform.
To address rising concerns over cross-border sextortion scams, Meta has launched a new “Location Notice” feature on Instagram. The alert notifies users when they are communicating with someone located in another country, a tactic often linked to online exploitation. The company said more than 10% of users who saw the alert clicked on it to learn about protective actions.
Nudity protection, which automatically blurs suspected nude images in direct messages, remains widely enabled: as of June, 99% of users, including teens, had the feature turned on. Meta noted that the tool led to a 45% drop in the sharing of explicit content after users received a warning.
Meta is also extending teen-level protections to adult-managed accounts that prominently feature children, such as those run by parents or talent agents. These updates include stricter message controls, the automatic filtering of offensive comments using Hidden Words, and reduced visibility of such accounts to suspicious users in search and recommendations.
The company has also restricted monetisation features on these accounts, such as removing the ability to accept gifts or offer paid subscriptions.