Meta, the parent company of Instagram, has announced new features aimed at protecting teenagers and deterring potential scammers on its platform. Amid growing concern about harmful content and its impact on young users, Meta said it will trial a feature that blurs direct messages containing nudity to shield teenagers from malicious individuals. The protection uses on-device machine learning to analyze images for nudity before they are sent.
The feature will be enabled by default for users under 18, and Meta is urging adults to turn it on as well. Because the analysis happens on the device, nudity protection will work even in end-to-end encrypted chats, preserving privacy while emphasizing safety. Meta also said it is developing technology to identify accounts involved in sextortion scams, with plans to test pop-up messages warning users who may have interacted with such accounts, signaling a proactive approach to tackling online exploitation.
The moves come amid mounting legal pressure in the United States and Europe, and follow commitments Meta made in January to hide more sensitive content from teenage users on Facebook and Instagram. In October, attorneys general from 33 U.S. states sued the company, alleging it misled the public about the risks of its platforms, while the European Commission has sought information on Meta's child protection measures. Against that backdrop, Meta's latest steps signal an effort to address concerns about harmful online content and to protect young users.