Instagram’s ‘Teen Accounts’ Fail to Protect Children: New Report Reveals Persistent Risks
Instagram continues to pose significant risks to young users, even with the introduction of new ‘Teen Accounts’ designed to provide enhanced protection and parental controls, according to research by online child safety charity 5Rights Foundation shared with BBC News Investigations. The report reveals critical vulnerabilities in these accounts, demonstrating that they do not effectively shield teenage users from harmful content.
The researchers created multiple fake Teen Accounts using false birthdates, circumventing the platform’s age verification, and found that immediately upon signup the accounts were suggested adult accounts to follow and message. The platform’s algorithms also recommended posts containing large numbers of hateful comments and sexually suggestive imagery, contradicting Meta’s claims of offering a safer environment for young people.
The 5Rights Foundation’s investigation highlighted concerns beyond content exposure alone. Researchers also flagged recommendations of commercialised content and the addictive design of the app itself. Baroness Beeban Kidron, founder of 5Rights Foundation, emphasised the inadequacy of the Teen Accounts, stating: “They are not checking age, they are recommending adults, they are putting them in commercial situations without letting them know and it’s deeply sexualised.”
The launch of Instagram’s Teen Accounts follows the passage of the UK Online Safety Act, which mandates that platforms implement robust measures to protect children online. The Act is scheduled to be fully enforced within three months, requiring social media companies such as Meta to demonstrate that they have systems in place, including rigorous age checks, safer algorithms that avoid recommending harmful content, and effective content moderation. Ofcom, the UK’s independent communications regulator, will oversee enforcement.
In a related development, BBC News Investigations also uncovered concerning activity on X (formerly Twitter), where groups dedicated to self-harm – often referred to as “communities” – boast tens of thousands of members and share graphic images and videos related to self-injury. Researchers, including Becca Spinks, discovered communities with over 65,000 members actively engaged in sharing such content, sometimes even prompting discussions on methods of self-harm.
X has not responded to requests for comment regarding these findings. However, in a submission to an Ofcom consultation last year, the platform stated its commitment to complying with the Online Safety Act and maintaining a safe service, asserting it has “clear rules in place to protect the safety of the service and the people using it.”
The investigation underscores the ongoing challenge of effectively regulating social media platforms and protecting children from harmful content. It raises serious questions about the true efficacy of measures like Teen Accounts and highlights the need for stronger oversight and proactive interventions.