Response from Meta regarding safety report

A report from several children's organizations claims that safety on Instagram is inadequate. Meta has sent Barnevakten a response.

Below is the response that Meta/Instagram sent to Barnevakten when we wrote about the report claiming that many of the tools meant to protect teenagers on Instagram do not work. You can then weigh the report and the response against each other yourself.

Response from Meta / Instagram
Misleading and dangerously speculative reports such as this one undermine the important conversation about teen safety. This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today. Teen Accounts lead the industry because they provide automatic safety protections and straightforward parental controls. The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night. Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We’ll continue improving our tools, and we welcome constructive feedback – but this report is not that.
  • Consolidation of time management tools: In 2021, we released a suite of time management tools; when we later launched Teen Accounts, we shared our plan to update and simplify those tools as part of that launch, making them easier to understand and use.
  • Child predator search terms update: This is outdated; this work is now done by an automated system backed up by human input.
Additional information:
There is a long list of issues with this analysis. A few examples:
  • This report states that ‘it’s easy to tell when a social media platform wants you to use one of its features. A platform will typically turn the feature on by default or proactively encourage you to turn it on, for example, through in-app prompts.’
    • We do exactly this. With Teen Accounts, teens are automatically defaulted into private accounts, our strictest messaging settings, Sleep Mode, Hidden Words, nudity protection and a range of other safety features – and under 16s can’t turn these settings off without a parent’s permission, and they need to have set up parental supervision in order to request permission.
    • As we announced earlier this year, since introducing Teen Accounts, 97% of teens aged 13-15 have stayed in these built-in restrictions.
  • The report alleges that we don’t remind teens of the benefits of private accounts when they request to switch to a public account. This is inaccurate. Before older teens can switch to a public account, we show a pop-up screen reminding them of the implications – for example, that anyone will be able to see their posts, Reels and Stories. Younger teens can’t switch to a public account without a parent’s permission.
Categorizations of features as ‘red’ or ‘yellow’ based on a misunderstanding of their purpose:
  • The report gives 9 safety features a ‘red’ rating because they were ‘discontinued’, inaccurately suggesting that these features were abandoned. In fact, thanks to ongoing feedback from experts, parents and teens, and our focus on continuous improvement, we incorporated these features into newer, updated versions. For example:
    • Take A Break, which we launched in 2021, became duplicative when we launched Teen Accounts, which does something similar by nudging teens to leave the app after 60 minutes and defaulting them into Sleep Mode overnight. We were public about this, updating our original Take A Break announcement.
  • Several other features were rated as ‘red’ and labelled as ineffective, because the testers misunderstood and mischaracterized what they were meant to do. For example (non-exhaustive):
    • Restricting adults from starting private chats with teens they’re not connected to. The testers argue these restrictions are ineffective because teens can still message adults they’re not connected to. That entirely misses the point of these restrictions, which are designed to protect teens from unwanted contact from adults they don’t know. Teens can still initiate contact with adults, because that’s a clear signal that the contact is not unwanted, and it’s very likely that the adult is someone they know.
    • Our Hidden Words feature is described as ineffective because it did not appear to filter offensive messages in a conversation between the test accounts. That’s because Hidden Words only filters offensive DM requests, since that is where people usually receive abusive messages – unlike your regular DM inbox, where you receive messages from friends who may joke with each other in different ways. We were clear about this distinction when we first announced Hidden Words. As a reminder, DM requests (messages from people you don’t follow) are already off by default for Teen Accounts.
    • Safety Notices are described as ineffective by testers because they didn’t appear in their test conversations with adults. This misunderstands how Safety Notices work. They do not appear in every chat between a teen and an adult. They’re designed to appear when a teen is in a chat with an adult who has shown potentially suspicious behavior – such as being reported or blocked by a teen, among many other signals. We recently shared that, in June alone, teens blocked accounts 1 million times and reported another 1 million after seeing a Safety Notice.
    • Preventing potentially suspicious adults from interacting with teens. Again, the testers claim this isn’t working because they’ve misunderstood how we define ‘suspicious’. As we’ve explained publicly across several announcements, we identify potentially suspicious adult accounts based on specific signals and behaviors – such as whether they’ve been recently blocked or reported by a teen (among many other signals). Depending on the number and strength of these signals, we take precautionary steps to prevent them from finding, following and interacting with teen accounts. The testers inaccurately assumed that, because their adult test accounts could request to follow teens, these protections weren’t working.
    • Nudging people to be kind in DMs: The testers argued this was red because they weren’t seeing the nudges in the DM chats of their test accounts. This is because – as was made clear in the original blog post announcing the feature – these reminders appear at the bottom of new chats with creators and were designed to encourage more respectful outreach between people and public figures.
    • ‘Not Interested’ feedback: The testers claim that sharing this feedback has no impact. This is inaccurate, as we take it as an important signal for future recommendations. People can also hide multiple pieces of recommended content in Explore at a time, which also informs future recommendations, and they can use the Hidden Words tool to prevent content containing specific words or phrases from being recommended to them. Last year, we also announced a new way for people to reset their recommendations altogether, if they feel they’re no longer interested in the topics they’re seeing and want to start fresh.
Criticism of reporting tools
  • This report includes various criticisms of our reporting tools. While no system is perfect and we continue to work on making these tools more intuitive and effective, many of these criticisms are unfounded. For example:
  • The report alleges that we design our reporting tools to discourage adoption. This is fundamentally untrue.
    • Reports from teens across DM and Messenger increased by over 60% in 2024 compared with 2023, in part thanks to improvements we’d made to our reporting flows.
  • The report also claims that teens can’t easily report sexualized comments or messages. This is not true. Every comment and message on Instagram can be reported in just a couple of taps, with an option to report for sexual solicitation.
  • The report claims that teens cannot report ‘unwanted advances’. While the report fails to specify what is meant by ‘unwanted advances’, we offer a range of potential harms teens can choose from when reporting a post, comment, profile or message on Instagram – and teens can always block or restrict an account they feel is bothering them.
Link to a press release related to this topic:
I also recommend reading this recent BBC news article: