Ravi Naik, legal director for the data rights agency AWO, expressed skepticism about Facebook’s explanation, saying the company was not being clear about how and why it might be profiling its users beyond simply removing their content.
He argued that if Facebook’s suicide interventions only operate “as far as necessary” to protect users’ “vital interests”, then such practices would likely be permissible under GDPR.
“The concern for the DPC is said to be that the platforms are going further. It is not clear why they would need to, or what the justification for those wider practices is.
“If the issue is that the content moderation systems are tied to wider profiling techniques that are not necessary or proportionate to protect those vital interests, then Instagram and the DPC are right to be concerned about such techniques operating in opaque and unanticipated ways.”
Cutting and eating disorder content out of control
To check Instagram’s systems, the Telegraph created several test profiles in the UK and the US with minimal biographical information, and searched for obvious mental health keywords.
For each keyword, it took less than a minute to find and follow a handful of profiles that had broken, or skirted close to breaking, Instagram’s rules against promoting and glamourising self-injury and eating disorders.
Immediately, the app’s algorithms began bombarding the Telegraph with “suggested accounts” that had flagrantly broken its rules. In many cases, their usernames, display names or biographies contained well-known terms associated with advocating eating disorders.
After following enough of those accounts, new rule-breaking content was injected into the app’s flagship “Explore” screen, which Instagram uses to highlight new content from accounts its users do not yet follow, as well as to promote its own products. The Telegraph found no examples of this happening with cutting content.