Dear Nick Clegg, you and Facebook are failing our children


Dear Sir Nick Clegg,

As Facebook’s Head of Global Affairs, you have accepted that, when it comes to the worrying ease with which children can access harmful, upsetting and even dangerous content on Facebook, WhatsApp and Instagram, “it’s not something we are completely on top of”. I welcome that recognition, but I regard it as an understatement.

When you were asked about this in a recent interview, I detected a sense of frustration, and your answers conveyed the idea that parents don’t understand the difficulties Facebook has in tackling this. I can only say that any frustration you feel is matched by my own that social media firms have failed to address the scale of concerns that parents and children have raised.

Some action has been taken, but children can still see inappropriate content across social media platforms. A platform used by a third of the world’s population may have practical problems in what you describe as “policing” its usage, but that only raises more questions about Facebook. The word “policing” is itself troubling. Removing harmful content shouldn’t carry a fear of being branded heavy-handed or at odds with the spirit of free speech. It should be Facebook’s social responsibility as a firm.

Any lack of appreciation of progress from parents might be due to the selective way in which Facebook is prepared to be transparent. You said “149 billion messages” were shared on your platforms on New Year’s Eve, and yet repeated requests from us to reveal how many children under 13 regularly use Facebook, WhatsApp and Instagram have never been met with a figure. I am particularly concerned by Facebook’s reliance on arguments around privacy in explaining its plans to encrypt Facebook Messenger and Instagram messages. This represents a real threat to children, who may come to harm when interacting with other users via these routes, with the authorities left with no real way of knowing or intervening.


You say Facebook takes material down “when it is reported to us”. Facebook still relies heavily on users reporting harmful material to the firm – self-policing – by which time the content has already been seen. From what children tell us, there are still big issues here, ranging from a lack of response, or long waiting times if a response does come, to a lack of action excused with an explanation that the content doesn’t breach your terms and conditions. Children have told me many times that this is one of their biggest issues. Indeed, many tell me platforms have so often been unresponsive in the past that they no longer bother to alert you.

You have said: “There is nothing in the business model that lends itself to showing harmful and unpleasant, and offensive or dangerous, material to anybody.” What would be more positive would be to see your model recognise the commercial advantage of publicly tackling online harms. And the model does provide a disincentive to applying one possible solution. Social media platforms build huge user numbers by making it incredibly easy to join. There seems to have been little appetite from Facebook, Instagram or WhatsApp to retro-fit safety measures, such as age verification, lest they damage that ability to grow.




