Facebook Inc. is reviewing how it handles child abuse videos and how it prevents underage children from holding accounts, as the social network grapples with concerns that it isn’t doing enough to protect users.
The tech giant is reviewing its policy of not removing so-called non-sexual child abuse videos when they are shared to condemn such behavior and the child in question is still at risk, Facebook Ireland’s head of public policy Niamh Sweeney said at an Irish parliamentary hearing Wednesday.
The company is also working to update “guidance for reviewers to put a hold on any account they encounter if they have a strong indication” that it belongs to a child under 13 years of age.
Sweeney appeared before lawmakers in Dublin alongside the firm’s head of content policy for Europe, Siobhan Cummiskey, following allegations about Facebook’s content moderation practices highlighted in a documentary broadcast by the United Kingdom’s Channel Four Television Corp. She reiterated the company’s apology for “failings” identified in the documentary.
Sweeney’s comments come as the world’s biggest social network faces weak user growth and criticism of its content policies and data privacy issues. Its shares fell by about a fifth last week after second-quarter user numbers and revenue missed market expectations.
It was a “mistake” not to remove a video of “a three-year-old child being physically assaulted by an adult,” Sweeney said. Facebook only allows that type of video to be shared if it is “to condemn the behavior and the child is still at risk and there is a chance the child and perpetrator could be identified to local law enforcement as a result of awareness being raised.”
Sweeney denied that maintaining Facebook’s stock price played a role in how the company handles content.
On how it handles hate speech, Facebook is “increasingly using technology to detect hate speech on our platform which means we are no longer relying on user reports alone,” Sweeney said.
“Of the 2.5 million pieces of hate speech we removed from Facebook in the first three months of 2018, 38 per cent of it was flagged by our technology,” she added.
The company is also updating oversight of training for staff who review content on the website. - Bloomberg