Facebook details scale of abuse on its site

Facebook said that 2.2 billion people used its service at least once a month as of March.

Facebook says it deleted or added warnings to about 29 million posts that broke its rules on hate speech, graphic violence, terrorism and sex over the first three months of the year.

It is the first time that the firm has published figures detailing the scale of efforts to enforce its rules.

Facebook is developing artificial intelligence tools to support the work of its 15,000 human moderators.

But the report suggests the software struggles to spot some types of abuse.

For example, the algorithms flagged only 38 per cent of the hate speech posts identified over the period, meaning the remaining 62 per cent were addressed only because users had reported them.

By contrast, the firm said its tools spotted 99.5 per cent of detected propaganda posted in support of Islamic State, Al-Qaeda and other affiliated groups, leaving only 0.5 per cent to be flagged by the public.

The figures also reveal that Facebook believes users were more likely to have encountered graphic violence and adult nudity on its service over the January-to-March quarter than in the prior three months.

But it said it had yet to develop a way to judge if this was also true of hate speech and terrorist propaganda.

“As we learn about the right way to do this, we will improve the methodology,” commented Facebook’s head of product management, Guy Rosen.

Detection technology

The company estimates that about three to four per cent of all active accounts on Facebook are fake, and said it had taken down 583 million fake accounts between January and March.

The figures indicate that graphic violence spiked massively, up 183 per cent between the two quarters covered by the report. Facebook said a mix of better detection technology and an escalation in the Syrian conflict might account for the rise.

A total of 1.9 million pieces of extremist content were removed between January and March, a 73 per cent rise on the previous quarter.

That will make promising reading for governments, particularly in the US and UK, which have called on the company to stop the spread of material from groups such as Islamic State.

“They’re taking the right steps to clearly define what is and what is not protected speech on their platform,” said Brandie Nonnecke, from University of California, Berkeley’s Centre for Information Technology Research in the Interest of Society.

But, she added: “Facebook has a huge job on its hands.” The complexity of that job becomes clear with hate speech, a category that is much more difficult to police via automation.

The firm tackled 2.5 million examples in the most recent period, up 56 per cent on the October-to-December quarter.

Human moderators were involved in dealing with the bulk of these, but even they faced problems deciding what should stay and what should be deleted.

“There’s nuance, there’s context that technology just can’t do yet,” said Alex Schultz, the company’s head of data analytics. “So, in those cases we lean a lot still on our review team, which makes a final decision on what needs to come down.”