Content identified as hate speech on Facebook rises 389% in 1 year, but network says views have fallen


The platform says it removed or reduced the reach of 26.9 million pieces of hateful content worldwide during the fourth quarter of 2020.

Facebook released its fourth-quarter 2020 Community Standards Transparency Report on Thursday (11), detailing the volume of harmful content identified on its platforms during the period.

The social network removed or reduced the reach of 26.9 million pieces of hate speech content worldwide between October and December 2020.

The figure represents a 21% increase over the previous quarter and a 389% increase over the same period of 2019, when 5.5 million posts were identified.
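As a quick sanity check, a sketch using only the figures cited above shows how the year-over-year percentage follows from the two totals:

\[
\frac{26.9 - 5.5}{5.5} \times 100\% \approx 389\%
\]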

The company attributed the rise to updates to its moderation technology for Arabic, Spanish, and Portuguese. The October-to-December period also coincided with the election season in the US.

Fewer hate speech views

Despite the increase, the platform says the prevalence of hate speech has decreased. Prevalence is an estimate of the percentage of content views in which people saw material that violates the social network's rules.

According to the data, for every 10,000 content views, 7 to 8 included some hate speech material.

The rate is lower than in the previous quarter, when the metric was first adopted: at the time, 10 to 11 out of every 10,000 content views contained some hate material.
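Expressed as percentages of all content views (a sketch based only on the per-10,000 figures reported above), the two quarters compare as follows:

\[
\text{Q4 2020: } \frac{7}{10{,}000} = 0.07\% \;\text{to}\; \frac{8}{10{,}000} = 0.08\%
\qquad
\text{Q3 2020: } \frac{10}{10{,}000} = 0.10\% \;\text{to}\; \frac{11}{10{,}000} = 0.11\%
\]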

Use of artificial intelligence

Facebook said its "proactive" moderation rate, the share of violating posts or comments that artificial intelligence flags before anyone reports them, has improved in "problem areas", specifically bullying and harassment.

The “proactive” rate in this category went from 26% in the third quarter to 49% in the fourth quarter on Facebook and from 55% to 80% on Instagram, according to the report.

The social network said that its human content review capacity is still affected by the pandemic, and that moderators are focusing on what the platform considers the "most harmful content", such as posts about suicide and self-harm.
