
Facebook says it's flagged 50 million misleading coronavirus posts since March

The number of Instagram posts related to suicide or self-harm that Facebook took action against increased by 40%, it said.


MORE THAN 50 million pieces of content were given warning labels on Facebook for being misleading in relation to coronavirus, the social network has revealed.

Publishing its latest Community Standards Enforcement Report, Facebook said that since 1 March, it had removed more than 2.5 million pieces of content linked to the sale of medical items such as masks and Covid-19 test kits.

The social network also revealed its Covid-19 Information Centre, which shows health and virus information from official sources, had now directed more than two billion people to resources from health authorities.

Social media platforms, including Facebook, have been repeatedly criticised over the amount of disinformation and harmful content linked to the Covid-19 outbreak which has spread online.

A number of charities and online safety organisations have warned that with people – particularly children – spending more time online during lockdown, more stringent monitoring of online platforms is needed.

In its Enforcement Report, Facebook said its detection technology was now finding around 90% of the content the platform removes before it is reported to the site.


Among the other actions taken against posts on Facebook, the social media giant said it doubled the amount of hate speech content it ‘took action’ on in the space of three months: from 5.7 million pieces of content removed in Q4 2019 to 9.6 million removed in Q1 2020.

Taking action could mean removing the content, putting a warning on the post to mark it as harmful, or adding a fact-check next to a piece of content that is misleading.

It’s not clear whether this increase in hate speech being removed is a result of Facebook’s actions, or whether there has been an increase in the amount of hate speech being posted on Facebook.


Facebook said that it took action on about 2 million pieces of suicide or self-harm content in Q2 2019, of which 96.1% was detected before it was flagged.

“We saw further progress in Q3 when we removed 2.5 million pieces of content, of which 97.3% we detected proactively,” it said. 

The report introduces Instagram data in four issue areas: Hate Speech, Adult Nudity and Sexual Activity, Violent and Graphic Content, and Bullying and Harassment. 



On Instagram, which is also owned by Facebook, the amount of suicide and self-injury content it took action against increased by 40%, the company said.

“On Instagram, we saw similar progress and removed about 835,000 pieces of content in Q2 2019, of which 77.8% we detected proactively, and we removed about 845,000 pieces of content in Q3 2019, of which 79.1% we detected proactively,” it said.

As part of the announcements, Instagram confirmed several new features designed to combat bullying and unwanted contact on the platform.

The app confirmed users will be able to choose who can tag and mention them in posts, delete negative comments in bulk, and block or restrict multiple accounts that post such comments at once.

- with reporting from Gráinne Ní Aodha

About the author:

Press Association
