
'An impossible task': Facebook counter-terrorism chief on trying to rid social media of extremist content

Dr Erin Marie Saltman was speaking during a visit to Dublin this afternoon.

Dr Saltman speaking in Dublin today.
Image: Lorcan Mullally, IIEA

FACEBOOK’S HEAD OF counter-terrorism for Europe, the Middle East and Africa has said the tech giant faces the same challenges that governments face in the effort to tackle the rise of terrorist content and hate speech shared on its platform. 

In the first three quarters of the year, teams behind the social media platform removed over 18 million pieces of terrorist content.

Facebook uses AI technology to automatically remove images and other media that have been previously flagged as relating to terrorism or hate speech, as well as employing internal staff to monitor activity in this area. 

It also relies on users who flag content that might be considered hate speech. 

Dr Erin Marie Saltman, speaking at an event at the Institute of International and European Affairs in Dublin, described the challenge of ensuring this content does not appear online as an “impossible task” despite ongoing collaborations within the industry.

“Our policies are having to tackle everything governments have to tackle, everything from what you do when someone passes away… all the way to what is the line between offensive humour and hate speech, and that’s really culture-specific sometimes,” she said. 

“All of our policies are meant to be global and they kind of have to be because what do you do if content originates in Singapore, and somebody in France is sharing that with somebody in the US. What jurisdiction is that in? It’s a bit of an impossible task.”

Terrorist organisations

Almost 90% of Facebook users live outside the US and Canada. In a bid to track the activity of terrorist organisations on the social media platform, Facebook uses the lists of terrorist organisations designated by the US, the UN and the EU. 

It also has its own policy to identify content from organisations which do not appear on those lists, but a challenge remains in defining what a terrorist or hate group is, and how that varies from country to country. 

“We actually have our own definition of terrorist organisations and terrorists as well as what we consider a hate-based organisation… we do have to go above and beyond to act quickly and part of the reasons for that are things like the New Zealand attack or the Halle, Germany attack. 

“With some of these non-traditional terrorist groups you might have the terrorist actor but he or she is not attached to a terrorist organisation as such,” Saltman added. 

“Usually these white supremacy terrorist attackers for example are not going to be on any of these lists.”

Under the Global Internet Forum to Counter Terrorism, founded by YouTube, Twitter, Microsoft and Facebook in 2017, the four companies share counter-terrorism information with each other to prevent terrorist content being shared on their platforms. 

“Our common ground for a definition of terrorism is the UN list, because we might have our own definition, other companies might have no definition. Other companies might just say we take down terrorism but not even touch how they’re defining that. 

“Maybe in your internal teams, bomb-making material might seem a little more dangerous than your generic weird fanboy terrorist content, there’s a lot of weird stuff out there.

“It’s not perfect, we cannot say we will not allow any terrorism at any point, we’re just trying to prevent as much as possible, and we do see a difference of being preventative and having tools that go across platforms that can work to this.”

Saltman said the company wanted to see more collaboration between governments to define what they consider to be hate speech or harmful content. 


Social media companies have been at odds with governments, including the Irish government, over hateful content and delays in removing it from their platforms. 

Both Facebook and Twitter were questioned by members of the Oireachtas Justice Committee over the handling of racist comments which were posted online and directed toward the Ryan family from Co Meath earlier this year. 
