

Governments need to work with tech companies to curb hate speech, says Facebook anti-terror chief

In an interview with TheJournal.ie, Facebook’s counterterrorism chief said it was dealing with grey areas at present.

TECH GIANTS LIKE Facebook, Google and Twitter have built empires on the appeal of social media to consumers and businesses, but the spread of hate speech and terrorist activity on their platforms is a major and growing concern, according to Facebook’s counterterrorism chief.

Dedicated teams have been established within social media companies to root out and remove hate-based and offensive content, as reports from NGOs in recent years have pointed to a rise in this kind of material.

Last year, the Irish Council for Civil Liberties reported that Ireland has some of the highest rates of hate crime in the EU. A public consultation is underway to look at new measures to reduce the rising number of hate crimes in Ireland.

Addressing members of the Institute of International and European Affairs (IIEA) in Dublin this week, Facebook’s counterterrorism chief Dr Erin Marie Saltman described the challenge of catching and removing all hate speech and terrorist organisation propaganda as an “impossible task”.

In the first nine months of the year, Facebook removed some 18 million pieces of terrorist content – including images, posts and commentary from individuals or organisations it deems to be inciting hate or violence, or engaged in terrorism.

A challenge lies, however, in how governments define hate speech, and Saltman said greater cooperation between companies and governments could help regulate harmful content on their platforms.

Dr Erin Marie Saltman, Facebook’s Policy Lead for Counterterrorism and Dangerous Organizations in EMEA, speaking in Dublin. Lorcan Mullally / IIEA

In an interview with TheJournal.ie, she said Facebook was forced to develop its own policies in areas of hate speech and terrorism, and to self-regulate, in the absence of rigid policies and measures at government level.

“I don’t think there are any bad actors in this space and, despite what headlines might say, behind the scenes we usually have very good working relationships with law enforcement or the government officials that are actually working on these topics,” she said.

“We’re having to make those calls. Terrorism is one thing and, as we’ve pointed out, even terrorism can be ill-defined. You’ll see in one statement terminology will jump from ‘terrorism’ to ‘violent extremism’ to ‘extremism’.

We need clarity if governments are going to make broad-brush statements about extremism – it’s a very grey term – especially if we’re going to get refinement and legislation.

“We just want to know where we stand and what they actually mean, or else we’re going to be over-compensating, and we don’t want to remove a vast amount of speech that is uncomfortable but not violent.”

In October, a panel of TDs and senators questioned representatives from social media companies about their approach to hate speech content.

One of the issues raised at that meeting was the racial abuse directed towards the Ryan family from Co Meath, who were forced to leave Ireland after social media users posted abusive comments about them for appearing in a Lidl supermarket ad. 

Facebook says it regulates hate-based content via policy teams in a number of countries as well as relying on its users to flag offensive or hate-based content. 

It uses the UN Security Council Consolidated List of terrorist groups, along with lists compiled by the EU and the US, as a basis for its policies around what should and should not be removed.

White supremacy

But these lists come with a “huge bias towards Islamist extremist terrorism”, leaving it solely in the hands of tech companies to define what content from other groups and organisations should or should not be removed.

“Quite frankly, if you look towards any of those lists there is a huge bias towards Islamist extremist terrorism,” Saltman said. 

“It’s very much a post-9/11 paradigm and when we look at any alternative forms, whether that’s Irish-listed terrorist groups that we want to make sure aren’t being celebrated and exploited on our platform, or others like white supremacy groups.”

“We have a public policy team just for Ireland and increasingly we have them in a lot of countries. They will have a person or an entire team just focused on that country.

“And between them, and some of the operations teams that focus on seeing some of the content that might arise every now and again, they point things out to us.”

The Irish Government is currently looking at ways in which the prevalence of hate speech can be reduced and eliminated, with a public consultation launched last month.

Oireachtas committees have repeatedly met with representatives from tech companies to address how hate speech is being circulated online and how that impacts and incites hatred or violence in the real world. 

A better understanding of what governments think should be considered hate speech in their jurisdictions – what Facebook terms ‘culturally sensitive’ hate speech – would give the company a clearer sense of what it should or shouldn’t be removing from its platform, says Saltman.

It’s also really interesting because we don’t always know a government’s line on things either.

“We think it’s very fascinating in Germany right now that they’re encouraging people to document hate speech [...] which might violate German law and that gets used in hate cases in court. 

“So it would be good to know what the ramifications are beyond that. If the government is outraged at the hate speech, are they doing anything about it above and beyond what just exists?”

Culturally-sensitive

Groups flagged to Facebook as dangerous or as inciting hate are reviewed by internal teams, but without input from governments to help define what is considered culturally sensitive, grey areas remain.

In the meantime, the tech giant relies on a combination of teams, follow-ups to media reports, AI technology and content flagged by users to tackle hate speech and propaganda posted online.

“Sometimes [we see it] because the media has spoken to or documented crimes that have taken place in the real world. For dangerous organisations, we’re looking at behaviours and that doesn’t have to be behaviours just on our platform. We’re looking at both online and offline behaviours of a group.

“There’s a lot of hearsay with affiliations… I might go to a country in Africa and they could claim a group is a terrorist group but you look more closely and it’s a journalist network or it’s an opposition party and they’re asking us to interfere with or take out a certain voice in society.

“So we’re looking at behaviours. For terrorism, we’re looking at premeditated violence; we’re looking at the fact that the violence is meant to achieve an ideological or religiously motivated goal.

We just need the evidence to be very clear. We also know there are lots of political parties where they might have one or two extremist voices… We want to make sure we have our facts totally straight.

“It doesn’t mean we can’t take down an entity just because it’s a political party. If they’re showing all the behavioural trends and it’s well documented then we can make the move on that.”

Other organisations are also looking at the issue of interference in political circles. Twitter recently moved to ban political ads, including any ad that references a candidate, political party, government official, ballot measure, or legislative or judicial outcome.

The challenge of removing terrorist content online doesn’t only lie with tech giants but also with lesser-known or less-regulated sites which have become a breeding ground for terrorist and hate-based groups. 

“I think we already know, and academic research is showing this as well, [...] that the better we get at getting the obvious stuff, then you see the obvious groups get better and try to change their behaviour to toe the line, and not get kicked off the platform.

“Or they are migrating to smaller, less regulated platforms and we see that in a lot of cases with a lot of hate-based or terrorist organisations.

“So the multi-platform aspect of it is what is going to become more difficult for us in the future. We’re seeing that we’re part of the story, and then we’ll see it jump to another platform, or start on another platform and jump back to Facebook. 

“We need a more cross-platform, international lens into what the trends are with the threat.”

The Global Internet Forum to Counter Terrorism, founded by YouTube, Twitter, Microsoft and Facebook in 2017, sees the four companies share counterterrorism information with each other to prevent terrorist content being shared on their platforms.
