Social media firms must do more to tackle online Daesh hate factory

LONDON: Calls are mounting for social media companies such as Facebook and Twitter to do more to tackle online extremist material, as a new report from UK think-tank Policy Exchange finds that Daesh is producing more than 100 pieces of online content every week.
“For at least a year, the production of content has continued despite the death of key figures, loss of territory and ongoing fighting,” the report said.
The think-tank found that while extremists are increasing their use of the encrypted messaging service Telegram to communicate with each other, they have not abandoned other platforms such as Twitter and Facebook to spread their message.
Twitter accounts for 40 percent of identifiable traffic to extremist content online. Twitter is “a crucial gateway to the uninitiated — to those ISIS (Daesh) most hopes to target via its outreach,” it said.
The report outlined suggestions on how to tackle the problem, including a new law that would criminalize the “aggravated possession and/or persistent consumption of material that promotes hatred and violence, in the service of a political ideology.”
The aim would not be to criminalize every individual who ‘stumbles’ across extremist material, it said.
Policy Exchange also calls for tech companies to implement “more stringent” codes of conduct that “explicitly” reject extremism.
The establishment of a new independent regulator of social media content was a further recommendation. The report also suggested a system of financial penalties for UK-based subsidiaries of the tech companies, administered by the regulator to enforce compliance.
According to a survey conducted by the think-tank, 74 percent of respondents would like to see legislation in place that criminalizes the “persistent consumption” of extremist online content.
Approximately two-thirds of respondents believe the Internet should be regulated and extremist material controlled. Around 25 percent said it should be “completely free,” without any limits on free speech.
In response to the report, Facebook said it was working “aggressively” to remove terrorist content from its platform.
“We’ve also built a shared industry database of ‘hashes’ — unique digital ‘fingerprints’ — of violent terrorist videos or images, which we’re actively expanding and is helping us to act on such content even more quickly,” said a Facebook spokesperson.
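At its simplest, the shared database works by storing digital fingerprints of known terrorist videos and images, then checking new uploads against them. The sketch below illustrates the exact-match idea using a cryptographic hash; it is a simplified assumption for illustration only, as the industry database in question reportedly uses perceptual hashing, which can also match content that has been slightly edited or re-encoded.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest that uniquely identifies this exact file."""
    return hashlib.sha256(data).hexdigest()

# The shared "hash database" is, conceptually, a set of known digests
# contributed by the participating companies. (Illustrative data only.)
known_hashes = {fingerprint(b"bytes of a known terrorist video")}

def is_known_content(upload: bytes) -> bool:
    """Check an upload against the shared database before publishing it."""
    return fingerprint(upload) in known_hashes

print(is_known_content(b"bytes of a known terrorist video"))  # exact copy: matched
print(is_known_content(b"slightly altered bytes"))            # any change breaks a cryptographic hash
```

The limitation shown in the last line is why real systems favor perceptual hashes: a cryptographic digest changes completely if even one byte of the file differs, whereas platforms need to catch lightly modified re-uploads as well.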
Twitter said on Sept. 19 that it had suspended close to 300,000 accounts for violations related to the promotion of terrorism in the first half of this year. In its 11th biannual transparency report, the company said 95 percent of those accounts were flagged by internal spam-fighting tools, and 75 percent were closed down before their first tweet.
Google said it was “committed” to tackling online extremism. “Violent extremism is a complex problem and addressing it is a critical challenge for us all,” it said in a statement sent to Arab News.
“We are making significant progress through machine learning technology, partnerships with experts, and collaboration with other companies through the Global Internet Forum — and we know there is more to be done.”
Telegram told Arab News in August that it takes down an average of 200 terrorism-related channels every day “before they can get any traction.”
Beyond that, Telegram did not respond to requests for comment other than to cite the company website, which outlines its policy on terrorism.
The company argues that if you ban existing encrypted messaging services, extremist groups will easily switch to other methods of communication such as using coded language on any public channel or even making their own encrypted app.
