Source type: Opinion
Document type: Commentary
Social Networks Should Treat Far-right Extremists like Islamic State
Stephanie MacLellan
Publication date: 2017-08-19
Source: 2017
Language: English
Abstract
AP Photo/John Bazemore

In the aftermath of last weekend's deadly protests in Charlottesville, tech companies have been blocking far-right extremist groups from their services. This has led to a debate over freedom of expression on the Internet, and the role of companies such as Facebook, Twitter and Google in limiting it.

But if it were a different violent extremist group – say, the so-called Islamic State – there would be no debate. In fact, tech giants have been booting IS supporters off their platforms for more than a year and disrupting their networks on social media, and there has been no serious outcry.

Why should one group of violent extremists be treated differently from another?

Far-right domestic terrorists – including white supremacists, neo-Nazis and self-declared sovereign citizens who don't recognize government authority, among others – pose at least as much of a threat in North America as Islamic State terrorists. Of the 85 deadly terrorist attacks in the United States since 2001, 73 per cent were committed by far-right extremists, compared to 27 per cent by Islamist extremists. But it took the tragedy in Charlottesville, where one person was killed and several more injured after a car plowed into a crowd of counter-protesters, to thrust the threat of far-right violent extremists into the spotlight.

In the following days, the Daily Stormer neo-Nazi website was kicked off a series of web hosting providers. Facebook deleted a number of far-right pages with names like “White Nationalists United” and “Right Wing Death Squad”. The Discord chat app shut down several far-right groups. Even OkCupid, the online dating site, cancelled the account of one prominent white supremacist and banned him for life.

While many cheered, these developments also raised difficult questions about how far digital companies should go in silencing hateful content.

Attempts to police hate speech on online platforms often cause as many problems as they solve. As revealed by ProPublica, Facebook's policy on hate speech protects some identifiable groups, but not subsets of those groups. As a result, you can have your account suspended for directing vitriol at “white men”, but not at “Black children” or, in some cases, “migrants”. Some Black activists and scholars of extremism have also complained that their social media posts detailing incidents of racism or explaining new terrorist propaganda have been blocked by various platforms. There are also concerns that harsh new anti-hate speech laws in Europe will result in companies taking down more legitimate content for fear of incurring massive fines.

On the other hand, tech companies seem to be more consistent and motivated when it comes to saving lives from terrorist attacks, as seen in their responses to IS. After Twitter became notorious as the Islamic State's preferred medium for recruitment and propaganda, the company deleted more than 630,000 terrorist accounts between August 2015 and December 2016. This sustained campaign seems to be having an effect: a recent study from the VOX-Pol European think-tank found that networks of IS users on Twitter were decimated. Those who wanted to stay on Twitter adopted innocuous avatars and screen names and toned down the content of their tweets, which severely diminished their online identity and propaganda value. Facebook, Google and YouTube have also introduced new measures to remove terrorist content.

Rather than a matter of free speech on the Internet, the question of far-right extremism online should be seen as a matter of preventing ideologically driven violence in the real world. That's the threat IS supporters pose when they use Twitter to convince Western teenagers to join the so-called caliphate in Syria, and it's the threat the Charlottesville organizers posed when they used Facebook and Discord to plan their gathering of white supremacists.

Stifling expression, even hateful expression, should never be taken lightly, and tech companies should implement consistent and transparent policies for removing any kind of content. But when lives are at stake, inaction is not an option.


This article originally appeared in The Globe and Mail

".. the question of far-right extremism online should be seen as a matter of preventing ideologically-driven violence in the real world"
Topics: Conflict Management & Security, Internet Governance & Jurisdiction
URL: https://www.cigionline.org/articles/social-networks-should-treat-far-right-extremists-islamic-state
Source think tank: Centre for International Governance Innovation (Canada)
Resource type: Think tank publication
Item identifier: http://119.78.100.153/handle/2XGU8XDN/183898
Recommended citation:
GB/T 7714
Stephanie MacLellan. Social Networks Should Treat Far-right Extremists like Islamic State. 2017.