

Fear, more than hate, feeds online bigotry and real-world violence


When a U.S. senator asked Facebook CEO Mark Zuckerberg how his platform defines hate speech, he raised arguably the most important question that social networks face: how to identify extremism inside their communities.

Hate crimes in the 21st century follow a familiar pattern in which an online tirade escalates into violent action. Before opening fire in the Tree of Life synagogue in Pittsburgh, the accused gunman had vented on far-right social media about Honduran migrants traveling toward the U.S. border, and the alleged Jewish conspiracy behind it all, before posting one final declaration of his intentions. This pattern of online radicalization preceding violence has been a disturbing feature of several recent hate crimes. But most online hate isn't that flagrant, or as easy to spot.

As I found in my 2017 study of online hate speech, most online hate looks less like overt bigotry and more like fear. It's not expressed in racial slurs or calls for confrontation, but rather in unfounded allegations of Hispanic invaders pouring into the country, black-on-white crime or Sharia law infiltrating American cities. Hysterical narratives such as these have become the preferred vehicle for today's extremists – and may be more effective at provoking real-world violence than stereotypical hate speech.

The ease of spreading fear

On Twitter, a popular meme circulating recently depicts a migrant "invasion" spread across a map of the United States, while a Facebook account called "America Under Attack" shares an article with its 17,000 followers about the "Angry Young Men and Gangbangers" marching toward the border. And elsewhere, countless profiles talk of Jewish plans to sabotage American culture, sovereignty and the president.

[Chart: The Conversation, CC-BY-ND. Source: HuffPost/YouGov]

While not overtly antagonistic, these notes play well to an audience that has found in social media a place where it can express intolerance openly, as long as members color within the lines. They can avoid the exposure that public rallies attract. Whereas the white nationalist gathering in Charlottesville was high-profile and revealing, social networks can be anonymous and discreet, and therefore liberating for the undeclared racist. That presents a stark challenge to platforms like Facebook, Twitter and YouTube.

Fighting hate

Of course, this is not just a challenge for social media companies. The public at large is facing the complex question of how to respond to inflammatory and prejudiced narratives that are stoking racial fears and subsequent hostility. However, social networks have the unique capacity to turn down the volume on intolerance if they determine that a user has in fact breached their terms of service. For instance, in April 2018, Facebook removed pages associated with white nationalist Richard Spencer. A few months later, Twitter suspended several accounts associated with the far-right group The Proud Boys for violating its policy against violent extremist groups.

Still, some critics argue that the networks are not moving fast enough. There is mounting pressure for these websites to police the extremism that has flourished in their spaces, or else become policed themselves. A recent HuffPost/YouGov survey revealed that two-thirds of Americans wanted social networks to prevent users from posting hate speech.

In response, Facebook has stepped up its anti-extremism efforts, reporting in May on the hate speech it had removed – over a third of which was identified using artificial intelligence, the rest by human monitors or flagged by users. But even as recently as November 2018, the company acknowledged that teaching its technology to identify hate speech is extremely difficult because of all the context and nuance that can drastically alter the meaning of words.

[Chart: The Conversation, CC-BY-ND. Source: Cato Institute. Total percentages may not equal exactly 100 due to rounding.]

Moreover, public consensus about what actually constitutes hate speech is ambiguous at best. The conservative Cato Institute found wide disagreement among Americans about the kind of speech that should qualify as hate speech, offensive speech or fair criticism. Such discrepancies raise the obvious question: How can an algorithm identify hate speech if we humans can barely define it ourselves?

Fear lights the fuse

The ambiguity of what constitutes hate speech is providing ample cover for modern extremists to infuse cultural anxieties into popular networks. That presents perhaps the clearest danger: Priming people's racial paranoia can also be extremely powerful at spurring hostility.

The late communication scholar George Gerbner found that, contrary to popular belief, heavy exposure to media violence did not make people more violent. Rather, it made them more fearful of the world around them, which often leads to corrosive distrust and cultural resentment. That's precisely what today's racists are tapping into, and what social networks must learn to spot.

The posts that speak of Jewish plots to destroy America, or black-on-white crime, are not directly calling for violence, but they are amplifying fears that can provoke it. That's precisely what happened in advance of the deadly assaults at a historic black church in Charleston in 2015, and the Pittsburgh synagogue last month.


For social networks, the challenge is two-fold. They must first decide whether to continue hosting non-violent racists like Richard Spencer, who has called for "peaceful ethnic cleansing." Or, for that matter, Nation of Islam leader Louis Farrakhan, who recently compared Jews to termites and continues to post to his large following.

When Twitter and Facebook let these profiles remain active, the companies lend the credibility of their online communities to these provocateurs of racism or anti-Semitism. But they also signal that their definitions of hate may be too narrow.

The most dangerous hate speech is apparently no longer broadcast with ethnic slurs or delusional rhetoric about white supremacy. Rather, it's all over social media, in plain sight, carrying hashtags like #WhiteGenocide, #BlackCrimes, #MigrantInvasion and #AmericaUnderAttack. These create an illusion of imminent threat that radicals thrive on, and to which the violence-inclined among them have responded.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: Fear, more than hate, feeds online bigotry and real-world violence (2018, November 20) retrieved 22 July 2025 from /news/2018-11-online-bigotry-real-world-violence.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
