Why domestic terrorists are so hard to police online

Domestic terrorism has proven to be more difficult for Big Tech companies to police online than foreign terrorism.

The big picture: That’s largely because the politics are harder. There’s more unity around the need to go after foreign extremists than domestic ones — and less danger of overreaching and provoking a backlash.

Flashback: When tech firms came together in 2017 to tackle foreign terrorism, the bipartisan view that ISIS was a serious national security threat let them take blunt, often automated action to weed out gruesome terrorist content without fear of political retribution if they inadvertently over-censored.

Today, tackling domestic extremism is a politically charged debate. It’s not always clear where the line is between a fringe group with protected free speech rights and a group that’s actively plotting attacks like the one on the Capitol.

“The reason companies are less likely to use sweeping automation is because the inevitable overreach would mean they would be politically punished in a way that wasn’t true for ISIS,” says Alex Stamos, director of the Stanford Internet Observatory and Facebook’s former chief security officer. “ISIS had no domestic constituency.”

“It would create a massive backlash and censorship that we haven’t seen on the Internet before,” says Mark Little, CEO and co-founder of disinformation tracking firm Kinzen.

Be smart: One of the key differences between ISIS and today’s domestic extremists is that being part of ISIS, a designated foreign terrorist organization, is effectively illegal, says Stamos. Being a follower of a fringe-right group like QAnon or the Proud Boys isn’t inherently illegal — followers have done nothing unlawful until they become part of a conspiracy to commit violence or use force.

Case in point: Two QAnon supporters currently hold seats in Congress. One, Rep. Marjorie Taylor Greene of Georgia, was temporarily suspended by Twitter.

Driving the news: Social media firms are scrambling to crack down on domestic terrorist threats ahead of the inauguration and beyond, but experts say their efforts are only temporary solutions.

“There have been countless fact-checking and other efforts designed to rid social media of misinformation,” argues Ethan Zuckerman of the University of Massachusetts-Amherst. “They’re not going to work until the party and the major ideological amplifiers start explicitly renouncing these points of view.”

Instead of trying to narrowly target pieces of problematic content, as was the case with foreign terrorism, tech platforms are now trying to target and remove broad movements by relying on experts to study their behavior.

This requires much more human intelligence, tracking the movements of fringe groups at the ground level, as opposed to relying mostly on automation.

“In the past, egregious content could be detected using the same types of tools used to target spam or copyright infringement,” says Little, a former Twitter executive. But today, fringe actors find ways to keep adapting their language and techniques to avoid detection online, making that strategy less effective.

For example, participants in the fringe anarchist Boogaloo movement change the terms they use to refer to themselves online to avoid detection, per Little.

The QAnon conspiracy continuously reinvents itself to keep people hooked, says Little’s colleague Shane Creevey, head of editorial for Kinzen.

One example he notes is when the movement attached itself to the #SaveTheChildren hashtag, making it harder to take action without inadvertently cracking down on legitimate speech.
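
To make that limitation concrete, here is a minimal sketch of the spam-style approach Little describes: a static keyword blocklist. The terms, sample posts, and the flag_post helper are illustrative assumptions, not actual platform rules, but they show how a rotated code word slips past the list and why a hijacked mainstream hashtag can’t simply be added without catching legitimate posts.

```python
# A static keyword blocklist, the kind of blunt spam-style tool described above.
# Terms and posts are illustrative placeholders, not real moderation rules.

BLOCKLIST = {"boogaloo", "big igloo"}  # code words known at one point in time

def flag_post(text: str) -> bool:
    """Flag a post if it contains any term from the static blocklist."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

posts = [
    "heading to the big igloo meetup",    # caught: matches a known term
    "see you at the big luau",            # missed: the group rotated its code word
    "#SaveTheChildren fundraiser today",  # benign post a broader blocklist would over-block
]

for post in posts:
    print(flag_post(post), "-", post)
```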

The big picture: Even when big tech companies do take action on these types of movements — Twitter and Facebook have both effectively banned QAnon — the lack of industry-wide coordination around the issue means fringe movements can easily migrate to smaller platforms, and often darker corners of the web.

One solution would be for tech companies to create a coalition to address domestic extremism specifically, similar to the one they originally aimed at foreign terrorists in 2017 with the creation of the Global Internet Forum to Counter Terrorism (GIFCT).

GIFCT relies on large platforms to share information about terrorist threats, including graphic images, memes and videos, with smaller digital players to keep terrorist content off the internet. The smaller companies can then use AI to match uploads against that known imagery and block them before they get any views.
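
For illustration, here is a minimal sketch of that matching flow, assuming a coalition-distributed list of content fingerprints. The placeholder bytes and names, and the use of exact SHA-256 hashing, are simplifications; production systems rely on perceptual hashes that survive re-encoding and cropping.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact SHA-256 fingerprint. Real systems use perceptual hashes that
    tolerate re-encoding and cropping; this is a simplification."""
    return hashlib.sha256(data).hexdigest()

# Fingerprints of already-identified terrorist content, as shared through
# a GIFCT-style coalition (placeholder bytes stand in for real media).
known_bad = b"<bytes of an already-identified propaganda video>"
SHARED_HASHES = {fingerprint(known_bad)}

def should_block(upload: bytes) -> bool:
    """Check an upload against the shared hash list before it is served."""
    return fingerprint(upload) in SHARED_HASHES

print(should_block(known_bad))                        # True: an exact re-upload is caught
print(should_block(b"<the same video, re-encoded>"))  # False: exact hashing misses near-duplicates
```

Matching of this kind only catches content that has already been identified and fingerprinted, which is part of why, as Little notes, movements that constantly change their language and symbols are harder to police this way.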

No comparable coalition exists today for white supremacy, although the big platforms are in regular touch with one another and with law enforcement on the issue.

What to watch: Sources say the Trump administration’s threat of unilateral action against Big Tech companies spooked them out of taking bolder stances on fringe-right extremism.

But Stamos says the Biden administration is less likely to abuse its power in the same way. “I expect them to use their investigative power on real policy issues, not because they dislike moderation decisions.”

The bottom line: When it comes to domestic terrorism, “You can’t solve this with an algorithm,” says Little.