Years after similar attacks, technology firms are still playing a role in spreading extremist ideas — from hosting the forums that radicalize young men to helping make them famous among a burgeoning White-supremacist movement online. New laws from Europe forcing tech firms to make their sites safer couldn’t come fast enough, though there may be some stumbling blocks in their way.
The shooter in Buffalo harnessed online platforms to make his plans a nightmare reality. He planned his attack on Discord Inc.’s chat service, published his manifesto on Alphabet Inc.’s Google Docs and then livestreamed the shooting on Amazon.com Inc.’s Twitch. A community of supporters then spread clips of his video across Meta Platforms Inc.’s Facebook and Instagram, Twitter Inc. and other platforms.
This is how the feedback loop between mainstream social media and underground forums works, sweeping people toward extremist ideas like the racist “great replacement” theory.
In his manifesto, the attacker credited the image board 4chan as being where, over the course of two years, he “learned through infographics, shitposts and memes that the White race is dying out.” With about 20 million active users, 4chan isn’t large enough to be subject to upcoming European regulations on hazardous content, but other sites like it have been taken offline when their infrastructure providers came under too much political pressure.
More than a decade ago, 4chan was best known for originating trends in internet culture and sparking the rise of hacktivist network Anonymous, which hit organizations with cyber attacks as a form of protest. Today, it has morphed into something more menacing, a breeding ground for far-right propaganda and bigotry, with regular discussions of real-world violence.
It would be easy for the big social-media companies to point to 4chan as the real problem, the place where most of the radicalization happens. But clips from a livestream of Saturday’s shooting proliferated across Facebook, Twitter and Instagram, where they were watched by millions. One link to the video was shared more than 46,000 times on Facebook, and the company didn’t remove it for more than 10 hours, according to the Washington Post. Facebook said people were trying to circumvent its rules to share the video, while Twitch said it had removed the original stream less than two minutes after the violence began. Discord said it had removed the shooter’s private server immediately after learning about it.
The attacker himself was a copycat who followed a basic formula set by others who’ve gone viral on social media. His decision to livestream himself on Twitch emulated previous attacks, such as the 2019 mass shooting in Christchurch, New Zealand, which killed 51 people at two mosques and was livestreamed on Facebook. About a quarter of his manifesto was plagiarized, with the main source being the manifesto of the Christchurch terrorist.
After 2019, tech firms urgently sought to stop such videos from spreading. One strategy the companies use is to assign a digital fingerprint, or “hash,” to a known video and then use matching algorithms to track copies down. But that effort has only partially worked.
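The general idea behind hash matching can be sketched in a few lines. The real systems, such as the hash database shared through the industry’s Global Internet Forum to Counter Terrorism, use proprietary perceptual-hash formats; this toy version is an assumption-laden illustration that computes a simple “average hash” for a single video frame (a 64-pixel grayscale grid) and compares hashes by how many bits differ, so a re-encoded or slightly altered copy can still match.

```python
# Illustrative sketch only: a simple "average hash" for one video frame,
# not the proprietary formats platforms actually share.

def average_hash(frame_8x8):
    """frame_8x8: 64 grayscale values (0-255). Returns a 64-bit integer:
    each bit is 1 if that pixel is brighter than the frame's average."""
    avg = sum(frame_8x8) / len(frame_8x8)
    bits = 0
    for px in frame_8x8:
        bits = (bits << 1) | (1 if px > avg else 0)
    return bits

def hamming(h1, h2):
    """Count the bits where two hashes differ."""
    return bin(h1 ^ h2).count("1")

def is_near_duplicate(h1, h2, threshold=5):
    """Flag a clip whose frame hash is within a few bits of a known one."""
    return hamming(h1, h2) <= threshold

original = [30] * 32 + [220] * 32            # a toy dark/bright frame
reuploaded = [28] * 32 + [210] * 31 + [90]   # slightly altered copy
print(is_near_duplicate(average_hash(original), average_hash(reuploaded)))
# prints True: the copy differs by only one bit
```

Because the comparison tolerates small differences rather than demanding an exact byte-for-byte match, this kind of fingerprinting catches many re-uploads; it is weaker against heavy edits such as cropping, overlays or re-filming a screen, which is one reason the takedown effort has only partially worked.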
Videos of the Buffalo shooting with hundreds of thousands of views could still be found over the weekend on Facebook, Instagram and Twitter, according to Ciaran O’Connor, an online extremism researcher with London’s Institute for Strategic Dialogue. O’Connor said the most-watched videos he found were on Twitter, with one reaching nearly 460,000 views. Elon Musk has yet to say anything about the shooter’s video or how he would handle its proliferation on Twitter were he to buy the platform.
“The danger of the video remaining online for longer is that it allows people to download the content and put it on other sites,” said O’Connor.
In the month after the Christchurch video went viral, for instance, internet users took a short clip of the shooter pointing his assault rifle at a man standing in front of the mosque, ending just as the gun fired, and turned it into a looping GIF which became popular on multiple forums. Others overlaid images from video games onto the original video, said O’Connor, while some extremists created rudimentary versions of the mosque on the online games Minecraft and Roblox to gamify the attack.
It’s possible the Buffalo terrorist won’t get the same kind of worship as others before him. But his online legacy will contribute to the same contagion that reached him in the first place.
A big part of the solution to extremist content online is regulation, but it is not coming from the US, where politicians and technologists seem to be moving in the opposite direction. Elon Musk, for one, has proposed loosening Twitter’s content-moderation rules to allow greater “free speech” on the platform if he buys it.
First Amendment doctrine also prevents U.S. lawmakers from banning most speech, including racist and homophobic commentary, paranoid conspiracy theories and shooters’ manifestos. Pushing further in that direction, Texas has passed a bizarre new law known as HB 20, which opens the door to suing social-media companies that take down content, an effort by lawmakers to rein in supposed censorship. Tech industry groups are challenging the law in court, arguing it could allow hate speech to proliferate further on social media. It might even prevent platforms like Twitch from taking down a livestream of a mass shooting in future. The Supreme Court is preparing to decide whether the law is constitutional.
The world’s best hope for weeding out extremism on mainstream social media is coming from Europe, and specifically from two new laws: the Online Safety Bill from the United Kingdom and the Digital Services Act from the European Union. The rules, which will come into effect in the next few years, will force tech firms to conduct regular risk assessments on their algorithms and more quickly rein in harmful content, or risk fines of up to 6% of their global turnover. Facebook whistleblower Frances Haugen has said Europe’s DSA could set the global “gold standard” for tech regulation.
4chan, the image board that radicalized the Buffalo shooter, has a loophole here: The EU’s DSA imposes its toughest rules on online platforms with 45 million or more regular users, and 4chan is about half that size. But 4chan does rely on mainstream web companies to stay online; without that support, it would be harder for the site to attract as many visitors as it does.
For instance, after two different racially motivated mass shooters posted manifestos on the image board 8chan(1), web security company Cloudflare terminated its cyber-protection services for the site, forcing it offline. Several other web infrastructure companies pulled their services too, and 8chan no longer shows up in Google searches. Neo-Nazi website The Daily Stormer was also dropped by mainstream services in 2017.
It may be impossible to wipe such sites off the web completely, but making them harder to find can help prevent more young men from falling into extremist rabbit holes. That means their mainstream tech providers are culpable too. They should act accordingly.
(1) The Christchurch, NZ shooter posted his manifesto to 8chan in March 2019. A mass shooter who targeted a Walmart in El Paso, Texas posted a similar White-supremacist manifesto on the site the following August.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”