Javed Ali, Towsley Policymaker in Residence at the University of Michigan’s Gerald R. Ford School of Public Policy
Javed Ali is a Towsley Policymaker in Residence at the University of Michigan’s Gerald R. Ford School of Public Policy and has over 20 years of professional experience in Washington, DC on national security issues, including senior roles at the Federal Bureau of Investigation, the Office of the Director of National Intelligence, and the National Security Council.
OPINION — The events of January 6 demonstrated the role of social media in galvanizing the individuals who gathered in Washington, DC to storm the US Capitol, plunging the country into a deep political and security crisis. As terrorism evolved in the post-9/11 era, researchers and scholars studied how international terrorist groups used different platforms to radicalize, mobilize, and organize individuals toward violent action. At the peak of its evolution in the mid-2010s, the Islamic State in Iraq and Syria (ISIS) ushered in an unprecedented wave of extremist content, and tech companies initially struggled to grasp the extent to which ISIS had saturated their platforms, to enforce their existing terms of service agreements, and to design effective tools to detect, identify, and remove violent extremist content. A few years later, through a combination of stepped-up enforcement of those terms of service agreements and internal measures to build dedicated teams and incorporate artificial intelligence, the overwhelming majority of ISIS-related accounts and material had been removed.
In the aftermath of the US Capitol siege, tech companies collaborated on aggressive steps to shut down accounts associated with President Trump and other individuals who had violated the terms of service of companies like Twitter, YouTube, and Facebook, based on posts or content assessed to glorify, condone, or incite violence; some experts said those steps were long overdue. At the same time, companies like Apple blocked downloads of apps for platforms like Parler, which had looser rules for hosting violent extremist content, while Amazon suspended its web-hosting services for Parler. Together, these measures showed how tech companies could respond in a crisis and galvanize to action, without direct government enforcement or oversight, to combat violent extremist content in the virtual environment.
While these steps delivered important results, they lacked a strategic framework for how private companies should be postured to tackle this key national security challenge. Several measures, if implemented in a more coordinated and integrated fashion across the tech community, could help address future threats.
First, there should be greater emphasis on sharing information about violent extremist content across platforms between tech companies, building on the measures already in place in many organizations. As the post-9/11 reforms in the US national security community demonstrated, such information sharing helped disrupt plots at home and abroad.
Second, there should be a study of whether the differing terms of service agreements across tech companies can be harmonized around common definitions of what constitutes content that promotes violent extremist action.
Third, there should be deeper collaboration with federal law enforcement through “trip-wire” programs, similar to those for bomb-making materials or suspicious vehicle rentals, that would allow investigators to determine whether initial investigative steps are warranted based on violent extremist content first identified by tech companies.
Fourth, tech companies should consider expanding industry consortiums like the Global Internet Forum to Counter Terrorism or the Cyber Threat Alliance, or even creating new ones, to bring together stakeholders from across the spectrum of technology involved in the online space. These consortiums would be better advocates for their own private sector interests, and better attuned to the unique requirements of the violent extremist content challenge, than solutions imposed by the government through executive orders or legislation, an approach that has been debated over the last few years.
Critics of the tech company-focused approach outlined here will argue that private sector companies are not designed to operate with national security interests or the greater public good as their main objectives, and that financial bottom lines and customer growth will always take precedence. They will also say that greater collaboration and information sharing could reveal corporate vulnerabilities, damage brand reputations, or expose companies to legal liability over First Amendment-related concerns. Lastly, opponents will argue that some of these steps are too subjective, lack concrete definitions, and will be open to interpretations that allow some violent extremist content to remain active while other material is removed.
Like any complex problem, a more strategic and unified tech company approach to this challenge has both pros and cons. And even if these or other measures are instituted down the road, they will not be a complete solution on their own: the federal government also needs to rethink its current counterterrorism paradigm to deliver better results, given what appear to be breakdowns and gaps in intelligence analysis and information sharing, crisis management, and physical security responses. Nevertheless, increased collaboration and coordination within the tech community, resulting in more permanent and enduring measures, would go a long way toward preventing a repeat of what occurred at the US Capitol just a few weeks ago.
Read more expert-driven national security insights, perspective and analysis in The Cipher Brief