Our cyber-technology allows us to find information, friends, jobs, recipes, and just the right GIF to express ourselves. But the uncensored nature of the digital world also allows—and perhaps encourages—people to express ideas that some find offensive and extreme. While extremism has always been with us, the Internet exposes more people to ideas that advocate violence. Whether it’s an Islamic State video, an anti-government blog, or the manifesto Dylann Roof posted before he committed the 2015 Charleston church massacre, authorities recognize that extremism potentially threatens our national security. Social media has obviously increased the ability of extremists to reach millions—or even billions—of people, but is it also an important tool in the radicalization process?
To understand social media’s role in spreading extremism, my colleagues and I have conducted three web surveys, each with approximately 1,000 participants ages 15-35. We collected data in 2013, 2015, and 2016, and each sample reflects U.S. demographics in terms of race, gender, and region. We have investigated what factors place people at risk of being exposed to extremist material and what predicts involvement in producing such materials.
Risks, Rates, and Correlates of Exposure
Exposure to online extremism may not be victimizing, per se. Some people actively search for these materials, and these people are not “victims” in the traditional sense of the word. However, we found that nearly half of those exposed to these materials arrived at the site inadvertently, and another quarter of respondents migrated to the sites because of a link they were sent by a friend or acquaintance (Hawdon et al. 2014). While these individuals are potential victims, we still should avoid overstating the dangers these materials pose. Many people view them without experiencing negative consequences. Nevertheless, others exposed to these materials are disturbed by them, and exposure can lower levels of trust (Näsi et al. 2015) and wellbeing (Keipi et al. 2017). And, in rare cases, exposure to online hate materials is directly linked to violence, including acts of mass violence and terror (Foxman and Wolf 2013).
While most people who express extremist ideas do not call for violence, many do. In 2015, about 20 percent of the messages people saw online openly called for violence against the targeted group; this number nearly doubled by 2016 (Costello et al. 2016; Hawdon 2017). Given that the radicalization process often begins with simply being exposed to extremist ideologies, government authorities in the U.S. and around the world are understandably concerned (U.S. Department of State 2015).
Since 2013, the number of young Americans seeing extremist messages has increased, and younger people are particularly vulnerable. The percentage of people between the ages of 15 and 21 who saw online extremist messages increased from 58.3 percent in 2013 to 70.2 percent in 2016. While extremism comes in many forms, the growth of racist propaganda has been especially pronounced since 2008: Nearly two-thirds of those who saw extremist messages online said they involved attacking or demeaning a racial minority (Hawdon 2017).
Those who frequently use social networking sites and spend long hours on the Internet are more likely to see these materials (Hawdon et al. 2014). Using specific Internet services increases exposure too, as those who used YouTube and photo-sharing sites were more likely to be exposed. Other individual-level correlates of exposure include having low levels of trust in the government (Costello et al. 2016).
We also compare exposure rates across nations using samples from Finland, Germany, the United Kingdom, and the United States. Based on these data, Americans and Finns had the highest rates of exposure, while British and German youths had the lowest. Nearly twice as many Americans were exposed to extremism as were Germans. One plausible reason for these varying rates is the nations’ different legal approaches to regulating hate speech (Hawdon, Oksanen, & Räsänen 2017). While America places primacy on protecting free speech, the European nations have stricter hate speech laws, and Germany, in particular, strictly enforces its laws. The patterns of exposure thus correspond to the stringency with which hate speech laws are enforced. While this is a possible explanation, additional research is needed to determine if the relationship holds across other samples, at other times, and in other nations.
While social media appears to have increased exposure to these materials, has it influenced the production of these materials? We cannot say for sure, but it appears something has. When we began our research in 2013, only 7 percent of respondents admitted to producing online material that others would likely interpret as hateful or extreme. Now, nearly 16 percent of respondents admit to producing such materials (Hawdon 2017). The people posting these materials tend to be economically disadvantaged, white, deeply involved in an online community, close to a religious community, but not close to friends and family members. In addition, those who believed their group had been the target of online hate speech were eight times more likely to produce online extremist materials than were those whose group had not been targeted (Hawdon 2017).
We hypothesize that social media may be amplifying extremist ideologies and leading to more involvement in extremist causes. It is now common practice for social networking sites to collect users’ personal information, with search engines and news sites using algorithms to learn about our interests, wants, desires, and needs—all of which influences what we see on our screens. This process can reinforce our preexisting beliefs, while information that challenges our assumptions or points to alternative perspectives rarely appears (Pariser 2011).
Every time someone opens a hate group’s website, reads its blogs, adds its members as Facebook friends, or views its videos, the individual becomes enmeshed in a network of like-minded people espousing an extreme ideology. In the end, this process can harden worldviews that people become comfortable spreading (Hawdon 2012).
While all of this may seem bleak, people do fight back. Over two-thirds of respondents report that when they see someone advocating hate online, they tell the person to stop or defend the attacked group (Costello, Hawdon, & Cross 2017). Although enacting this online social control can lead to one being personally attacked online (Costello, Hawdon, & Ratliff 2016), perhaps these acts of social control can convince extremists that, somewhat ironically, a tolerant society does not tolerate extremist ideologies. This may create a more tolerant virtual world, and, with luck, disrupt the radicalization of the next perpetrator of hate-based violence.
James Hawdon is a professor of sociology and director of the Center for Peace Studies and Violence Prevention at Virginia Tech. His research focuses on how communities facilitate or inhibit violence. Most recently, he has investigated how online communities influence online extremism, online aggression, and cyberhate.
Costello, M., Hawdon, J., & Cross, A. (2017). Virtually standing up or standing by? Correlates of enacting social control online. International Journal of Criminology and Sociology, 6, 16-28.
Costello, M., Hawdon, J., & Ratliff, T.N. (2016). Confronting online extremism: The effect of self-help, collective efficacy, and guardianship on being a target for hate speech. Social Science Computer Review. Advance online publication.
Foxman, A. & Wolf, C. (2013). Viral hate: Containing its spread on the Internet. New York: Macmillan.
Hawdon, J. (2012). Applying differential association theory to online hate groups: A theoretical statement. Journal of Research on Finnish Society, 5, 39-47.
Hawdon, J. (2017). Perpetrators and victims of online extremism: Status and vulnerability. Presented at Les jeunes et l'incitation à la haine sur Internet: victimes, témoins, agresseurs? Comparaisons internationales. Nice, France. January 24, 2017.
Hawdon, J., Oksanen, A., & Räsänen, P. (2014). Victims of online hate groups: American youth’s exposure to online hate speech. In J. Hawdon, J. Ryan, & M. Lucht (Eds.), The causes and consequences of group violence: From bullies to terrorists (pp. 165-182). Lanham, MD: Lexington Books.
Hawdon, J., Oksanen, A., & Räsänen, P. (2017). Exposure to online hate in four nations: A cross-national consideration. Deviant Behavior, 38(3), 254-266.
Keipi, T., Oksanen, A., Hawdon, J., Näsi, M., & Räsänen, P. (2017). Harm-advocating online content and subjective well-being: a cross-national study of new risks faced by youth. Journal of Risk Research, 20(5), 634-649.
Näsi, M., Räsänen, P., Hawdon, J., Holkeri, E., & Oksanen, A. (2015). Exposure to online hate material and social trust among Finnish youth. Information Technology & People, 28(3), 607-622.
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin, UK.
United States Department of State (2015). Countering violent extremism.