(This article is the continuation of Part I, which examined the profile of Tyler Robinson and his trajectory of online radicalisation. In this second part, the focus shifts towards conceptual frameworks, policy implications, and the potential of Artificial Intelligence in anticipating similar threats.)
In Robinson’s specific case, evidence suggests he absorbed a hybrid or “customised” ideology, pieced together from diverse extremist currents encountered online. Despite his conservative family background, Robinson reportedly harboured hostility towards alt-right figures (including Kirk himself) and rhetorically sympathised with opposing stances, for instance adopting tropes typical of militant antifascism in identifying Kirk as a “fascist” to be eliminated. This apparent contradiction – a son of a pro-Trump household assassinating a pro-Trump activist – can be explained by the subcultural milieu within which his distorted political consciousness was forged. Online, particularly in radical enclaves, one often encounters syncretic narratives in which elements of the far right and far left paradoxically combine, united less by doctrine than by shared hostility towards the mainstream and a will to destabilise the status quo. Analysts and sociologists have begun to describe these phenomena with concepts such as “composite extremism” or “post-ideological extremism.” In the United Kingdom, for instance, security authorities have introduced the category of Mixed, Unstable or Unclear (MUU) ideologies to classify radicalisation cases in which individuals blend disparate ideological references without clear allegiance. Robinson appears to belong precisely to this nebulous category of new-generation radicalised actors, difficult to classify as straightforwardly “far-right” or “far-left” terrorists. Here, digital culture plays a crucial role: extremist forums and imageboards act as hubs where such “mixed ideologies” emerge, through shared cultural references (often ironic or meme-based) that create a collective identity even without a structured political programme.
Robinson’s self-radicalisation thus occurred in a vacuum of institutional and social oversight, filled by the dark web and underground online platforms. Deprived of effective real-world counterbalances – such as parental supervision or satisfying social integration – the young man found in the virtual realm both the source of his grievances (a belief in waging an epochal struggle against malign forces, personified in this case by the public figure Charlie Kirk) and the moral justification for his fantasies of violence. On the dark web and extremist forums, acts like political assassination are reinterpreted and perversely legitimised: murder becomes an “act of justice” or even a subversive role-playing game. Some ultra-radical trolling communities openly celebrate such actions as the ultimate expression of real-life trolling – a way of “mocking the system” by sowing chaos and proving one’s significance. What emerges is a violent subculture, fuelled as much by ideology as by nihilism and the pursuit of ephemeral notoriety. Fully grasping how the dark web catalyses violence requires recognising that incitement there often occurs implicitly yet effectively: every user who posts violent manifestos, extreme videos, or simple hate messages pushes a little further the boundaries of what can acceptably be discussed and planned. In the absence of external restraints, young people like Robinson come to perceive violence not only as a viable option, but as a pathway to self-actualisation and membership in a countercultural elite. The dark web and other radicalised corners of the internet thus operate as accelerators of radicalisation: shortening timelines and lowering the moral barriers that separate online hate speech from the commission of extreme acts in the physical world.
Conclusions
The assassination of Charlie Kirk at the hands of Tyler Robinson stands as a paradigmatic case study for understanding the transformations of political extremism in the contemporary era. It highlights how political violence today can emerge from unprecedented matrices, distinct both from the organised terrorism of the past and from so-called lone wolves driven by clearly defined ideologies. Here we are faced with a post-modern radical actor, shaped at the crossroads of real socio-political polarisation and a digital substrate of global subcultures. His ideology is fluid, ironic, infused with pop references yet intrinsically violent; his motive is simultaneously personal (resentment, a need for affirmation) and political (aversion to a symbol of the opposing camp), yet not easily contained within the binary categories of left versus right. To interpret phenomena of this kind, traditional political sociology must engage with digital culture theory and the study of contemporary extremism: concepts of identity, ideology, and mobilisation must be reformulated in light of the impact of the internet and the memetic imaginary on consciousness. An emergent interpretative key – not yet fully recognised in public debate as of 16 September 2025 – is that of nihilistic-memetic extremism: a form of radicalisation in which ideological adherence is instrumental and cynical, and the violent act assumes pseudo-ludic and performative connotations. Within this theoretical framework, Kirk’s murder may be read not merely as “leftist political violence” (as some hastily labelled it) but as the outcome of an antagonistic subculture blending political dissent with a drive for destructive trolling. Developing such new conceptual categories – for instance exploring the nexus between gamification and terrorism, or between online jargon and enemy-construction – will be essential for academic research seeking to comprehend these emergent threats.
Alongside theoretical innovation, lessons from the Robinson case must be translated into practical action. The following policy recommendations are outlined as priorities to prevent and effectively counteract similar phenomena in the future:
- Enhanced monitoring of online platforms: Authorities should intensify surveillance of extremist activity on social media, forums, and emerging platforms, including those less visible to the general public. This requires closer collaboration with technology companies (to secure access to data and the timely removal of illicit content) and investment in algorithmic systems capable of automatically identifying hate speech, incitement, and radicalisation signals; a simplified sketch of such a detector follows this list. Monitoring must extend beyond the surface web to the dark web: specialised OSINT and cyber-intelligence task forces should be established to penetrate anonymous forums and gather actionable information, within the limits of current legislation.
- Digital education and resilience: Preventing youth vulnerability to extremist propaganda requires critical digital literacy education. Schools, universities, and civic organisations should implement training programmes teaching how to recognise manipulation techniques, how algorithms function, and the deceptive nature of many online radical narratives. Simultaneously, promoting digital wellbeing – for example encouraging offline activities, positive socialisation, and psychological support for those showing pathological isolation online – can mitigate the risk factors (loneliness, alienation, reliance on a substitute virtual identity) that often precede self-radicalisation. Families too should be engaged: awareness campaigns can help them detect early warning signs (extreme language, sudden secrecy about online activity, rupture of traditional social ties) and intervene through dialogue or professional assistance.
- Specialist units and interdisciplinary research: Law enforcement and security agencies should establish specialised units dedicated to online extremism and non-conventional threats. These units should integrate computing expertise (to navigate the dark web, decipher coded language, analyse big data from digital activities) with socio-psychological skills (behavioural profiling, analysis of ideological narratives, subcultural literacy). In parallel, interdisciplinary research centres bringing together social scientists, data scientists, terrorism scholars, and digital specialists should be funded to develop early-identification methodologies for radicalisation processes and to evaluate the effectiveness of deradicalisation interventions.
- International cooperation and legal frameworks: The transnational nature of online extremist communities necessitates strong international cooperation. Governments should harmonise legislation on combating online extremism (e.g. sharing definitions and standards of hate speech or online terrorism), exchange intelligence on emerging trends, and coordinate operations against dark web platforms functioning as hubs of violent propaganda. At the supranational level, legal frameworks should be updated to address gaps: for example, clarifying liability for those managing anonymous channels inciting violence, or facilitating extradition of individuals engaged in global extremist networks. Public-private partnerships with cybersecurity firms could also be pursued to block or disrupt dark web services notoriously exploited by violent groups, while maintaining a careful balance with the protection of digital rights.
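To ground the first recommendation in something tangible, the sketch below shows one simplified way an automated detector of hate speech and incitement might be prototyped. Everything in it (the toy training examples, the model choice, the character n-gram features, the example posts) is a hypothetical illustration rather than a deployed system; operational classifiers require large, carefully audited training corpora, and their output should serve only to prioritise content for human review.

```python
# A minimal, hypothetical sketch of an incitement/hate-speech classifier.
# Toy data and model choices are illustrative assumptions, not a real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = incitement/violent-extremist signal, 0 = benign debate.
train_texts = [
    "he deserves to be eliminated, someone should act",
    "they are traitors and must pay with blood",
    "I strongly disagree with his politics",
    "this policy is terrible, vote them out",
]
train_labels = [1, 1, 0, 0]

# Character n-grams help with the obfuscated spellings common in extremist jargon.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

# Score new content: the output is a priority for *human* review, not a verdict.
for post in ["time to remove him permanently", "let's debate this at the rally"]:
    risk = model.predict_proba([post])[0][1]
    print(f"{risk:.2f}  {post}")
```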
A particularly relevant and rapidly evolving area is the application of Artificial Intelligence (AI) technologies to the prevention and early diagnosis of violent radicalisation, especially in the web’s recesses. Recent developments in predictive analytics, machine learning, and Natural Language Processing (NLP) offer unprecedented tools to sift through vast amounts of textual and behavioural data in search of recurring patterns linked to extremism. One can envisage, in the near future, algorithms trained to continuously monitor conversational flows on platforms such as anonymous forums, imageboards, encrypted messaging services, or decentralised social networks, scanning for warning signs. For example, AI could analyse the semantic networks of a user’s posts over time: the recurrent emergence of violence-related terms, the use of extremist jargon borrowed from past terrorist manifestos, or the escalation of references to imagined enemies or conspiracy theories are all elements that an intelligent system could identify and correlate more rapidly than a human analyst. Such systems – still experimental but developing rapidly – could enable the mapping of ideological communities on the dark web, identifying influential nodes (e.g. users acting as “preachers” or instigators of violence) and tracing connections between groups previously thought separate. Notably, academic research is already exploring innovative methods such as reciprocal human–machine learning to detect and monitor terrorist influencers on darknet forums: one recent study demonstrated that combining classification algorithms with human expertise can effectively identify jihadist propagandists hidden within dark web platforms. These same techniques, suitably adapted, could be applied against the new forms of nihilistic extremism of local or “ludic” character exemplified by the Robinson case.
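As a concrete, heavily simplified illustration of the longitudinal analysis just described, the following sketch tracks how often a hypothetical user's posts draw on a violence-related lexicon, week by week, and flags a rising trend for analyst review. The lexicon, the example posts, and the escalation threshold are all illustrative assumptions; a real system would rely on lemmatisation, contextual language models, and far richer behavioural features.

```python
# Illustrative sketch: flag a rising weekly frequency of violence-related
# vocabulary in one user's post history. Lexicon, posts, and threshold are
# hypothetical placeholders; real systems would lemmatise and use context.
from datetime import datetime
import numpy as np

VIOLENCE_LEXICON = {"eliminate", "purge", "traitors", "war", "blood"}

# (timestamp, post text) pairs, e.g. collected from a monitored forum.
posts = [
    (datetime(2025, 6, 2), "interesting thread about the election"),
    (datetime(2025, 6, 30), "these traitors keep lying to us"),
    (datetime(2025, 7, 14), "a war is coming, pick a side"),
    (datetime(2025, 8, 4), "someone has to eliminate the problem"),
]

def weekly_signal(posts):
    """Fraction of posts containing lexicon terms, per ISO week, in time order."""
    weeks = {}
    for ts, text in posts:
        iso = ts.isocalendar()
        key = (iso[0], iso[1])                       # (year, week number)
        tokens = {w.strip(".,!?") for w in text.lower().split()}
        hits, total = weeks.get(key, (0, 0))
        weeks[key] = (hits + bool(tokens & VIOLENCE_LEXICON), total + 1)
    return [hits / total for _, (hits, total) in sorted(weeks.items())]

signal = weekly_signal(posts)
slope = np.polyfit(range(len(signal)), signal, 1)[0]  # linear trend over weeks
if slope > 0.05:  # hypothetical escalation threshold
    print(f"escalating trend (slope {slope:.2f}): queue profile for analyst review")
```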
The preventive potential of such AI systems is clear: in principle, an automated system could flag suspicious user profiles to the authorities well before they act violently, offering the chance for timely intervention (whether investigative or in the form of psychological support). Yet a balanced perspective is crucial: the notion of an omniscient AI infallibly predicting terrorism – a kind of “Minority Report” applied to radicalisation – remains science fiction for now and raises serious ethical dilemmas. Even the most sophisticated predictive models risk false positives (flagging innocuous individuals engaged in heated political debate as potential terrorists) or, conversely, false negatives (missing genuine threats whose language mimics ordinary discourse); the back-of-the-envelope calculation below illustrates the scale of the false-positive problem. Furthermore, implementing algorithmic surveillance on a broad scale poses profound issues for privacy and civil liberties: democratic states must balance security with fundamental rights, ensuring that the use of AI does not lead to unwarranted mass profiling or discrimination against specific groups. For this reason, experts recommend employing AI as analytical support for investigators, not as a substitute for human judgement: algorithms can filter and prioritise information, but contextual interpretation and decisions on subsequent action must remain the responsibility of trained personnel, able to assess case by case.
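The false-positive risk is easy to quantify with elementary Bayesian arithmetic. The figures in the sketch below are purely illustrative assumptions, but the qualitative conclusion holds for any realistically low base rate: even a highly accurate classifier, applied to millions of users, produces overwhelmingly false alarms.

```python
# Back-of-the-envelope illustration of the base-rate problem noted above.
# All numbers are hypothetical assumptions chosen for illustration.
base_rate   = 1 / 100_000   # assumed prevalence of genuinely dangerous users
sensitivity = 0.99          # P(flag | dangerous)
specificity = 0.99          # P(no flag | harmless)

p_flag = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
ppv = sensitivity * base_rate / p_flag   # P(dangerous | flag), via Bayes' rule

print(f"Probability a flagged user is actually dangerous: {ppv:.4%}")
# ~0.10%: roughly 1,000 false alarms for every true positive, which is why
# such systems can only prioritise cases for human analysts, never decide.
```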
In conclusion, the Robinson-Kirk case offers a stark warning: in the era of decentralised web architectures and global subcultures, extremist threats can surface in unexpected places and assume hybrid forms. Addressing them demands an equally multidisciplinary and innovative response. On one hand, we must advance theoretically by refining our understanding of these new forms of violence – moving beyond twentieth-century interpretive frameworks – and on the other hand, we must act pragmatically through updated public policies, advanced technological tools, and cooperation at both local and international levels. Only by integrating academic knowledge, technical expertise, and strategic foresight will it be possible to identify and defuse the next potential “Tyler Robinson” before he crosses from the virtual world into the headlines of reality.
References – Part II
Baele, S.J. (2025) ‘Critical reflexions on “Composite”, “Fused”, or “Mixed, Unclear and Unstable” extremist ideologies’. VOX-Pol Network. June 2025.
Lewinsky, D., et al. (2024) ‘Detecting terrorist influencers using reciprocal human–machine learning: The case of militant Jihadist Da’wa on the Darknet’. Humanities and Social Sciences Communications, 11, Article 1442.
Weimann, G. (2016) Terrorism in Cyberspace: The Next Generation. New York: Columbia University Press.
Macdonald, S. and Whiting, A. (2021) ‘The role of the Internet in facilitating violent extremism and terrorism: Suggestions for progressing research’. International Journal of Conflict and Violence, 15(1), pp. 1–14.
Sayyed, H. (2025) ‘Exploring the role of encryption and the dark web in cybercrime’. Cogent Social Sciences, 11(1), Article 2479654.
U.S. Department of Homeland Security (2024) Homeland Threat Assessment 2025. Washington, D.C.: DHS, September 2024.
GNET Research (2020) Artificial Intelligence and Countering Violent Extremism: A Primer. Global Network on Extremism & Technology.
