The adoption of technology by terrorists to advance their strategic objectives has remained a core counter-terrorism challenge for the community of states. In the contemporary geopolitical landscape, the diffusion of technology—particularly artificial intelligence (AI)—has the potential to narrow the capabilities gap between terrorist organisations and state-led security mechanisms. Terror groups are increasingly using AI to enhance the sophistication and intensity of attacks, disseminate extremist propaganda, and incite violence. These efforts, in turn, help terrorist groups gain a competitive edge over innovative counter-terrorism measures led by states. Thus, both the existing and potential uses of AI open a new frontier of challenges for state-led security mechanisms and for the capacity of the international community of states.
The Strategic Integration of Technology by Terrorist Groups
The adoption of technology by terror groups has remained subject to two key factors: contextual elements—ideology, targets, supporters, and intended audience—and the nature of the technology itself. The integration of technology by terrorists often manifests as innovation, either by introducing new methods or by significantly enhancing existing capabilities. Innovation in general, and technological innovation in particular, is aimed at securing an element of surprise. Terrorist groups undergo three categories of innovation: tactical, organisational, and strategic. Technological adoption and integration primarily involves tactical innovation aimed at serving pre-existing objectives. Tactical innovation, whether incremental or radical, is in turn likely to support both organisational and strategic innovation. Tactical innovation caters to improved attack methods and modus operandi, while organisational innovation involves enhancing organisational structures and recruitment processes. Strategic innovation, meanwhile, involves identifying new objectives, primarily those that were not previously feasible in the absence of technology.
During the 1990s, terror actors exploited advances in the internet to coordinate activities across borders and used digital communication platforms for audience manipulation. The 9/11 attacks showcased the widespread use of technology by terrorist groups, including encrypted emails, online flight simulation software, prepaid mobile phones, and internet-based coordination to plan and execute a synchronised attack. The proliferation of online forums and encrypted messaging apps has since enhanced terrorist capabilities in multiple ways, from propaganda dissemination to operational planning.
The 2005 London bombings were coordinated through commercial mobile networks and illustrated the growing role of internet-based radicalisation in terrorism. The 26/11 Mumbai terror attack in 2008 demonstrated the use of the global positioning system (GPS), mobile phones, and the internet, enabling real-time coordination by terrorists.
Social media platforms such as Facebook, Telegram, and X have become instruments for terror organisations to spread extremist ideology, recruit followers, and raise funds. The 2019 Christchurch terror attack in New Zealand was livestreamed on Facebook to amplify its effect and create shock, panic, and chaos among a wider global audience. In the digital domain, terrorist groups have exploited cryptocurrencies and other virtual assets because of their perceived anonymity and transnational character, and because of gaps in effective anti-money laundering and related counter-measures.
Emerging Uses of AI by Terrorist Organisations
AI, through computational machine learning and deep learning processes, aims to enhance decision-making abilities. Generative AI, a specialised subset of artificial intelligence, can develop new outputs instead of performing prediction tasks alone. It can produce new data, images, speech, and other forms of content in response to input prompts.
AI-Driven Manipulation in the Digital Domain
The information space, including cyberspace, has been labelled a strategic domain in itself. The malicious use of generative AI is notable, from interactive recruitment to the development of propaganda and the manipulation of people's behaviour via social media channels. Major terrorist groups, including al-Qaeda, ISIS, and Hezbollah, have actively used generative AI for malicious purposes, including enhanced propaganda messaging, emotive engagement, and targeting in the digital space. The use of memetic warfare, and the translation of propaganda material into different languages for better outreach through generative AI, has emerged as a means of supporting ideological extremism. Deepfakes are increasingly employed by terror groups for content manipulation. In the information age, both misinformation and disinformation campaigns have become relatively easier with AI. A related phenomenon, malinformation, which primarily stems from exaggerating the truth to mislead an audience, could also pose serious challenges.

Unmanned Systems and the Rise of Autonomous Threats
Another instance that exemplifies technological adoption is the use of unmanned aerial systems (UAS) as low-cost precision weapons. The integration of AI into unmanned aerial vehicles (UAVs) has added a further layer of autonomy and operational efficiency to contemporary warfare. Innovation in drone technology, combined with gaps in effective countermeasures, has made armed forces and critical infrastructure major targets for achieving strategic objectives. Unlike technologies of past decades, AI adoption by terror groups enhances their reach, anonymity, and lethality with a multiplier effect. Today, drones are a weapon of choice not only for tracking and targeting but also as tools of psychological fear used to coerce state actors. Such attacks also cause physical damage and disruption while imposing high reputational costs on state actors. Smaller drones are used for intelligence, surveillance, and electronic warfare, and assist in target acquisition to increase the precision and lethality of ground-based systems. Since ISIL's first successful drone attack in 2016, terror actors have had the potential to orchestrate serious swarm attacks, creating a form of autonomous, AI-enabled, non-contact warfare.
Implications for State Security and Counter-Terrorism
The use of AI is likely to produce both incremental and innovative outcomes for terrorist groups. As these groups adopt AI, their ideology, audience, modus operandi, and objectives evolve in real time. AI, as a form of advanced technology, has become both a source of and a catalyst for terrorist violence. The use of generative AI by terrorists, in particular, is an emergent threat to counter-terrorism efforts by the community of states. However, the availability of resources and the degree of innovation remain critical for the success of AI adoption and integration against state authorities. From the states' perspective, the nature of counter-terrorism capabilities and the rate of adaptation remain decisive. In this regard, four key challenges arise for counter-terrorism (CT) mechanisms and processes for the community of states.
First, AI-generated manipulated content and deepfakes challenge traditional intelligence strategies by creating operational barriers, as their identification remains difficult for state actors. Fake audio and video during crises generate a liar's dividend, whereby real evidence is dismissed as fabricated, eroding public trust and undermining intelligence credibility.

Second, the manipulation of AI-generated content on social media and in internet searches becomes a key challenge in countering mass radicalisation. In the absence of a robust legal framework, concerns about the violation of individual privacy constrain state actors.

Third, greater access to AI lowers the barriers to technological integration and may deepen non-state actors' dependence on AI systems. The integration of AI raises both the costs and the stakes for the community of states. AI-enabled drone swarms targeting military systems, electricity and water supply grids, transportation networks, and financial systems remain a likely possibility in the future.

Fourth, the growth in the quality, scalability, and diffusion of AI is a plausible challenge. The existing gaps in export controls, particularly in enforcement and regulatory mechanisms, persist because of the dual-use nature of AI and autonomous systems such as drones. This also creates gaps and vulnerabilities in supply chains that terrorist actors can exploit for capability acquisition, representing a continuing challenge for counter-terrorism efforts.
Conclusion
The increasing use of AI is opening a new frontier of challenges for state-led security mechanisms and processes in pursuing counter-terrorism end goals. Over time, the threat landscape is likely to expand in terms of both scale and complexity. The weaponisation of AI to achieve strategic objectives by terrorist groups poses a defender’s dilemma for the community of states. AI is likely to increasingly enable propaganda, recruitment, operational planning, and more sophisticated cyberattacks. State actors must adapt rapidly while containing these risks to the extent possible. To this end, a clear understanding of the emergent threat landscape is essential.
Rahul Rawat is a Research Assistant with the Strategic Studies Programme at Observer Research Foundation.
Neha Kaushal is a former intern with the Strategic Studies Programme at Observer Research Foundation.
This article was originally published on the Observer Research Foundation website.