WASHINGTON (AP) — As the rest of the world rushes to harness the power of artificial intelligence, militant groups also are experimenting with the technology, even if they aren’t sure exactly what to do with it.
For extremist organizations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned.
Someone posting on a pro-Islamic State group website last month urged other IS supporters to make AI part of their operations. “One of the best things about AI is how easy it is to use,” the user wrote in English.
“Some intelligence agencies worry that AI will contribute (to) recruiting,” the user continued. “So make their nightmares into reality.”
IS, which once seized territory in Iraq and Syria but is now a decentralized alliance of militant groups that share a violent ideology, realized years ago that social media could be a potent tool for recruitment and disinformation, so it's not surprising that the group is testing out AI, national security experts say.
For loose-knit, poorly resourced extremist groups — or even an individual bad actor with a web connection — AI can be used to pump out propaganda or deepfakes at scale, widening their reach and expanding their influence.
“For any adversary, AI really makes it much easier to do things,” said John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of cybersecurity firm ClearVector. “With AI, even a small group that doesn’t have a lot of money is still able to make an impact.”
How extremist groups are experimenting
Militant groups began using AI as soon as programs like ChatGPT became widely accessible. In the years since, they have increasingly used generative AI programs to create realistic-looking photos and video.
When paired with social media algorithms, this fake content can help recruit new believers, confuse or frighten enemies and spread propaganda at a scale unimaginable just a few years ago.
Two years ago, such groups spread fake images of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings. The images spurred outrage and polarization while obscuring the war's actual horrors. Violent groups in the Middle East used the photos to recruit new members, as did antisemitic hate groups in the U.S. and elsewhere.
Something similar happened last year after an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia. In the days after the shooting, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits.
IS also has created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activities and has investigated IS’ evolving use of AI.
‘Aspirational’ — for now
Such groups lag behind China, Russia or Iran and still view the more sophisticated uses of AI as “aspirational,” according to Marcus Fowler, a former CIA agent who is now CEO at Darktrace Federal, a cybersecurity firm that works with the federal government.
But the risks are too high to ignore and are likely to grow as the use of cheap, powerful AI expands, he said.
Hackers are already using synthetic audio and video for phishing campaigns, in which they try to impersonate a senior business or government leader to gain access to sensitive networks. They also can use AI to write malicious code or automate some aspects of cyberattacks.
More concerning is the possibility that militant groups may try to use AI to help produce biological or chemical weapons, making up for a lack of technical expertise. That risk was included in the Department of Homeland Security’s updated Homeland Threat Assessment, released earlier this year.
“ISIS got on Twitter early and found ways to use social media to their advantage,” Fowler said. “They are always looking for the next thing to add to their arsenal.”
Countering a growing threat
Lawmakers have floated several proposals, saying there’s an urgent need to act.
Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said, for instance, that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers or foreign spies.
“It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors,” Warner said.
During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI.
Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year.
Guarding against the malicious use of AI is no different from preparing for more conventional attacks, said Rep. August Pfluger, R-Texas, the bill’s sponsor.
“Our policies and capabilities must keep pace with the threats of tomorrow,” he said.
Copyright © 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.