In an effort to combat cybercriminals abusing generative AI services, Microsoft has filed a lawsuit targeting a hacking network it tracks as Storm-2139. The lawsuit, originally filed in December 2024 and since amended, names four individuals from Iran, the UK, China, and Vietnam who are allegedly involved in creating and distributing tools that bypass the safety guardrails of AI services such as Microsoft's Azure OpenAI Service.
The lawsuit accuses the hackers of using exposed credentials to gain unauthorized access to AI platforms, then modifying and selling tools that generate harmful content such as deepfakes and illegal imagery, often centered on celebrities and sexually explicit material. Microsoft's investigation identifies three categories of actors within the operation: creators who develop the abusive tools, providers who modify and distribute them, and end users who employ these tools for malicious purposes.
The suspects named by Microsoft include:
- Arian Yadegarnia ("Fiz") from Iran
- Alan Krysiak ("Drago") from the UK
- Ricky Yuen ("cg-dot") from Hong Kong, China
- Phat Phung Tan ("Asakuri") from Vietnam
Microsoft has identified more than a dozen individuals involved in the network, many of them located in countries including Iran, Austria, Vietnam, and the US. The company has also confirmed that two US-based suspects are under investigation but have not been publicly named.
Microsoft has already shut down a website used by the hackers and is working with law enforcement agencies around the world. The company has also faced harassment from the group, whose members attempted to expose the personal details of lawyers handling the case. In a blog post, Steven Masada, assistant general counsel at Microsoft's Digital Crimes Unit, confirmed that criminal referrals are being prepared for US and international authorities.
This lawsuit is part of a broader effort to prevent the misuse of AI technologies and to counter the growing threat posed by malicious cyber actors targeting digital platforms.