Wednesday, April 16, 2025

Microsoft Names and Shames Hackers Behind AI Exploits in LLMjacking Crackdown

Microsoft is escalating its fight against cybercriminals exploiting generative AI platforms by publicly naming individuals allegedly involved in LLMjacking schemes. The company has taken legal action against a group accused of selling unauthorized access to Azure AI services, exposing their operation and the methods they used to bypass security controls.

The Cybercriminals Behind LLMjacking

In an unusual move, Microsoft has identified four individuals linked to the illicit sale of AI-powered services:

  • Arian Yadegarnia (Iran)
  • Alan Krysiak (UK)
  • Ricky Yuen (Hong Kong)
  • Phát Phùng Tấn (Vietnam)

These individuals were allegedly facilitating access to Microsoft’s AI tools, providing users with instructions on bypassing restrictions to generate inappropriate images. According to Microsoft, this activity directly violated the terms of service for Azure AI and required deliberate manipulation to sidestep the built-in safeguards.

“This activity is prohibited under the terms of use for our generative AI services and required deliberate efforts to bypass our safeguards,” Steven Masada, assistant general counsel at Microsoft’s digital crimes unit, stated.

In response, Microsoft filed legal action against the accused and seized a website connected to the operation. The group did not take the move lying down: shortly after the lawsuit was filed, Microsoft attorneys were doxed, their personal details leaked online in apparent retaliation.


What is LLMjacking?

LLMjacking follows a pattern similar to proxyjacking and cryptojacking, where bad actors hijack computational resources for unauthorized purposes. However, instead of mining cryptocurrency or selling off a victim's bandwidth as proxy nodes, LLMjacking involves hijacking AI services to generate content that violates platform policies.

Here’s how it typically works:

  1. Attackers scrape exposed API keys and credentials from public sources.
  2. They sell access to compromised AI services.
  3. Customers use these illicit services to generate banned content—from deepfake images to AI-powered scams.

Storm-2139, the group Microsoft has been tracking, exploited exposed Azure API keys to enable unrestricted AI use. But this isn’t limited to Microsoft. Other AI platforms, including OpenAI, Anthropic, and DeepSeek, have also been targeted in LLMjacking attacks.
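Because the scheme hinges on credentials leaking in the first place, defenders can turn step 1 around and scan their own code for exposed secrets before attackers do. The sketch below is a minimal, hypothetical example of such a scan; the regex patterns and file extensions are illustrative assumptions, not the actual key formats of any provider, and purpose-built scanners such as gitleaks or truffleHog ship far more precise rule sets.

```python
import re
from pathlib import Path

# Illustrative patterns only (assumptions for this sketch): a bare
# 32-character token match like the first one will produce false positives.
KEY_PATTERNS = {
    "Azure-style key": re.compile(r"\b[A-Za-z0-9]{32}\b"),
    "OpenAI-style key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

# File types worth checking; adjust for your codebase.
SCAN_SUFFIXES = {".py", ".js", ".json", ".yaml", ".yml", ".cfg"}

def scan_for_exposed_keys(root: str) -> list[tuple[str, str]]:
    """Walk a source tree and flag strings that look like hard-coded API keys."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in SCAN_SUFFIXES:
            continue
        text = path.read_text(errors="ignore")
        for label, pattern in KEY_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings

if __name__ == "__main__":
    for location, label in scan_for_exposed_keys("."):
        print(f"Possible {label} in {location}")
```

Running a check like this in CI, before code is pushed to a public repository, closes off the "public sources" that LLMjacking operations scrape.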

How AI Models Are Being Exploited

Microsoft’s investigation found that Storm-2139 operated through a three-tiered system:

  1. Creators – Develop tools to manipulate AI models.
  2. Providers – Modify and distribute these tools, offering tiered services.
  3. Customers – Use these tools for illegal activities.

These attacks aren’t just about unauthorized AI use. Cybercriminals actively manipulate AI models, tricking them into generating harmful content.

“Attackers not only resold unauthorized access but actively manipulated AI models to generate harmful content, bypassing built-in safety mechanisms,” said Patrick Tiquet, vice president of security and architecture at Keeper Security.

The Impact of AI Hijacking

Selling unauthorized access to AI services opens the door to a range of scams and harmful activities. Some of the known abuses include:

  • AI Girlfriend Bots – Erotic AI chats that violate platform policies.
  • Deepfake Content – Fake images of public figures.
  • Phishing Scams – AI-powered attacks impersonating individuals or organizations.

The ripple effect of these scams goes beyond just the direct victims. Once stolen credentials hit underground marketplaces, there’s no telling who might use them next.

“The most concerning aspect is that once credentials are sold on illicit marketplaces, there’s no predicting what damage will follow,” said J. Stephen Kowski, field CTO at SlashNext.

Protecting AI Models from Cyber Threats

The rise of LLMjacking makes it clear: AI security needs to be tightened before cybercriminals gain even more control. Experts suggest several key measures:

  • Restrict access to AI models on a need-to-use basis.
  • Strengthen authentication for API keys and credentials.
  • Store API keys securely in digital vaults, as sketched below.

“As organizations adopt AI tools, they also expand their attack surface with applications holding sensitive data,” said Rom Carmel, CEO of Apono. “To securely leverage AI and the cloud, access to sensitive systems should be restricted on a need-to-use basis, minimizing opportunities for malicious actors.”
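As a concrete illustration of the last two measures, the minimal sketch below pulls an API key out of Azure Key Vault at runtime using Microsoft's azure-identity and azure-keyvault-secrets libraries, rather than hard-coding it. The vault URL and secret name are placeholders invented for this example.

```python
# pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault for this sketch; substitute your own vault's URL.
VAULT_URL = "https://my-vault.vault.azure.net"

def fetch_api_key(secret_name: str = "azure-openai-api-key") -> str:
    """Fetch an API key from Azure Key Vault at runtime instead of hard-coding it."""
    # DefaultAzureCredential tries managed identity, environment variables,
    # and local CLI login in turn, so no secret lives in the source tree.
    credential = DefaultAzureCredential()
    client = SecretClient(vault_url=VAULT_URL, credential=credential)
    return client.get_secret(secret_name).value

if __name__ == "__main__":
    key = fetch_api_key()
    # Use the key for the duration of the request; never log it or write it to disk.
    print("Retrieved key of length", len(key))
```

Because DefaultAzureCredential authenticates through a managed identity in production and a developer's CLI login locally, the same code runs in both environments and no long-lived key ever needs to be committed to source control, which is exactly the exposure Storm-2139 exploited.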

Microsoft’s aggressive legal push against LLMjacking marks a significant step in curbing AI abuse. Whether public exposure and legal pressure will be enough to stop these cybercriminals remains to be seen.

Davis Emily
Emily is a versatile and passionate content writer with a talent for storytelling and audience engagement. With a degree in English and expertise in SEO, she has crafted compelling content for various industries, including business, technology, healthcare, and lifestyle, always capturing her unique voice.

