Cybercriminals are ramping up sophisticated “LLMjacking” operations, exploiting stolen credentials to access and misuse powerful AI models. The latest victim? DeepSeek, whose models were hijacked within days of release. The trend signals a growing underground economy in which stolen AI access fuels everything from NSFW content generation to the circumvention of national bans.
How LLMjacking Works: AI Theft on the Rise
Running large language models (LLMs) isn’t cheap. AI services like OpenAI’s GPT-4 can cost users hundreds of thousands of dollars annually if used at full capacity. Cybercriminals have figured out a way around these costs—by stealing access.
It all starts with stolen credentials. Attackers obtain cloud service account logins or API keys associated with AI applications. These credentials are then tested through scripts to verify access to high-end models.
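The check itself is usually nothing more than a cheap authenticated request against the provider’s API. The same probe is useful defensively, for confirming whether one of your own leaked keys is still live. Below is a minimal sketch in Python, assuming an OpenAI-compatible endpoint; the URL and key shown are placeholders, not anything observed in these attacks.

```python
import requests


def key_is_live(api_key: str, base_url: str = "https://api.openai.com/v1") -> bool:
    """Probe an OpenAI-compatible endpoint with a cheap, read-only request.

    A 200 response from the model-listing route means the key authenticates;
    401/403 means it has been revoked or lacks permission. Handy for checking
    whether one of *your own* exposed keys is still active so you can rotate it.
    """
    resp = requests.get(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    return resp.status_code == 200


# Example with a placeholder key: if this returns True, revoke the key immediately.
# print(key_is_live("sk-...placeholder..."))
```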
Once access is confirmed, cybercriminals use “OAI” (OpenAI) reverse proxies (ORPs) to mask their activities. ORPs act as a middle layer between the stolen credentials and the AI models, hiding user identities and making detection difficult. These proxies are constantly evolving, incorporating:
- Password protections
- Obfuscation techniques (such as requiring users to disable CSS for visibility)
- Prompt logging elimination
- Cloudflare tunnels to conceal virtual private servers (VPS) and IP addresses
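Stripped of those obfuscation layers, the “middle layer” role an ORP plays is conceptually the same one a legitimate key-hiding API gateway plays: the client talks to the proxy, and the proxy injects a key it holds server-side before forwarding the request upstream. A minimal sketch of that core pattern, assuming Flask and an OpenAI-compatible upstream (both are illustrative placeholders, not details from the Sysdig research):

```python
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
UPSTREAM = "https://api.openai.com/v1/chat/completions"  # placeholder upstream


@app.post("/v1/chat/completions")
def proxy_chat():
    # The caller never sees the real credential; the proxy injects a key it
    # holds server-side before forwarding the request to the upstream API.
    upstream_key = os.environ["UPSTREAM_API_KEY"]
    resp = requests.post(
        UPSTREAM,
        headers={"Authorization": f"Bearer {upstream_key}"},
        json=request.get_json(force=True),
        timeout=60,
    )
    return jsonify(resp.json()), resp.status_code


if __name__ == "__main__":
    app.run(port=8080)
```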
New communities on 4chan and Discord are thriving around these tools, with users exploiting stolen AI access to generate banned content, write scripts, and even handle routine tasks like essay writing. Some users in countries like Russia, Iran, and China are leveraging ORPs to bypass restrictions on services like ChatGPT.
DeepSeek Models Targeted Just Days After Release
LLMjacking isn’t just an isolated scam—it’s moving at a rapid pace. The cybersecurity firm Sysdig recently discovered that DeepSeek models were compromised within days of their official launches.
- DeepSeek-V3 (Released: Dec. 26, 2024) → Stolen within days
- DeepSeek-R1 (Released: Jan. 20, 2025) → Stolen within 24 hours
This speed of compromise shows how cybercriminals are closely monitoring new AI model releases, waiting for opportunities to exploit them.
“This isn’t just a fad anymore,” says Crystal Morin, cybersecurity strategist at Sysdig. “This is far beyond where it was when we first discovered it last May.”
The High Price of Stolen AI Access
Somebody always foots the bill for these stolen AI sessions. Attackers don’t want their victims to notice suspicious spikes in usage too quickly, so ORPs spread requests across many stolen credentials rather than hammering a single account.
Sysdig found one ORP that integrated 55 different DeepSeek API keys, along with stolen access to other AI platforms. This load balancing strategy helps avoid detection while ensuring uninterrupted model access.
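The “load balancing” here is not exotic. It is the same round-robin key rotation a legitimate multi-tenant application might use to spread traffic across several of its own API keys so that no single account’s usage stands out. A minimal sketch, with a placeholder key pool:

```python
from itertools import cycle

# Placeholder pool. An ORP of the kind Sysdig describes holds dozens of
# stolen keys here; a legitimate app would hold its own provisioned keys.
API_KEYS = ["key-a", "key-b", "key-c"]
_key_pool = cycle(API_KEYS)


def next_key() -> str:
    """Round-robin over the pool so usage is spread evenly across accounts."""
    return next(_key_pool)
```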
Still, the system isn’t foolproof.
A recent case involved an unsuspecting AWS user whose cloud credentials were compromised. His typical monthly AWS bill of $2 skyrocketed to $730 within hours, an increase of more than 36,000%.
“He woke up one morning and saw his bill had exploded in just a few hours,” Morin explains. “By the time he shut everything down, he was looking at potential charges exceeding $20,000.”
AWS ultimately refunded the victim, but that’s not always the case. Many enterprises could suffer devastating financial losses before they even realize what’s happening.
Enterprises Face Growing LLMjacking Risks
For businesses and organizations, the consequences of LLMjacking are even more severe. While an individual is likely to notice a sudden spike in a small bill, corporations with extensive cloud usage might not spot the fraudulent charges for some time.
Potential risks include:
- Massive financial losses: Large-scale LLM usage could rack up six-figure costs if credentials are compromised.
- Regulatory repercussions: If stolen AI access is used for illegal activities, companies could face compliance violations.
- Reputation damage: A high-profile breach can erode customer trust and investor confidence.
Morin warns that an enterprise-scale attack could result in losses of millions of dollars before detection.
How to Prevent LLMjacking Attacks
Businesses and individuals must be proactive in securing their cloud and AI credentials. Some best practices include:
- Enable cost alerts: Many victims don’t realize they’ve been compromised until it’s too late. Setting real-time spending alerts can help catch anomalies early (a minimal setup sketch follows this list).
- Use multi-factor authentication (MFA): Securing accounts with MFA adds an extra layer of protection against unauthorized access.
- Rotate API keys frequently: Changing keys regularly can minimize the risk of long-term exploitation.
- Monitor cloud usage patterns: Implementing AI-driven anomaly detection can help identify unusual activity before costs spiral out of control.
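As a concrete example of the first item, a spending alert can be created programmatically. Below is a minimal sketch using boto3 and AWS Budgets; the account ID, dollar limit, and email address are placeholders, and the caller needs billing permissions in the account.

```python
import boto3


def create_monthly_cost_alert(account_id: str, limit_usd: str, email: str) -> None:
    """Create a monthly cost budget that emails a warning at 80% of the limit
    and again when actual spend exceeds 100% of it."""
    budgets = boto3.client("budgets")
    budgets.create_budget(
        AccountId=account_id,
        Budget={
            "BudgetName": "monthly-cost-alert",
            "BudgetLimit": {"Amount": limit_usd, "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": threshold,
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
            }
            for threshold in (80.0, 100.0)
        ],
    )


# Example with placeholder values:
# create_monthly_cost_alert("123456789012", "50", "you@example.com")
```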
As LLMjacking evolves, security measures must evolve with it. Cybercriminals are refining their tactics, making stolen AI access harder to detect. With growing financial and regulatory stakes, organizations must take this threat seriously—or risk becoming the next victim.