Cybercriminals are exploiting stolen cloud credentials to hijack powerful AI models in a trend called “LLMjacking.” The latest target is DeepSeek, whose newly released models were compromised within days, and in one case, within 24 hours. This high-speed theft is used for everything from generating banned content to bypassing national restrictions, leaving unsuspecting victims with bills soaring by as much as 40,000%. The attacks highlight a growing underground economy for stolen AI access.
How Cybercriminals Are Stealing AI Access
Running powerful AI models is expensive, often costing organizations hundreds of thousands of dollars a year. Instead of paying, attackers have found a simpler way: they steal access. The process begins with obtaining stolen cloud service logins or API keys linked to AI applications.
Once they have the credentials, they route traffic through tools called OAI (OpenAI) reverse proxies, or ORPs. These proxies act as a hidden layer between the cybercriminal and the AI model: every request appears to come from the proxy and is billed to the stolen credential, hiding the attacker's identity and making the illicit activity very difficult to trace.
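Conceptually, an ORP is a pass-through handler that swaps the caller's identity for a stolen credential before forwarding each request. The sketch below is a minimal illustration of that mechanism, not code from any real ORP tool; the function and header handling are hypothetical, chosen only to show why the AI provider never sees the real attacker:

```python
# Simplified illustration of how an ORP launders AI API traffic:
# the attacker's client talks to the proxy, and the proxy re-signs
# each request with a stolen API key before forwarding it upstream.
# All names here are hypothetical, for explanation only.

def build_upstream_request(client_headers: dict, stolen_key: str) -> dict:
    """Return the headers the proxy would forward to the AI provider.

    Headers that could identify the real caller are stripped, and the
    Authorization header is replaced with the stolen credential, so the
    provider only ever sees -- and bills -- the victim's key.
    """
    forwarded = {
        k: v for k, v in client_headers.items()
        if k.lower() not in {"authorization", "x-forwarded-for", "cookie"}
    }
    forwarded["Authorization"] = f"Bearer {stolen_key}"
    return forwarded

# The attacker's own token never reaches the provider:
headers = build_upstream_request(
    {"Authorization": "Bearer attacker-token",
     "Content-Type": "application/json"},
    stolen_key="sk-victim-key",
)
```

From the provider's side, this traffic is indistinguishable from the victim's legitimate usage, which is why the attacks described below can run for hours or days before anyone notices.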
These criminal tools are becoming more advanced, often including password protection and techniques to hide their servers. New online communities on platforms like 4chan and Discord are dedicated to sharing these tools and stolen keys.
DeepSeek Models Hijacked at Alarming Speed
The cybersecurity firm Sysdig recently revealed just how fast these LLMjacking operations move. Attackers are actively monitoring for new AI model releases to exploit them immediately.
The compromise timeline for DeepSeek’s latest models shows an organized and rapid response from cybercriminals.
| AI Model | Release Date | Time to Compromise |
|---|---|---|
| DeepSeek-V3 | December 26, 2024 | Within days |
| DeepSeek-R1 | January 20, 2025 | Within 24 hours |
“This isn’t just a fad anymore,” warned Crystal Morin, a cybersecurity strategist at Sysdig. “This is far beyond where it was when we first discovered it last May.” The speed of these attacks shows that LLMjacking has become a widespread, systematic threat.
The Staggering Financial Cost for Victims
When AI access is stolen, the original account holder is left to pay the bill. Attackers try to avoid immediate detection by spreading their usage across many stolen accounts. One reverse proxy discovered by Sysdig was using 55 different stolen DeepSeek API keys.
But even with these tactics, the costs can explode without warning. In one recent case, an AWS user whose monthly bill was typically just $2 saw it jump to $730 in a matter of hours. By the time the user managed to shut down the account, the potential charges exceeded $20,000.
For large companies with massive cloud budgets, a quiet LLMjacking attack could go unnoticed for much longer, potentially leading to losses in the millions of dollars before it’s caught.
How to Prevent LLMjacking Attacks
Both individuals and businesses must take proactive steps to secure their accounts against this growing threat. Waiting until you receive a massive bill is too late. Experts recommend a multi-layered security approach to protect your credentials.
Here are some of the most effective prevention methods:
- Enable Cost Alerts: Set up real-time notifications for your cloud spending. This is often the first and fastest way to catch an unusual spike in activity.
- Use Multi-Factor Authentication (MFA): Securing your accounts with MFA makes it much harder for attackers to log in, even if they have your password.
- Rotate API Keys Regularly: Don’t use the same API key forever. Changing your keys frequently limits the window of opportunity for criminals if a key is stolen.
- Monitor Usage Patterns: Implement tools that can detect strange activity on your account, which can help identify a compromise before costs get out of control.
As cybercriminals refine their methods, staying vigilant with security best practices is the only way to avoid becoming the next LLMjacking victim.