Saturday, October 11, 2025

The Growing Threat of Malicious Code in Open Source AI Models

Companies building AI tools with open-source models are confronting a major security crisis. Cybercriminals are hiding malicious code within AI models on popular platforms like Hugging Face, and these threats are slipping past standard security checks. Experts are now warning that businesses cannot rely solely on platform-level security and must implement their own robust scanning and verification processes to protect themselves from these hidden dangers.

Malicious Models are Bypassing Security Scans

In a concerning development, security researchers have found that AI models containing harmful code are being successfully uploaded to public repositories and marked as safe. Attackers have developed sophisticated methods to evade automated security tools, creating a false sense of security for developers and businesses who download these files.

The security firm ReversingLabs recently uncovered two AI models on Hugging Face that contained malicious payloads. According to Tomislav Pericin, the firm’s chief software architect, the attackers used a technique called “NullifAI” to hide the harmful code within Pickle format files. The malicious models passed all automated checks, appearing safe to users who downloaded them.

This highlights a fundamental flaw in the open-source ecosystem. Anyone can upload a model, and bad actors are actively exploiting this freedom. Pericin warns that similar tactics could easily be used on other major platforms, including TensorFlow Hub and PyTorch Hub, putting countless organizations at risk.

The Persistent Threat of Insecure Pickle Files

The use of Pickle files in AI development remains a significant and well-known vulnerability. Pickle is Python’s built-in object-serialization format, and for years cybersecurity experts have cautioned against it because loading a pickled file can execute arbitrary code, making it an ideal vehicle for delivering malware.
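To see why experts consider the format so dangerous, here is a minimal, self-contained sketch of the mechanism, an illustration of the general technique rather than code from any specific attack mentioned above. Pickle lets any object define a __reduce__ hook that returns a callable and its arguments, and that callable runs the moment the file is deserialized:

```python
import os
import pickle


class MaliciousPayload:
    # pickle calls __reduce__ to learn how to rebuild this object; it
    # returns a callable plus arguments, which pickle invokes during
    # loading -- the hook that attackers abuse to run arbitrary commands.
    def __reduce__(self):
        return (os.system, ("echo arbitrary code ran at load time",))


blob = pickle.dumps(MaliciousPayload())

# Merely deserializing the bytes executes the embedded command.
pickle.loads(blob)
```

In a real attack the payload is buried inside a model file, so simply calling a framework’s load function on an untrusted download can be enough to compromise the machine.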

Tom Bonner, VP of research at the AI security firm HiddenLayer, confirms that this is not just a theoretical risk. “Organizations are getting compromised through machine learning models. It’s not as common as ransomware, but it does happen,” he stated, expressing frustration that the format is still in wide use despite the known dangers.

While platforms like Hugging Face have tried to address the issue with tools like PickleScan, attackers have already found ways to bypass these defenses. The application security company Checkmarx discovered multiple methods to get around the scans. A much safer alternative, Safetensors, has been developed, but its adoption is not yet universal.
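For contrast, here is a hedged sketch of the Safetensors pattern, using the safetensors Python package and illustrative tensor names. The format stores only raw tensor data plus a small JSON header, so loading it is pure data deserialization with no hook for attacker-supplied code:

```python
import torch
from safetensors.torch import load_file, save_file

# Illustrative weights; any dict of named tensors works the same way.
weights = {
    "linear.weight": torch.randn(4, 4),
    "linear.bias": torch.zeros(4),
}

# Writes raw tensor bytes plus a JSON header -- no executable objects.
save_file(weights, "model.safetensors")

# Loading reads data only; nothing in the file can trigger code execution.
restored = load_file("model.safetensors")
print(restored["linear.weight"].shape)
```

Switching a pipeline from Pickle-based checkpoints to Safetensors removes this particular class of attack, though it does nothing about the licensing and alignment issues discussed below.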

Beyond Code: The Hidden Legal and Ethical Dangers

The risks associated with open-source AI models extend beyond malicious code. Companies also face a confusing maze of legal and ethical challenges, particularly concerning licensing and model behavior.

Andrew Stiefel, a senior product manager at Endor Labs, points out that AI licensing is incredibly complex. An AI model consists of several parts, including the model architecture, the training data, and the model weights, each of which can have a different license. Many companies mistakenly assume all open-source models are free for commercial use, which can lead to serious legal consequences.

Model alignment, the process of ensuring an AI behaves as intended, is another major concern. Some models have been found to generate harmful or biased content. In one case, the DeepSeek model was manipulated into creating malware. Even heavily safeguarded models like OpenAI’s o3-mini were quickly “jailbroken” by researchers, showing that AI behavior can be unpredictable and a potential security liability.

How Businesses Can Mitigate AI Model Risks

Given the wide range of threats, experts urge businesses to treat open-source AI models with the same caution as any other third-party software dependency. A proactive and skeptical approach is necessary to avoid falling victim to these emerging cyber threats.

Instead of blindly trusting models, companies should implement a multi-layered defense strategy before integrating any open-source AI into their systems. Security professionals recommend a thorough vetting process that includes several key steps.

Endor Labs’ Stiefel suggests a straightforward checklist for any organization using open-source models:

  • Check the source: Investigate who created the model. Is it from a reputable research institution, a well-known company, or an anonymous user? A trustworthy source is generally a safer bet.
  • Monitor development activity: Look at the model’s history. How often is it updated? Are there active discussions about security issues, and are they being addressed promptly by the maintainers?
  • Scan for vulnerabilities: Use dedicated security tools to scan every model for hidden risks, malicious code, and insecure file formats like Pickle before it ever touches your production environment; a minimal illustrative check is sketched after this list.
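The scanning step can start simple. The sketch below is an illustrative first-pass check, not a substitute for the dedicated scanners discussed earlier: it uses Python’s standard-library pickletools module to walk a pickle file’s opcode stream and flag opcodes that can import modules or call functions during loading. The file name is a hypothetical placeholder, and, as the ReversingLabs and Checkmarx findings show, attackers have bypassed far more sophisticated checks, so a clean result is one signal among many, not a guarantee:

```python
import pickletools

# Opcodes that can import modules or invoke callables during unpickling.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}


def flag_suspicious_pickle(path: str) -> list[str]:
    """Return descriptions of potentially dangerous opcodes in a pickle file."""
    with open(path, "rb") as handle:
        data = handle.read()
    findings = []
    # genops walks the opcode stream without executing it; a malformed
    # stream (another evasion trick) raises an error, which is itself suspicious.
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings


if __name__ == "__main__":
    # "downloaded_model.pkl" stands in for whatever artifact was pulled
    # from a model hub before it reaches a production environment.
    for finding in flag_suspicious_pickle("downloaded_model.pkl"):
        print(finding)
```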

By adopting these practices, companies can harness the power of open-source AI innovation while significantly reducing their exposure to the growing security and legal risks.

Davis Emily
