AI models take off, leaving security behind

As companies rush to develop and test artificial intelligence and machine learning (AI/ML) models in their products and daily operations, model security is often an afterthought, leaving them at risk of backdoored models and model hijacking.

According to survey data published by HiddenLayer, an AI/ML security company, companies with their own ML teams have more than 1,600 models in production, and 61% of companies acknowledge that they do not have good visibility into all of their ML assets. The result: attackers have identified models as a potential vector for compromising businesses, as shown by a recent investigation by software security company JFrog, which found models published in the Hugging Face repository containing malicious files that create a backdoor on the victim’s computer.

Companies must consider the security of AI/ML models and their MLOps pipeline as they rush to develop AI-enabled features, says Eoin Wickens, director of technical research at HiddenLayer.

“With the democratization of AI and the ease with which pre-trained models can be downloaded from model repositories these days, you can get a model, refine it for purpose, and put it into production more easily than ever before,” he says. “It remains an open question as to how we can ensure the safety and security of these models once they are deployed.”

The pace of AI adoption worries security experts. At the Black Hat Asia conference in April, two Dropbox security researchers will present their investigation into how malicious models can attack the environments in which they run. The research identified two classes of attack: model hijacking, where executing the model allows embedded malware to compromise the host environment, and backdooring, where the model is modified to influence its behavior and produce certain results.

Without efforts to verify the security and integrity of models, attackers could easily find ways to execute code or alter the resulting output, says Adrian Wood, a security engineer on Dropbox’s red team and a co-presenter at Black Hat Asia.

Data scientists and AI developers are “using models from repositories created by all kinds of people and all kinds of organizations, and they’re grabbing and running those models,” he says. “The problem is they’re just programs, and any program can contain anything, so when they run it, it can cause all sorts of problems.”

The fog of artificial intelligence models

The estimate of more than 1,600 AI models in production may seem high, but companies with teams focused on data science, machine learning or data-centric AI have many models in production, says Tom Bonner, vice president of research at HiddenLayer. More than a year ago, when the company’s red team conducted a preliminary evaluation of a financial services organization, they expected only a handful of machine learning and artificial intelligence models to be in production. The real number? More than 500, he says.

“We’re starting to see that, in a lot of places, they’re training maybe small models for very specific tasks, but obviously that matters to the kind of overall AI ecosystem at the end of the day,” Bonner says. “So whether it’s finance, cybersecurity or payment processes [that they are applying AI to], we’re starting to see a huge increase in the number of models that people are training internally.”

Companies’ lack of visibility into which models their data scientists and ML application developers have downloaded means they lack control over their AI attack surface.

Pickle, Keras: Easy to insert malware

Models are often created using frameworks that save all model data in file formats capable of running code on an unwary data scientist’s machine. Popular frameworks include TensorFlow, PyTorch, Scikit-Learn and, to a lesser extent, Keras, which is built on top of TensorFlow. In the rush to adopt generative AI, many companies are also downloading pre-trained models from sites like Hugging Face, TensorFlow Hub, PyTorch Hub, and Model Zoo.

Typically, models are saved as Pickle files by Scikit-Learn (.pkl) and PyTorch (.pt), and as Hierarchical Data Format version 5 (HDF5) files, often used by Keras and TensorFlow. Unfortunately, these file formats can contain executable code and often rely on insecure serialization functions that are prone to vulnerabilities. In either case, an attacker could compromise the machines on which the model runs, says Diana Kelley, chief cybersecurity officer at Protect AI, an AI application security company.
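To see why these formats are risky, consider the minimal sketch below (an illustration written for this article, not taken from the researchers’ demos). In Python’s Pickle format, any object can define a __reduce__ method, and the function it names is called the moment the file is deserialized; the payload here only echoes a message, but it could just as easily fetch and run malware.

```python
# Minimal sketch of why pickle-based model files are dangerous: any object
# can define __reduce__, and unpickling calls whatever function it returns.
# The payload below is a harmless echo, but it runs at load time, before
# any "model" is ever used.
import os
import pickle


class MaliciousStub:
    def __reduce__(self):
        # Tells pickle to rebuild this object by calling os.system(...)
        return (os.system, ("echo 'code executed while loading the model'",))


payload = pickle.dumps(MaliciousStub())

# A data scientist "loading a model" is all it takes to trigger the payload.
pickle.loads(payload)
```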

“Because of the way models work, they tend to run with very high privileges within an organization, so they have a lot of access to things because they have to touch or get input from data sources,” she says. “So if you can put something malicious into a model, then that would be a very viable attack.”

Hugging Face, for example, now hosts more than 540,000 models, up from fewer than 100,000 at the end of 2022. Protect AI scanned Hugging Face and found 3,354 unsafe models, approximately 1,350 of which were missed by Hugging Face’s own scanner, the company said in January.

Companies need to be able to trust training data

To secure the development and deployment of AI models, organizations should integrate security throughout the ML pipeline, a concept often referred to as MLSecOps, experts say.

That visibility should start with the training data used to build models. Ensuring that models are trained on safe, high-quality data that cannot be modified by a malicious source, for example, is critical to being able to trust the final AI/ML system. In a paper published last year, a team of researchers that included Google DeepMind engineer Nicholas Carlini found that attackers could easily poison AI training datasets by purchasing domains known to be included in those datasets.
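One hedged sketch of how a team might act on that advice, assuming it keeps a manifest of SHA-256 digests recorded when a dataset was vetted: verify the files against those digests before every training run and refuse to proceed if anything has changed. The file names, digest values, and manifest format below are hypothetical, and only Python’s standard library is used.

```python
# Hypothetical sketch: check training files against digests recorded when the
# dataset was vetted, so silently modified (potentially poisoned) data never
# reaches a training job. File names and digest values are placeholders.
import hashlib
from pathlib import Path

APPROVED_DIGESTS = {
    "train_split.csv": "9f2c1d...",  # placeholder SHA-256 values
    "labels.csv": "a41b77...",
}


def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_dataset(data_dir: str) -> None:
    for name, expected in APPROVED_DIGESTS.items():
        actual = sha256_of(Path(data_dir) / name)
        if actual != expected:
            raise RuntimeError(
                f"{name} does not match its vetted digest; "
                "refusing to train on possibly tampered data"
            )


# verify_dataset("/data/fraud-detection")  # run before every training job
```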

The team responsible for securing the ML pipeline should know every data source used to create a specific model, says HiddenLayer’s Wickens.

“You need to understand the lifecycle of ML operations, from the process of data collection and curation to feature engineering to model creation and deployment,” he says. “The data you use may be fallible.”

Scoring models for security

Companies can start by examining metrics that suggest a model’s underlying security. Similar to the world of open source software, where companies increasingly use tools that grade open source projects on various attributes to create a security report card, the information available about a model can hint at its underlying security.

Relying on downloaded models can be difficult, as many are built by ML researchers who may have little in the way of a track record. HiddenLayer’s ModelScanner, for example, analyzes models from public repositories and scans them for malicious code. Automated tools, like Protect AI’s Radar, produce a bill of materials for the components used in an AI pipeline and then determine whether any of those components pose a risk.
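Those commercial scanners are proprietary, but the general idea behind static checks on pickle-based model files can be sketched with Python’s standard pickletools module: walk the opcode stream without ever deserializing the file and flag any import that unpickling would perform outside a small allowlist. The allowlist and helper below are illustrative only, not a reimplementation of either vendor’s tool.

```python
# Illustrative sketch of a static pickle check: walk the opcode stream with
# pickletools (never deserializing the file) and flag the callables that
# unpickling would import and invoke.
import pickletools

SAFE_PREFIXES = ("numpy", "torch", "sklearn", "collections")  # illustrative allowlist


def flag_suspicious_imports(path: str) -> list[str]:
    findings = []
    recent_strings = []  # operands that STACK_GLOBAL pulls off the stack
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(str(arg))
        ref = None
        if opcode.name == "GLOBAL" and arg:
            ref = str(arg).replace(" ", ".")     # arg looks like "os system"
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            ref = ".".join(recent_strings[-2:])  # e.g. "os" + "system"
        if ref and not ref.startswith(SAFE_PREFIXES):
            findings.append(ref)
    return findings


# print(flag_suspicious_imports("downloaded_model.pkl"))
```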

Companies need to quickly deploy an ecosystem of security tools around ML components in much the same way that open source projects have built security features for that ecosystem, says Protect AI’s Kelley.

“All the lessons we’ve learned about protecting open source and using open source responsibly and safely will be very valuable as the entire technical planet continues the journey to adopting AI and ML,” she says.

Overall, companies should start gaining more visibility into their ML pipelines. Without this knowledge, it’s difficult to prevent model-based attacks, Kelley says.


