Malicious Machine Learning Models Discovered on Hugging Face: Report

Hugging Face, the artificial intelligence (AI) and machine learning (ML) hub, was found to be hosting malicious ML models. A cybersecurity research firm discovered two models containing code that could be used to package and distribute malware to those downloading the files. According to the researchers, the threat actor used a hard-to-detect method that abuses Pickle file serialization to insert the malicious software. The researchers said the malicious ML models were reported, and Hugging Face has since removed them from the platform.

Researchers discovered malicious ML models on Hugging Face

ReversingLabs, a cybersecurity research firm, discovered the malicious ML models and the novel exploitation technique that threat actors are using on Hugging Face. Notably, a large number of developers and companies host open-source AI models on the platform for others to download and use.

The firm found that the modus operandi of the exploit centres on Pickle file serialization. For the unaware, ML models are stored in a variety of data serialization formats so that they can be shared and reused. Pickle is a Python module used to serialize and deserialize ML model data. It is generally considered an unsafe data format, as arbitrary Python code can be executed during the deserialization process.
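To see why deserialization is the weak point, consider this minimal sketch (the class name and shell command are purely illustrative): Python's pickle protocol lets an object's __reduce__ method return a callable, and pickle invokes that callable while loading the data.

import pickle

# Illustrative only: __reduce__ instructs pickle to call an arbitrary
# function during deserialization. A real payload would open a reverse
# shell or drop malware rather than echo a message.
class MaliciousPayload:
    def __reduce__(self):
        import os
        return (os.system, ("echo 'this ran during unpickling'",))

data = pickle.dumps(MaliciousPayload())
pickle.loads(data)  # merely loading the bytes runs the command

Nothing in the serialized bytes looks like executable code; the danger only materializes when pickle.loads() reconstructs the object.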

On closed platforms, Pickle files are limited to data coming from trusted sources. However, since Hugging Face is an open platform, the use of these files allows attackers to abuse the system to hide malware payloads.
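One common mitigation, following the restricted-unpickler pattern shown in the Python documentation, is to limit which globals a Pickle stream is allowed to resolve. The allow-list below is an example and would be tailored to the data being loaded:

import builtins
import io
import pickle

# Example allow-list; anything outside it cannot be resolved.
SAFE_BUILTINS = {"range", "complex", "set", "frozenset", "slice"}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Permit only allow-listed builtins; reject everything else,
        # including os.system, subprocess.Popen, and similar callables.
        if module == "builtins" and name in SAFE_BUILTINS:
            return getattr(builtins, name)
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()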

During the investigation, the firm found two models on Hugging Face containing malicious code. Notably, these ML models evaded the platform's safety measures and were not flagged as unsafe. The researchers dubbed the malware-insertion technique “nullifAI”, as it involves nullifying the existing protections the AI community has built around ML models.

These models were stored in the PyTorch format, which is essentially a compressed Pickle file. The researchers found that the models were compressed using the 7z format, which prevented them from being loaded with PyTorch's torch.load() function. The compression also prevented Hugging Face's Picklescan tool from flagging them as unsafe.
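For context, a checkpoint written by torch.save() is a ZIP archive whose data.pkl member holds the pickled object graph, which is why a 7z-compressed file cannot be opened the normal way. A rough sanity check along those lines (the file path is hypothetical) might look like this:

import zipfile

def looks_like_pytorch_checkpoint(path: str) -> bool:
    # torch.save() writes a ZIP archive containing a data.pkl member;
    # a 7z archive is not a valid ZIP, so this check fails for it.
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as archive:
        return any(name.endswith("data.pkl") for name in archive.namelist())

print(looks_like_pytorch_checkpoint("suspect_model.bin"))  # hypothetical path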

The researchers warned that this exploit could be dangerous, as developers downloading these models would inadvertently end up installing malware on their machines. The cybersecurity firm reported the issue to the Hugging Face security team on January 20 and said the models were removed in less than 24 hours. Additionally, the platform is said to have updated its Picklescan tool to better identify such threats in “broken” Pickle files.
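The report does not detail Picklescan's internals, but the idea of scanning “broken” Pickle files can be sketched with Python's standard pickletools module: opcodes are read sequentially, so a dangerous import near the start of a stream can still be flagged even when the tail is malformed. The deny-list and the crude string tracking below are simplifying assumptions, not Picklescan's actual logic:

import pickletools

# Simplified example deny-list; real scanners cover far more callables.
DANGEROUS = {("os", "system"), ("posix", "system"),
             ("subprocess", "Popen"), ("builtins", "eval")}

def flag_dangerous_globals(data: bytes):
    findings = []
    strings = []  # crude stand-in for the stack that STACK_GLOBAL reads
    try:
        for opcode, arg, pos in pickletools.genops(data):
            if opcode.name == "GLOBAL":
                module, _, name = str(arg).partition(" ")
                if (module, name) in DANGEROUS:
                    findings.append((pos, module, name))
            elif opcode.name in ("UNICODE", "BINUNICODE", "SHORT_BINUNICODE"):
                strings.append(str(arg))
            elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
                if (strings[-2], strings[-1]) in DANGEROUS:
                    findings.append((pos, strings[-2], strings[-1]))
    except Exception:
        # A malformed tail is expected in a broken pickle; keep findings.
        pass
    return findings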