February 25, 2024



Protect AI has announced Guardian, a tool that lets organizations enforce security policies on machine learning models to prevent malicious code from entering their environments. It is built on Protect AI's open-source tool ModelScan, which scans ML models for unsafe code, and extends that coverage with proprietary scanning capabilities. The democratization of AI/ML through open-source foundation models on platforms like Hugging Face has increased security risk, since the open exchange of model files can spread malicious software among users. Guardian aims to address this by enabling enterprise-level enforcement and management of model security.
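ModelScan's internal checks are partly proprietary and not detailed here, but the core idea of serialization scanning can be illustrated with a minimal, hypothetical sketch: walking a pickle file's opcode stream and flagging opcodes that import or invoke callables. The module list, file path, and heuristics below are illustrative assumptions, not Protect AI's implementation.

```python
import pickletools

# Hypothetical illustration only; a real scanner such as ModelScan is far more
# thorough. This sketch flags pickle opcodes that pull in callables, which is
# how serialized models typically smuggle executable code.
SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "posix", "nt"}  # assumed list

def flag_unsafe_pickle(path: str) -> list[str]:
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, pos in pickletools.genops(data):
        # GLOBAL opcodes import arbitrary callables by name;
        # REDUCE then invokes a loaded callable during deserialization.
        if opcode.name == "GLOBAL" and arg:
            module = str(arg).split(" ")[0]
            if module.split(".")[0] in SUSPICIOUS_MODULES:
                findings.append(f"pos {pos}: imports {arg}")
        elif opcode.name == "REDUCE":
            # Noisy in practice (legitimate pickles also use REDUCE);
            # real scanners apply more precise rules.
            findings.append(f"pos {pos}: REDUCE (calls a loaded object)")
    return findings

if __name__ == "__main__":
    for issue in flag_unsafe_pickle("model.pkl"):  # placeholder path
        print(issue)
```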

CEO Ian Swanson emphasized the need for organizations to scan ML models for viruses and malicious code, since models are new assets in an organization's infrastructure. He highlighted the risk of model serialization attacks, in which malicious code is embedded in a model's contents during serialization, before distribution. Protect AI's Guardian acts as a secure gateway and uses proprietary vulnerability scanners to proactively scan open-source models for malicious code, ensuring that only secure, policy-compliant models are used on organizational networks.
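To see why serialized models are a viable attack vehicle, consider how Python's pickle format, used by many ML frameworks, behaves: an object can define __reduce__ so that merely loading the file executes attacker-chosen code. The snippet below is a self-contained, harmless illustration of the general technique; it does not depict how Guardian or ModelScan operate.

```python
import pickle

# Illustrative only: how a model serialization attack works in principle.
# A class can define __reduce__ so that simply *loading* the pickle runs
# attacker-chosen code -- no call into the "model" is needed.
class MaliciousPayload:
    def __reduce__(self):
        import os
        # Harmless stand-in for real malware (e.g., a reverse shell).
        return (os.system, ("echo 'arbitrary code ran at load time'",))

tampered_model = pickle.dumps(MaliciousPayload())

# The victim only deserializes what looks like model weights...
pickle.loads(tampered_model)  # ...and the embedded command executes.
```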

The tool integrates with existing security frameworks and complements Protect AI's Radar to give organizations broad visibility into their AI/ML threat surface. Last year, Protect AI launched ModelScan to scan AI/ML models for potential attacks and has since used it to evaluate more than 400,000 models hosted on Hugging Face, identifying those that are unsafe. Over 3,300 models were found capable of executing rogue code, underscoring the need for tools like Guardian to secure ML environments.


