Recent analysis from JFrog has underscored significant security vulnerabilities in popular machine learning (ML) frameworks, revealing that ML tooling is more susceptible to attack than longer-established software categories such as DevOps tools and web servers. The evaluation is timely: as machine learning adoption grows across sectors, effective security measures are increasingly necessary to prevent data breaches and operational disruption.

The report identifies MLflow as particularly exposed, with JFrog noting that it, along with 14 other open-source ML projects, has seen a rise in critical vulnerabilities. In total, 22 vulnerabilities have been recorded across these projects, drawing attention to severe risks in server-side components and to the potential for privilege escalation within these frameworks.

One notable vulnerability involves Weave, a toolkit developed by Weights & Biases (W&B) that is widely used for tracking and visualising ML model metrics. The WANDB Weave Directory Traversal vulnerability (CVE-2024-7340) permits low-privileged users to read arbitrary files across the filesystem due to inadequate input validation. Attackers can exploit the flaw to uncover sensitive information, including admin API keys, potentially enabling unauthorised privilege escalation.
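Weave's patched code is the authoritative reference; the sketch below is only a minimal illustration of the directory traversal pattern JFrog describes, using a hypothetical Flask endpoint and artifact directory to show how inadequate path validation exposes arbitrary files, and how resolving and containing the path closes the hole.

```python
import os
from flask import Flask, request, abort, send_file

app = Flask(__name__)
BASE_DIR = "/srv/artifacts"  # hypothetical artifact store

# Vulnerable pattern: the user-supplied name is joined directly, so a
# request like ?file=../../etc/passwd escapes BASE_DIR.
@app.route("/download")
def download():
    requested = request.args.get("file", "")
    return send_file(os.path.join(BASE_DIR, requested))

# Safer pattern: resolve the full path, then confirm it is still
# inside BASE_DIR before serving it.
@app.route("/download-safe")
def download_safe():
    requested = request.args.get("file", "")
    resolved = os.path.realpath(os.path.join(BASE_DIR, requested))
    if not resolved.startswith(os.path.realpath(BASE_DIR) + os.sep):
        abort(403)  # reject traversal attempts
    return send_file(resolved)
```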

ZenML, a management tool for MLOps pipelines, also exhibits critical access control vulnerabilities. These flaws allow attackers with limited access to elevate their permissions within ZenML Cloud, the managed version of ZenML, granting them entry to restricted information such as confidential secrets or model files. That escalated access could cause significant disruption, allowing malicious actors to alter ML pipelines or tamper with essential data.
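ZenML Cloud's code is not public, so the specifics of the flaw cannot be reproduced here; the following sketch, with made-up user and secret-store types, merely illustrates the broken access-control pattern the report points to, in which trusting a client-supplied role lets a low-privileged caller read secrets.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    role: str  # "viewer" or "admin", set by the server at login

SECRETS = {"prod-db-password": "..."}  # placeholder secret store

# Vulnerable pattern: the role comes from the client's request body,
# so any caller can simply claim to be an admin.
def get_secret_vulnerable(claimed_role: str, key: str) -> str:
    if claimed_role != "admin":
        raise PermissionError("admin role required")
    return SECRETS[key]

# Safer pattern: derive the role from the authenticated session the
# server itself established, never from client-supplied fields.
def get_secret_safe(user: User, key: str) -> str:
    if user.role != "admin":
        raise PermissionError("admin role required")
    return SECRETS[key]
```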

Additionally, a serious vulnerability has been identified in the Deep Lake database (CVE-2024-6507). This data storage solution, designed for AI applications, fails to properly sanitise commands when importing external datasets; an attacker could exploit this to execute arbitrary commands, jeopardising both the database and any applications connected to it.
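Deep Lake's actual import code is not reproduced here; the snippet below, built around a hypothetical `downloader` command, shows the generic command injection pattern at issue, where interpolating untrusted input into a shell string lets an attacker append commands of their own.

```python
import re
import subprocess

# Vulnerable pattern: the dataset name is interpolated into a shell
# string, so a value like "cats; rm -rf /" runs attacker commands.
def fetch_dataset_vulnerable(name: str) -> None:
    subprocess.run(f"downloader --dataset {name}", shell=True, check=True)

# Safer pattern: validate the name against an allow-list and pass
# arguments as a list so no shell ever parses them.
def fetch_dataset_safe(name: str) -> None:
    if not re.fullmatch(r"[A-Za-z0-9_./-]+", name):
        raise ValueError(f"invalid dataset name: {name!r}")
    subprocess.run(["downloader", "--dataset", name], check=True)
```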

Another instance of concern is Vanna AI, a tool that turns natural-language questions into SQL queries. The Vanna.AI Prompt Injection vulnerability (CVE-2024-5565) enables attackers to embed malicious instructions in the prompts the system processes. Such an attack can lead to remote code execution, putting the integrity of generated visualisations at risk and potentially facilitating SQL injection or data exfiltration.
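The sketch below is not Vanna's implementation; it is a hypothetical illustration of why executing model-generated code is risky: a prompt-injected question can steer the model into emitting arbitrary Python, and a bare `exec` of that output becomes remote code execution. Even the AST screening shown is only a gesture at mitigation; real deployments need proper sandboxing.

```python
import ast

def generate_chart_code(question: str) -> str:
    # Stand-in for an LLM call: with a prompt-injected question, the
    # "plotting code" that comes back can be anything at all.
    return "import os; os.system('id')"  # attacker-steered output

# Vulnerable pattern: run whatever the model produced.
def render_vulnerable(question: str) -> None:
    exec(generate_chart_code(question))  # remote code execution

# Illustrative (and still insufficient) mitigation: parse the code
# and reject imports before running it.
def render_screened(question: str) -> None:
    tree = ast.parse(generate_chart_code(question))
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError("imports not allowed in generated code")
    exec(compile(tree, "<generated>", "exec"))
```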

Mage.AI, another MLOps tool, is reported to contain a variety of vulnerabilities, including unauthorised shell access and weak path traversal checks, posing a risk of losing control of data pipelines, leaking files, and executing malicious commands. The breadth of the flaws discovered in Mage.AI represents a significant threat to data integrity and security across ML operations.
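As a hypothetical illustration of a weak path traversal check (not Mage.AI's code), the snippet below shows why scanning the raw string for ".." is not enough: `os.path.join` silently discards the base directory when handed an absolute path.

```python
import os

BASE_DIR = "/srv/pipelines"  # hypothetical project root

# Weak check: scans for "..", but "/etc/passwd" contains no ".." and
# os.path.join(BASE_DIR, "/etc/passwd") returns "/etc/passwd" anyway.
def resolve_weak(user_path: str) -> str:
    if ".." in user_path:
        raise ValueError("traversal detected")
    return os.path.join(BASE_DIR, user_path)

# Stronger check: normalise first, then verify containment.
def resolve_strict(user_path: str) -> str:
    base = os.path.realpath(BASE_DIR)
    candidate = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError("traversal detected")
    return candidate
```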

JFrog's findings highlight an operational gap in MLOps security: many businesses have yet to incorporate AI and ML security practices into their broader cybersecurity strategies, leaving themselves exposed to risks they may not even recognise. As AI and ML technologies continue to evolve and shape industries, securing the frameworks, datasets, and models that underpin them is becoming increasingly critical.

Source: Noah Wire Services