Researchers at North Carolina State University have unveiled a method for extracting artificial intelligence (AI) models by capturing the electromagnetic signals emitted by the computers running them, reporting an accuracy rate of 99%. The discovery presents potential challenges for companies heavily invested in proprietary AI models, such as OpenAI, Anthropic, and Google.
Lars Nyman, chief marketing officer at CUDO Compute, discussed the implications of such advances in a conversation with PYMNTS. He noted, “AI theft isn’t just about losing the model. It’s the potential cascading damage, i.e. competitors piggybacking off years of R&D, regulators investigating mishandling of sensitive IP, [and] lawsuits from clients who suddenly realize your AI ‘uniqueness’ isn’t so unique.” His comments suggest that the growing need to protect AI models from theft may usher in an era of standardized security audits, akin to SOC 2 or ISO certifications, intended to distinguish secure actors from those with reckless practices.
As AI becomes integral to companies seeking competitive advantages, the threat posed by hackers targeting AI models is on the rise. Recent findings indicate that thousands of malicious files have been uploaded to Hugging Face, a prominent repository for AI tools, potentially jeopardising the integrity of models utilised across various sectors, including retail, logistics, and finance.
Concerns regarding national security have also surfaced, with experts warning that inadequate security measures can expose proprietary systems to theft, highlighting a notable breach at OpenAI as a precedent. Should AI models be stolen, they risk being reverse-engineered or sold, undermining significant investments and eroding trust within the industry.
An AI model is a mathematical system trained on data so that it can recognise patterns and make decisions, serving purposes such as identifying objects in images or generating text.
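For readers less familiar with the term, the minimal sketch below, written in Python with scikit-learn, shows what “trained on data to recognise patterns” can mean in practice. The toy data and labels are invented purely for illustration and have no connection to the research described here.

    # Illustrative only: a tiny "AI model" that learns a pattern from labelled data.
    # Assumes scikit-learn is installed; the features and labels are invented for this sketch.
    from sklearn.linear_model import LogisticRegression

    # Toy training data: two features per example, with labels 0 ("cat") and 1 ("dog").
    X_train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
    y_train = [0, 0, 1, 1]

    model = LogisticRegression().fit(X_train, y_train)   # the "training" step
    print(model.predict([[0.85, 0.75]]))                 # applies the learned pattern: [1]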
The method developed by the North Carolina State University researchers involves positioning a probe near a Google Edge Tensor Processing Unit (TPU) to analyse electromagnetic signals and extract vital information regarding the model’s structure. This type of attack does not require direct access to the system, raising serious concerns for the security of AI intellectual property. Aydin Aysu, a co-author of the research and associate professor of electrical and computer engineering at North Carolina State University, emphasised the significance of safeguarding these AI models. He stated in a blog post, “AI models are valuable; we don’t want people to steal them. Building a model is expensive and requires significant computing resources. But just as importantly, when a model is leaked, or stolen, the model also becomes more vulnerable to attacks — because third parties can study the model and identify any weaknesses.”
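The researchers’ exact signal-processing pipeline is not detailed here, but the broad idea of matching a captured electromagnetic trace against reference signatures of candidate model configurations can be sketched roughly as follows. Every name, trace, and configuration in the sketch is hypothetical; it is a conceptual illustration of signature matching, not a reproduction of the team’s method.

    # Hedged sketch of electromagnetic signature matching: compare a captured trace
    # against reference traces for known layer configurations and keep the closest match.
    # All data and names are hypothetical; real traces would come from a physical probe.
    import numpy as np

    def best_matching_config(captured_trace: np.ndarray,
                             reference_traces: dict[str, np.ndarray]) -> str:
        """Return the candidate configuration whose reference trace correlates
        most strongly with the captured electromagnetic trace."""
        scores = {}
        for config_name, ref in reference_traces.items():
            n = min(len(captured_trace), len(ref))
            # Pearson correlation between the captured trace and the reference trace.
            scores[config_name] = np.corrcoef(captured_trace[:n], ref[:n])[0, 1]
        return max(scores, key=scores.get)

    # Hypothetical usage with synthetic traces standing in for probe measurements.
    rng = np.random.default_rng(0)
    references = {
        "conv_3x3_64_filters": rng.normal(size=1000),
        "conv_5x5_32_filters": rng.normal(size=1000),
    }
    captured = references["conv_3x3_64_filters"] + 0.1 * rng.normal(size=1000)
    print(best_matching_config(captured, references))  # -> "conv_3x3_64_filters"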
In light of these findings, businesses may need to reconsider their approach to AI processing devices. According to tech adviser Suriel Arellano, “Companies might move toward more centralized and secure computing or consider less theft-prone alternative technologies.” However, he posited that a more probable scenario involves companies that derive substantial benefits from AI, particularly those operating in public settings, investing heavily in enhanced security measures.
Despite the vulnerabilities highlighted in the findings, AI also presents opportunities for bolstering security. Artificial intelligence is increasingly used to augment cybersecurity operations, enabling automated threat detection and faster incident response through advanced pattern recognition and data analysis. Lenovo CTO Timothy E. Bates underscored this point, describing how machine learning systems can help teams predict and counter emerging threats.
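As a rough illustration of what automated, pattern-based threat detection can look like in code, the sketch below trains a simple anomaly detector on hypothetical traffic features and flags an outlier. The features, values, and library choice (scikit-learn’s IsolationForest) are assumptions made for the example, not a description of any vendor’s system.

    # Generic illustration of ML-based anomaly detection for threat monitoring.
    # Assumes scikit-learn; the traffic features and values are invented for this sketch.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    # Hypothetical features per network event: [bytes transferred, requests per minute].
    normal_traffic = rng.normal(loc=[500, 20], scale=[50, 5], size=(200, 2))

    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

    suspicious_event = np.array([[5000, 300]])   # far outside the learned pattern
    print(detector.predict(suspicious_event))    # -1 flags the event as anomalous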
Source: Noah Wire Services