The rapidly evolving field of artificial intelligence (AI) has taken centre stage in discussions of technological advancement, particularly around business automation and cybersecurity. As organisations look to incorporate generative AI into their operations, its potential benefits are emerging alongside rising cybersecurity concerns. Infosecurity Magazine has charted the critical developments shaping AI's intersection with cybersecurity, highlighting significant events of 2024 and the trends projected to follow.
A pivotal announcement came from the US National Security Agency (NSA), which, in concert with six government agencies from the US and other Five Eyes countries, released new guidance on securing AI deployments. The guidance sets out best practices across the three principal phases of deploying an AI system.
In October 2024, the White House issued a National Security Memorandum on AI, outlining essential actions for the federal government. The document aims to ensure the safe, secure, and trustworthy development of AI technologies, particularly by countering adversarial nations' advances in the AI domain.
Amid these governmental efforts, a new threat actor known as NullBulge emerged. The hacktivist group made headlines in spring 2024 after claiming responsibility for attacks on AI-centric games and applications. Notably, in July it announced the theft and leak of over a terabyte of data from Disney's internal Slack channels. The group framed its activities as a defence of artists worldwide against the encroachment of AI, though some analysts have suggested this claim may mask ulterior motives.
On the international front, the UK signed the Council of Europe AI Convention on September 5, 2024. The agreement, adopted by the Council of Europe's 46 member states earlier that year, is the first legally binding international framework on AI, aiming to govern its development and safeguard communities from potential harms arising from AI technologies.
In a joint publication, Microsoft and OpenAI released research confirming that generative AI, particularly large language models (LLMs) such as ChatGPT, has been weaponised by nation-state threat actors. The research found that groups from Russia, China, North Korea, and Iran have begun using generative AI to enhance social engineering attacks and to probe unsecured devices and accounts. While these actors have not yet been observed deploying novel attack techniques, their current methods mark a significant shift in the cyber-threat landscape.
In a separate incident reflecting AI's darker potential, a North Carolina man was charged in September with AI-generated music fraud on major streaming platforms including Spotify and Apple Music. The case marks the first known criminal charge associated with AI-generated music: the accused allegedly produced hundreds of thousands of songs and fraudulently streamed them through automated accounts, or bots.
Looking ahead, Google Cloud researchers have warned of an escalation in AI-related threats in 2025. They predict increasingly sophisticated social engineering schemes, including phishing campaigns, as cybercriminals refine their use of AI and LLMs. They also anticipate growing exploitation of deepfake technology by cybercriminals and espionage actors for identity theft and fraud.
These developments illustrate the double-edged nature of AI's integration into business practices: it offers efficiency and innovation while posing profound cybersecurity risks that stakeholders must navigate in an evolving digital landscape.
Source: Noah Wire Services