As businesses increasingly embrace artificial intelligence (AI) to enhance their operations and maintain a competitive edge, they face emerging challenges associated with the unregulated use of these technologies. A particular area of concern is the phenomenon known as 'shadow AI', which refers to the use of AI tools within an organisation without the approval or oversight of IT or security departments.
Shadow AI takes many forms, from a developer using ChatGPT to assist with coding tasks to a salesperson adopting an AI-powered meeting transcription tool without going through official channels. Because these tools operate without proper oversight or security controls, their spread poses significant security risks and can jeopardise sensitive company information.
Identifying and managing shadow AI poses unique challenges. Unlike traditional shadow IT, where unauthorised applications can often be pinpointed through network monitoring—tracking IP addresses and domain names—shadow AI tools often integrate seamlessly into approved business applications through copilot features. This means they could easily escape detection since they share the same IP address or domain with legitimate applications.
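To illustrate that detection gap, the sketch below contrasts the two cases using domain-based monitoring of the kind described above. It is purely illustrative; the domain names and approved list are hypothetical and not drawn from any specific monitoring product.

```python
# Illustrative only: why domain-based monitoring misses embedded copilots.
# The domain names and approved list below are hypothetical examples.

KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "transcribe-ai.example.com"}
APPROVED_APP_DOMAINS = {"crm.example-saas.com", "mail.example-saas.com"}

def classify_outbound_request(domain: str) -> str:
    """Classify an outbound request the way simple network monitoring would."""
    if domain in KNOWN_AI_DOMAINS:
        return "standalone shadow AI - detectable by domain"
    if domain in APPROVED_APP_DOMAINS:
        # An AI copilot embedded in this approved app uses the same domain,
        # so its traffic is indistinguishable from sanctioned use.
        return "approved app traffic - embedded copilot not visible"
    return "unknown domain - flag for review"

print(classify_outbound_request("claude.ai"))
print(classify_outbound_request("crm.example-saas.com"))
```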
Moreover, employees might opt for standalone AI tools linked to personal accounts. Although these are not directly connected to corporate infrastructure, they still raise concerns: employees may inadvertently enter confidential data into them, increasing the risk of data leaks.
The implications of shadow AI are stark. Research has highlighted that approximately 15% of employees may inadvertently expose company data to such tools. As generative AI models learn from user interactions, there is a potential risk that sensitive information could be disseminated to unauthorised users or be misrepresented, leading to misinformation.
To address these challenges, companies such as Reco are emerging with solutions aimed at detecting and cataloguing shadow AI usage across customers' SaaS environments. Reco combines several methods, including Active Directory integration to compile a list of approved applications and analysis of email metadata to identify communications with unapproved tools. Its generative AI module then uses natural language processing to clean the data and match identities to the corresponding applications, producing an inventory of both sanctioned and unsanctioned AI tools.
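As a rough sketch of the email-metadata approach described above (not Reco's actual implementation), one could scan the sender domains in message headers and flag AI vendors that do not appear on the approved-application list. All domains, vendor names, and sample headers here are invented for illustration.

```python
# Hypothetical sketch of flagging unapproved AI vendors from email metadata.
# The vendor list, approved set, and sample headers are invented for illustration.

AI_VENDOR_DOMAINS = {"openai.com", "anthropic.com", "meeting-notes-ai.example.com"}
APPROVED_VENDORS = {"anthropic.com"}  # e.g. compiled from Active Directory app registrations

def extract_sender_domain(from_header: str) -> str:
    """Pull the domain out of a 'From:' value such as 'Team <no-reply@openai.com>'."""
    address = from_header.split("<")[-1].rstrip(">").strip()
    return address.split("@")[-1].lower()

def find_unapproved_ai_senders(from_headers: list[str]) -> set[str]:
    """Return AI vendor domains seen in mail metadata but missing from the approved list."""
    seen = {extract_sender_domain(h) for h in from_headers}
    return (seen & AI_VENDOR_DOMAINS) - APPROVED_VENDORS

sample_headers = [
    "OpenAI <no-reply@openai.com>",
    "HR <hr@example-corp.com>",
    "Notes Bot <alerts@meeting-notes-ai.example.com>",
]
print(find_unapproved_ai_senders(sample_headers))
```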
Once shadow AI tools are identified, Reco provides a detailed analysis: which SaaS applications are in use, which of them incorporate AI assistants, and which users are accessing these tools, along with their permission levels. The system maps connections between applications and helps manage identity and access governance from a centralised platform.
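A minimal way to picture the resulting inventory is a record per discovered tool, as in the sketch below. The field names and sample entries are assumptions made for illustration, not Reco's actual data model.

```python
# Hypothetical shape of an AI-tool inventory entry; not Reco's actual schema.
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    app_name: str                      # SaaS application or embedded assistant
    sanctioned: bool                   # whether it appears on the approved list
    users: list[str] = field(default_factory=list)           # identities seen using it
    permission_level: str = "unknown"  # e.g. read-only, read-write, admin
    connected_apps: list[str] = field(default_factory=list)  # apps it integrates with

inventory = [
    AIToolRecord("crm-copilot", sanctioned=True, users=["a.lee"],
                 permission_level="read-write", connected_apps=["crm.example-saas.com"]),
    AIToolRecord("meeting-notes-ai", sanctioned=False, users=["j.smith"]),
]
print([r.app_name for r in inventory if not r.sanctioned])  # unsanctioned tools
```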
Despite its capabilities, Reco operates in a read-only capacity and does not enforce direct policy changes. For instance, it cannot prevent employees from entering sensitive data into unauthorised applications or block the use of shadow AI tools. Instead, its purpose is to enhance visibility and alert security teams to potential vulnerabilities, enabling them to act on detected risks appropriately.
With the rise of shadow AI, businesses need to weigh the advantages that AI applications offer against the security challenges they introduce. Through tools like Reco, organisations can gain visibility into their SaaS environments and better understand the implications of the AI technologies that have become integral to modern business practice.
Source: Noah Wire Services