A study conducted by Software AG, a Germany-based multinational software corporation, offers significant insight into how artificial intelligence (AI) is being adopted within business environments. The survey, which covered 6,000 knowledge workers in the U.S., U.K., and Germany, highlights the substantial prevalence of "Shadow AI" among employees, a term that refers to the use of unsanctioned or ad-hoc AI tools in professional settings.
The findings indicate that 75 percent of knowledge workers engage with AI on a daily basis. Notably, nearly half of those surveyed, 46 percent, said they would continue to use personal AI tools regardless of any organisational prohibitions. This presents a pressing challenge for businesses as they seek to balance the productivity advantages these tools offer against the risks associated with their use, including data leakage, cyberattacks, and regulatory violations.
Steve Ponting, Director at Software AG, commented on the current landscape, stating, "If 2023 was a year of experimentation, 2024 will be defined as the year that GenAI took hold." He emphasised the need for businesses to proactively manage the implications of increasing AI usage, highlighting that, "As usage increases, so does the risk of cyber attacks, data leakage, or regulatory non-compliance. Consequently, business leaders need to have a plan in place for this before it’s too late."
The report also sheds light on workers' perceptions of AI. A significant portion of respondents see these tools not merely as conveniences but as vital components of their careers, with 47 percent believing that AI can hasten their promotions. The research also identified a notable "AI utility gap": 53 percent of employees preferred personal AI tools for the greater independence they offer, while 33 percent said their organisations' IT departments did not supply adequate tools.
The study highlighted several key concerns and behaviours among workers:
- 72 percent voiced cybersecurity apprehensions
- 70 percent pointed to data governance issues
- Just 27 percent regularly conducted security scans on their tools
- Only 29 percent reviewed data usage policies
In light of these findings, the report suggests that organisations must focus on integrating AI safely into their operations by providing employees with appropriate tools and robust training programmes, thereby minimising risks while maximising AI's benefits. J-M Erlendson, Global Evangelist at Software AG, stated, “Shadow AI is not going anywhere, but it is supercharging the operational chaos already engulfing many organizations.” He advocated establishing a transparent framework around processes, together with an understanding of employees' tool preferences and training needs, as crucial steps toward incorporating Shadow AI effectively.
The report indicates that reliance on AI tools in the workplace is expected to grow, with experts predicting that as many as 90 percent of workers will depend on these technologies in the near future. Companies that fail to adapt may find themselves more vulnerable to data breaches, cybersecurity threats, and exacerbated operational issues.
Source: Noah Wire Services