Generative AI is rapidly gaining traction as a transformative technology for businesses, promising significant cost reductions and a competitive edge. However, as companies explore the potential of this advanced tool, they are also faced with the necessity of navigating complex risks associated with its implementation. This duality presents both an opportunity and a challenge for professional service firms seeking to integrate AI into their operations.
Sean Clifford, Vice President and Financial Institutions Cyber Lead at BHSI, emphasized the expectations of clients in the professional services sector during a recent discussion with Risk & Insurance. “When operating as a professional service firm, your customers look to you for guidance and rely on your expertise,” Clifford explained, highlighting the essential role of both human and AI-driven support in delivering services. He cautioned about the implications of errors or inaccuracies in the use of AI, which could expose firms to liability risks.
The professional liability connected to generative AI is significant, particularly as it can operate autonomously in providing services. Should an AI system produce erroneous or biased outputs, it may result in harm to individuals or organisations, potentially leading to malpractice claims. For instance, Clifford cited the legal field as a sector particularly vulnerable to these risks. He noted that an AI-powered tool might incorrectly cite case law, while in finance, algorithmic biases could unfairly influence important decisions like loan approvals.
As companies seek to leverage the advantages presented by generative AI, they are urged to incorporate risk assessments into their strategies. Insurers are beginning to evaluate the unique exposures associated with AI, underscoring the importance of understanding industry-specific risks. Clifford advised the integration of human oversight to validate outputs from AI before they reach customers, thereby mitigating overreliance on technology.
Cybersecurity is another critical area requiring attention when adopting generative AI. Clifford pointed out that such technologies could introduce new vulnerabilities that malicious actors might exploit. “The adoption of any new technology, including generative AI, can potentially create new attack surfaces that threat actors can exploit,” he asserted. He elaborated on the phenomenon of prompt injection, where savvy threat actors might manipulate AI chatbots into divulging sensitive information, further complicating the security landscape.
Developers are tasked with anticipating potential vulnerabilities and implementing effective safeguards to prevent unauthorized data disclosures. Moreover, cybercriminals can leverage AI for various illicit purposes, including conducting reconnaissance, enhancing phishing schemes, and exploiting software vulnerabilities.
In their pursuit of innovation, businesses must tread carefully, especially regarding consumer data protection. Clifford warned that, as companies collect data to feed AI models, they risk unintentionally exposing sensitive information. He likened this to “when fishermen cast their nets — other things are bound to get caught up in the process.” With increasing regulations surrounding data privacy, including consumer rights to have their data ‘forgotten,’ firms face the challenge of managing and safeguarding sensitive information within AI systems.
To effectively adopt generative AI while remaining mindful of the associated risks, Clifford recommended organisations take several strategic steps. First, establishing cross-functional committees to oversee AI integration and to frame best practices is pivotal. These committees would be instrumental in evaluating potential AI use cases and guiding pilot projects before broader deployment.
Secondly, organisations should mandate human oversight whenever AI is used, ensuring that there are checks in place to verify AI outputs. Lastly, staying informed about advancements in AI technology and evolving legal requirements is crucial. Collaborating with external stakeholders, such as legal advisors, law enforcement, and industry peers, can help organisations identify trends and emerging risks.
Clifford concluded with a long-term outlook for businesses venturing into AI adoption, asserting that thoughtful and pragmatic implementation is essential. “I believe the companies that take a long-term view will be the most successful,” he stated, emphasising the transformative potential of this technology. As businesses strive to navigate the intersection of opportunity and risk presented by generative AI, a vigilant and structured approach will be crucial to their success.
Source: Noah Wire Services