The artificial intelligence (AI) landscape is undergoing significant transformation, prompting discussion of the need for improved regulation of the sector. According to industry experts cited by Accounting Today, as AI technology continues to evolve rapidly, there are growing calls for measures that enhance both transparency and accountability in AI systems.

At the forefront of these discussions is the recommendation for clear labeling of AI-generated content. Experts are advocating for mechanisms that allow users to trace the decision-making processes of AI models, along with disclosure of the underlying data and algorithms used. This is particularly critical in sectors such as accounting, where a precise understanding of AI-driven decisions can greatly influence audit quality.

Mike Gerhard, chief data and AI officer at BDO USA, remarked, "An AI regulation that emphasizes transparency in the training of large language models would be highly beneficial." He highlighted that clarity about the training processes, data sources, and methodologies is essential for establishing accountability and trust in AI systems. Gerhard's sentiments reflect a broader consensus on the importance of transparent practices, especially in professions that rely heavily on data integrity.

Responses from industry professionals reveal a preference for regulations that adopt principles-based or risk-based approaches, similar to the EU AI Act. These frameworks focus on safety, fairness, and non-discrimination while still allowing innovation to flourish. Pascal Finette, founder and CEO of Be Radical, emphasized the urgency of addressing ethical concerns linked to AI's potential biases, particularly in tasks involving hiring or credit evaluations. He stated, "Part of this problem is on the vendor side, but part of this ought to be codified (and thus protected) by law."

Despite the growing discourse on governance, many industry leaders cautioned against overly stringent regulations that could hinder innovation while the technology is still in its early stages. Avani Desai, CEO of Schellman, expressed this sentiment, stating, "As further governance emerges, I hope we don't see overly restrictive rules that stifle creativity and progress." Desai advocated for regulations that ensure ethical AI use while promoting innovation through public-private partnerships and feedback mechanisms.

Looking ahead to 2025, experts were uncertain about the specific trajectory of AI regulation. Abigail Zhang-Parker, an accounting professor at the University of Texas at San Antonio, suggested that while the cost of engaging with AI will continue to fall, the industry may also see an increase in AI-related incidents that raise ethical concerns, noting, "AI's capability will continue to evolve."

A significant shift is anticipated in how AI integrates into business workflows, with a rise in autonomous AI agents expected to enhance productivity and efficiency. Jack Castonguay, a Hofstra University accounting professor, articulated concerns about the long-term implications of this trend for employment, suggesting that major accounting firms might reduce hiring or further downsize as they leverage AI capabilities. He noted, "I'm also quite confident we'll see a scandal where a firm misuses AI or subjugates its judgment to AI that leads to a fraud or material error getting through an audit."

In summary, as AI technologies mature, the discourse around regulation is likely to intensify, focusing on the balance between accountability and innovation. The nuances of this ongoing debate will continue to shape the future of AI in business and its impact on the professions it touches.

Source: Noah Wire Services