In a significant advancement for the regulation of artificial intelligence, the European Union's Artificial Intelligence (AI) Act entered into force on August 1, 2024, establishing a comprehensive legal framework for AI systems across the EU. This pioneering legislation categorises AI systems into four risk levels (minimal, limited, high, and unacceptable), each subject to different compliance requirements. For businesses operating within the EU or engaging with EU-based AI services, understanding these regulatory changes is crucial.
The AI Act distinguishes minimal-risk systems, such as spam filters and AI-enabled games, from those deemed high-risk, such as AI-based medical software and recruitment tools. While minimal-risk systems are largely exempt from strict obligations, high-risk systems must meet stringent standards, including robust risk-mitigation measures, high-quality datasets, clear user information, and human oversight. Systems in the unacceptable-risk category are banned outright; prohibited practices include subliminal manipulation techniques and emotion inference in workplace settings.
Among the notable stipulations of the AI Act, providers of general-purpose AI (GPAI) models must maintain thorough technical documentation, respect EU copyright law, and publish summaries of the content used to train their systems. Providers must also ensure transparency in interactions with users, clearly informing them when they are engaging with an AI system. Non-compliant companies face substantial penalties of up to €35 million or 7% of global annual turnover, whichever is higher, for violations linked to prohibited practices.
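To make that penalty ceiling concrete, the short sketch below illustrates how the "whichever is higher" rule scales with company size. This is a minimal illustration only; the function name and the example turnover figure are hypothetical and not drawn from the Act itself.

```python
# Illustrative sketch of the AI Act's penalty ceiling for prohibited practices:
# the higher of a flat €35 million or 7% of global annual turnover.
# Function name and example figures are hypothetical.

def max_prohibited_practice_fine(global_turnover_eur: float) -> float:
    """Return the upper bound of the fine for prohibited-practice violations."""
    FIXED_CAP_EUR = 35_000_000   # flat €35 million ceiling
    TURNOVER_RATE = 0.07         # 7% of global annual turnover
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_turnover_eur)

# Example: for a firm with €1 billion in global turnover, the 7% figure
# (€70 million) exceeds the flat cap, so it sets the maximum fine.
print(f"€{max_prohibited_practice_fine(1_000_000_000):,.0f}")  # €70,000,000
```

In other words, the flat €35 million cap binds only for smaller firms; for any company with global turnover above €500 million, the 7% figure becomes the operative ceiling.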
In parallel, discussions surrounding AI regulation in the United Kingdom are intensifying. Following the King’s Speech on July 17, 2024, Prime Minister Sir Keir Starmer has signalled a shift towards more comprehensive legislation governing powerful AI models. Although details of the forthcoming AI Bill remain undisclosed, reports suggest two key focus areas: turning the existing voluntary agreements between the tech sector and the government into binding law, and granting greater autonomy to the UK's recently established AI Safety Institute (AISI), which was launched to address risks and vulnerabilities in AI models through rigorous testing and research.
The UK has previously hosted significant AI summits. At the AI Safety Summit at Bletchley Park in November 2023, tech businesses and government representatives signed non-binding agreements on risk-testing new models before market release, and the follow-up AI Seoul Summit, co-hosted with South Korea in May 2024, continued this pattern of voluntary commitments to the responsible development of AI technologies.
As the UK government prepares to set out the specifics of its AI regulatory framework, businesses are urged to assess the implications of both the EU AI Act and forthcoming domestic regulations. The government’s “AI Opportunities Action Plan”, which explores how AI can catalyse economic growth, marks an initial step towards a formal strategy.
For enterprises navigating this evolving landscape, seeking legal advice is advisable to ensure compliance with both the EU’s AI Act and the anticipated UK regulations. As developments unfold, including the expected unveiling of the AI Bill and a subsequent public consultation, companies will need to remain vigilant and adaptable to the shifting regulatory environment surrounding artificial intelligence.
Source: Noah Wire Services