In July 2024, the finalised text of the European Union's AI Act was published in the EU's Official Journal, the culmination of a four-year legislative process that makes the Act the world's first binding regulation on artificial intelligence. Grace Nelson, a technology and telecommunications analyst with a background in Media and Communication Governance from LSE, has discussed the Act's considerable implications for the EU and beyond. Since its proposal in 2021, the AI Act has sparked similar initiatives in at least eight other nations, including Canada, Brazil, Mexico, and Vietnam, all of which adopt the EU's risk-based regulatory approach.
The risk-based framework central to the AI Act assigns tiered obligations to AI developers and deployers according to the potential risks their technologies pose to the public. This structure aims to protect consumers by addressing foreseeable harms linked to AI usage, including economic, social, and civic risks. However, despite significant advances in understanding artificial intelligence since the mid-20th century, consensus on how to regulate such technologies effectively has emerged only in recent years.
The AI Act reflects a broader effort by the EU to ensure consumer protection across its regulatory frameworks, following foundational laws such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA). Both the AI Act and its predecessors focus on minimising potential harms, yet critics argue that this approach covers only predictable risks, leaving unknown or emergent threats that technologies may pose in future contexts unaddressed.
As the European Commission embarks on a mission to encourage the adoption of AI and other emerging technologies, such as quantum computing, the limitations of a risk-based regulatory approach become increasingly evident. While a principles-based governance model would allow regulation to adapt to ill-defined harms, the AI Act instead confines itself to a finite array of restrictions, potentially stifling innovations that could yield positive societal outcomes.
The Act defines categories of unacceptable risk, prohibiting certain use cases such as social scoring systems and real-time biometric identification. However, Nelson highlights that the Act predominantly revolves around compliance measures, requiring AI developers to document their adherence to the law while, in her view, lacking the means to enforce significant penalties for breaches. This regulatory environment may inadvertently incentivise risk-taking that stays within compliance limits yet goes substantively unregulated.
The regulatory framework’s reliance on an individualistic model of risk and consumer autonomy further constrains its effectiveness. While the Act acknowledges systemic risks, it often fails to address collective harms that may emerge from algorithmic discrimination, leaving significant public policy challenges unaddressed. Citing Julie Cohen's insights, Nelson suggests that the legislation's method may exclude uncertainties from legal consideration, contributing to a landscape in which the dominant forces of the digital economy can exploit regulatory vacuums.
International reactions to the EU's regulatory framework reveal contrasting approaches to AI governance. Countries such as the United Kingdom and Singapore, for instance, favour voluntary governance mechanisms that emphasise economic growth over binding safety regulations. In light of these developments, there is a growing call for a more positive vision of technological regulation, one aligned with public investment and equitable technological capabilities.
Stakeholders from leading technology firms, such as Meta, are advocating for the rollback of existing regulations to promote innovation, indicating a push for greater autonomy in shaping technological trajectories amidst the EU's regulatory efforts. This backdrop creates a landscape ripe for discussion about the objectives of innovation and who ultimately benefits from technological advancements.
As the EU continues to integrate industrial policies with digital regulations, it faces an opportunity to assess the impact of these policies on public welfare. The urgent question of who innovation is meant to benefit remains open; if the EU aspires to champion justice for its constituents, a significant re-evaluation of its regulatory alignment and policy frameworks may be necessary. The future of AI governance, alongside that of other emerging technologies, hinges on this alignment, with the potential to redefine the digital landscape in service of a more equitable society.
Source: Noah Wire Services