On December 17, 2024, the European Data Protection Board (EDPB) released Opinion 28/2024, addressing the complexities of data protection in the context of artificial intelligence (AI) models. The opinion, issued at the request of the Irish supervisory authority, covers several critical areas: the anonymity of AI models, the appropriate legal basis for processing personal data, and the consequences of unlawful processing during the development of these technologies.

A significant aspect of the EDPB's opinion is the evaluation of AI models trained on personal data. The Board asserts that such models cannot be assumed to be anonymous in every case. For a model to qualify as anonymous, the likelihood of personal data being extracted from it, whether directly or indirectly, must be insignificant. The EDPB emphasizes that this determination requires a thorough, case-by-case analysis accounting for all means reasonably likely to be used by the data controller or others to re-identify individuals. The opinion also lists methods controllers can rely on to demonstrate anonymity, such as minimising the collection of personal data during training and hardening the model against extraction attacks.

In addition to examining anonymity, the opinion elaborates on the use of legitimate interest as a legal basis for processing personal data in the development and deployment of AI models. This assessment rests on a three-step test.

First, the controller must identify a legitimate interest. According to the EDPB, an interest qualifies as legitimate only if it meets three criteria: it must be lawful, clearly articulated, and real and present rather than speculative. Examples include developing an AI-driven conversational agent, enhancing threat detection capabilities, or identifying fraudulent activities. The Board underscores that legitimacy must be assessed carefully within the specific context of the processing.

The second step assesses whether the processing of personal data is necessary to pursue the identified legitimate interest. This means determining whether the interest could reasonably be achieved without processing personal data, in line with the principle of data minimisation. The Board stresses that the processing must be proportionate to the legitimate interest pursued.

The final step is a balancing test that weighs the legitimate interest against the fundamental rights and freedoms of the data subjects. In conducting this analysis, the EDPB urges controllers to examine the interests of the data subjects themselves, the potential positive and negative consequences of the processing, and the data subjects' reasonable expectations regarding the use of their personal information. Where the interests of data subjects override the legitimate interest, controllers are advised to implement mitigating measures such as enhanced transparency, pseudonymisation, and data minimisation.

The EDPB also highlights the ramifications of unlawful processing during the development of AI models. It outlines three scenarios pertaining to the status of personal data retained within an AI model:

  1. Where the same controller retains personal data in the model and processes it for a different purpose, the lawfulness of that subsequent processing depends on the legal basis of the initial processing and must be assessed case by case.

  2. Where a different controller processes personal data retained in the model, that controller must assess whether the data was initially processed unlawfully, as this may affect the lawfulness of its own use of the model.

  3. Lastly, should an AI model undergo anonymisation after unlawful processing, the GDPR does not apply to its subsequent operation, although any personal data collected during deployment remains within the framework of GDPR compliance.

The opinion also sets out best practices for businesses deploying AI models: demonstrating anonymity through robust data protection measures, conducting the three-step legitimate interest assessment thoroughly, and implementing adequate mitigating measures to reduce risks to data subjects. Maintaining comprehensive documentation of processing activities and ensuring ongoing GDPR compliance are likewise advocated as essential practices for promoting responsible AI innovation while safeguarding the rights of data subjects.

Source: Noah Wire Services