On November 8, 2024, the California Privacy Protection Agency (CPPA) voted to advance proposed regulations governing automated decision-making technology (ADMT). While the comment period remains open, businesses should begin evaluating how these regulations would affect their operations if finalized in their current form, starting with how the proposal defines ADMT and what that definition implies.
The proposed regulations define automated decision-making technology as any technology that processes personal information to execute, replace, or substantially facilitate human decision-making. This definition is the threshold test businesses must apply when assessing whether their operations fall within the regulations' scope.
A key aspect of the definition is its reliance on personal information, which the California Consumer Privacy Act (CCPA) defines broadly. The CCPA does, however, carve out several categories, such as de-identified or aggregate consumer information, that do not constitute personal information, and it excludes protected health information governed by the Health Insurance Portability and Accountability Act (HIPAA). Understanding these exceptions helps businesses gauge the proposed regulations' reach. For example, technology that assists with claims processing for HIPAA-covered health plans might not fall under the regulations.
The proposed regulations also define what it means to "substantially facilitate human decision-making." As with earlier AI regulations in jurisdictions such as New York City and Colorado, the concept turns on whether a technology's output is a significant factor in a human's decision-making process. For instance, using ADMT to generate a score about a consumer that a human reviewer treats as a primary factor in making a significant decision would fall under this definition. Notably, the regulations refer to "a" primary factor rather than "the" primary factor, so the technology's output need not be the only consideration for the definition to apply.
Profiling also falls squarely within the definition of ADMT under the proposed regulations. Profiling is defined as any form of automated processing of personal information to evaluate personal aspects of an individual, including intelligence, ability, performance at work, and health. In recent years, many employers have adopted devices and applications that could qualify as ADMT under this definition, using them to source, recruit, monitor, and assess the performance of employees and applicants. Examples include dashcams in company fleets intended to promote safety and performance, as well as performance management platforms used to evaluate employee productivity.
The regulations also identify technologies that do not meet the criteria for ADMT, including basic IT functions such as web hosting, data storage, and firewalls, along with common software applications that do not execute, replace, or substantially facilitate decision-making. Businesses should apply these exceptions carefully, because the same tool can fall on either side of the line depending on how it is used. For instance, using a spreadsheet to run regression analyses on managerial characteristics to inform promotion decisions would qualify as using ADMT, while merely tabulating final performance evaluation scores would not.
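To make that distinction concrete, the sketch below contrasts the two spreadsheet-style uses described above in Python. It is a purely hypothetical illustration, not legal guidance: the data, variable names, and modeling choices are invented for the example and are not drawn from the proposed regulations.

```python
# Hypothetical illustration only; the data and variables are invented
# for the example and carry no legal weight.
import numpy as np

# Quarterly performance scores for three employees (rows) over four quarters.
quarterly_scores = np.array([
    [3.1, 3.4, 3.6, 3.8],
    [4.2, 4.1, 4.5, 4.4],
    [2.9, 3.0, 3.2, 3.1],
])

# (1) Merely tabulating final evaluation scores: simple arithmetic with no
# inference about the individual, so on its own it would not be ADMT.
final_scores = quarterly_scores.mean(axis=1)
print("Tabulated final scores:", np.round(final_scores, 2))

# (2) Regression on managerial characteristics to inform promotion decisions:
# the fitted model produces a score that feeds the decision, which is the
# kind of use the proposed regulations would treat as ADMT.
experience_years = np.array([3.0, 7.0, 2.0])   # invented predictor
promoted_before = np.array([0, 1, 0])          # invented outcome
X = np.column_stack([np.ones_like(experience_years), experience_years, final_scores])
coeffs, *_ = np.linalg.lstsq(X, promoted_before, rcond=None)

candidate = np.array([1.0, 5.0, 3.9])          # intercept, experience, final score
promotion_score = float(candidate @ coeffs)    # output facilitating a human decision
print(f"Model-derived promotion score: {promotion_score:.2f}")
```

The contrast is the point of the sketch: the first step only summarizes scores that already exist, while the second generates a new score about an individual that a human reviewer could rely on when deciding whom to promote.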
As regulatory frameworks surrounding AI and automated decision-making continue to evolve, particularly under the CCPA, organizations that employ these technologies must monitor developments closely. The CPPA's proposed regulations could reshape how businesses use AI, making transparency and accountability prerequisites for compliance.
Source: Noah Wire Services