Under the California Consumer Privacy Act (CCPA), the California Privacy Protection Agency (CPPA) has introduced proposed regulations governing the use of automated decision-making technology. This follows growing concerns about how businesses employ such systems, which analyse or predict individuals' work capabilities, health, and preferences. The CCPA mandates that consumers have the right to opt out of these technologies and to access information regarding their implementation.
The proposed regulations have evolved from earlier drafts, with significant modifications intended to clarify and restrict the definition of what constitutes "automated decision-making technology." Initially, the CPPA suggested an expansive interpretation, wherein any system that "in whole or in part" facilitated human decision-making would fit this classification. The latest proposal narrows this to technology that either replaces human decision-making or substantially influences it. An illustrative example in the draft clarifies that a scoring tool counts as a key factor in a significant decision only if it is a primary component of that decision.
The CPPA has also laid out specific requirements for risk assessments associated with automated decision-making systems. Companies are now expected to conduct granular risk assessments and, where such systems are trained on personal data, to share the necessary information, presented in "plain language", with other entities that utilise their AI technologies.
Where a significant decision affects an individual, such as a choice regarding education or employment, the proposed regulations stipulate a requirement for prior notification. This advance notice is intended to inform individuals before any significant decision is made, aligning with the CCPA's original language concerning "legal or similarly significant" impacts.
These changes also require updates to company privacy policies. Organisations will be obligated to explicitly inform users of their right to opt out of any automated decision-making that could lead to significant outcomes, and to explain how individuals can access information regarding these automated systems.
The proposed regulations mark a response to initial apprehensions surrounding automated decision-making and are expected to have considerable implications for the application of AI technologies across various sectors. Companies, particularly those operating within human resources, are advised to familiarise themselves with these forthcoming obligations, which align closely with existing regulatory frameworks, such as New York City's AI law. The regulatory landscape is evolving steadily, accommodating the rapid integration of AI in business practices while striving to safeguard consumer rights and privacy.
Source: Noah Wire Services