The Consumer Financial Protection Bureau (CFPB) has released two pieces of guidance clarifying how the Fair Credit Reporting Act (FCRA) applies to contemporary employment practices, particularly those increasingly shaped by artificial intelligence (AI). Consumer Financial Protection Circular 2024-06 and Consumer Financial Protection Circular 2023-03 address the use of algorithmic scores and background dossiers in hiring, promotion, and other employment decisions, cautioning employers about the compliance issues such practices can raise.
FCRA compliance is a threshold question whenever an employer engages a third-party consumer reporting agency (CRA) to obtain consumer reports for employment-related purposes, including hiring, promotions, disciplinary measures, reassignments, and retention decisions. CRAs collect and analyse consumer data, much of it public, and supply detailed reports for a fee; because the statute's reach turns on where the data comes from, employers need to understand when FCRA applies.
To illustrate: if an employer runs a credit check on a job candidate, that check is generally covered by FCRA, because credit reports are obtained from a CRA. Conversely, if an employer uses third-party software to produce internal reports summarising employees' sales performance, FCRA may not be triggered at all, since a report drawn solely from internal company data does not originate with a CRA.
Generative AI, however, complicates the FCRA analysis. Employers are increasingly adopting software that uses AI to generate employee performance reviews, and they may breach FCRA regulations without realising it. Several factors must be weighed, including:
Data Sources: Employers should assess all data sources used to train the AI tools they purchase. If the generated reports draw on publicly available consumer data as well as the employee's own performance records, the vendor could be classified as a consumer reporting agency, bringing FCRA obligations into play.
Vendor’s Intent: Employers should determine whether the vendor intends its reports to be used as “consumer reports.” As outlined in Kidd v. Thomson Reuters Corp., if a vendor’s tool is designed merely to assist in drafting reports that employers subsequently modify before delivery, the vendor may not be considered a CRA.
Regulatory Considerations: AI tools may trigger legal obligations beyond FCRA. Colorado, for example, has enacted its AI Act, which imposes a duty of care on deployers of “high-risk” AI systems to mitigate algorithmic discrimination against state residents.
The CFPB's guidance underscores the need for employers to evaluate carefully how AI and other decision-making tools bear on their FCRA obligations. As firms integrate these technologies into their operations, they should proceed with caution, both to comply with existing regulations and to avoid liability arising from misuse of such tools in employment decisions. Employers are encouraged to consult legal advisors about the scope and limits of their responsibilities under these new advisories.
Source: Noah Wire Services