Oregon Attorney General Ellen Rosenblum has released guidance aimed at assisting businesses in the state as they navigate the increasing integration of artificial intelligence (AI) into their operations. This advice, developed by attorneys from the Oregon Department of Justice, clarifies how existing laws can apply to the rapidly evolving landscape of AI, which has gained prominence in recent years.

Artificial intelligence is recognised for its ability to efficiently perform tasks such as transcribing and summarising large volumes of data. However, it also introduces risks. Notably, Rosenblum highlighted how criminals have exploited AI to perpetrate scams, including generating fake audio that mimics the voice of a supposedly kidnapped person and creating fraudulent videos of celebrities endorsing products they have never backed.

“Artificial Intelligence is already changing the world, from entertainment to government to business,” Rosenblum stated in a recent announcement. She emphasised that, despite the novelty of machine-learning platforms, they are not exempt from existing legal frameworks.

The guidance follows a series of legislative actions in Oregon aimed at establishing safeguards against the misuse of AI. These include the passage of Senate Bill 1571, which requires political campaigns to disclose the use of AI, including deepfakes, when manipulating images, video, or audio to influence voters.

In 2023, Governor Tina Kotek also formed an advisory council to steer the state's initiatives on artificial intelligence and to make recommendations. Furthermore, Rosenblum, along with a coalition of other attorneys general, has advocated for measures to protect children from AI misuse, particularly AI-altered images that could misrepresent children's likenesses or voices.

The guidance outlines that several existing laws in Oregon are applicable to AI operations. The Unlawful Trade Practices Act protects consumers by prohibiting misleading representations. For instance, a business employing chatbots must ensure that the information provided is accurate and not deceptive. Additionally, if a company were to use AI to produce a video falsely depicting a celebrity endorsing its product without consent, it would violate these laws.

Moreover, businesses utilising AI for automatic pricing must adhere to regulatory standards, such as the prohibition on price gouging during emergencies when essential supplies are in high demand. Oregon's consumer privacy legislation also enables consumers to withdraw consent for the use of their data in AI models and to opt out of AI-driven decision-making in areas such as housing and education.

AI models, trained on vast datasets, make predictions based on patterns, which raises concerns about potential bias. Rosenblum pointed out that protections against discrimination remain robust whether decisions are made by machines or by humans. The Oregon Equality Act mandates equitable access to housing and public services, meaning that an AI-driven mortgage approval model that systematically denies loans to qualified applicants based on race or neighbourhood could violate the law.

“The regulation of AI is clearly a work in progress,” Rosenblum remarked. She noted that the guidance may require updates to reflect new legislation anticipated in the 2025 Oregon legislative session, as well as changes in federal law concerning artificial intelligence. The guidance is intended to help businesses weigh the adoption of AI technologies in their operations within Oregon.

Source: Noah Wire Services