A newly introduced bill, the RAISE Act, proposes specific parameters under which AI models would be subject to regulatory oversight. The bill targets models that require substantial computational resources to train, setting the threshold at more than 10^26 floating-point operations (FLOPs), and further stipulates that training costs must exceed $100 million. For context, GPT-4 is estimated to have required 10^25 FLOPs for training, a tenth of the proposed compute threshold.
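The coverage test as described amounts to a simple conjunction of two thresholds. The sketch below is illustrative only: it assumes both conditions must hold (the draft text is not yet public), and the function and constant names are hypothetical rather than drawn from the bill.

```python
# Illustrative coverage check for the proposed oversight regime.
# Assumption: a model is covered only if it exceeds BOTH thresholds.
FLOP_THRESHOLD = 1e26          # training compute, in floating-point operations
COST_THRESHOLD = 100_000_000   # training cost, in US dollars

def is_covered(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model would fall under the proposed oversight."""
    return training_flops > FLOP_THRESHOLD and training_cost_usd > COST_THRESHOLD

# GPT-4's estimated 10^25 FLOPs is a tenth of the compute threshold,
# so it would fall outside the regime regardless of cost.
print(is_covered(1e25, 100_000_000))   # False
print(is_covered(2e26, 150_000_000))   # True
```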

The bill, still in its early stages, is expected to undergo many revisions as it draws attention from stakeholders across the industry. Hugging Face, which has publicly opposed similar regulations in the past, is one notable critic. A company spokesperson said, “While we can’t comment specifically on legislation that isn’t public yet, we believe effective regulation should focus on specific applications rather than broad model categories,” underscoring the push for application-level rather than model-level rules.

Scott Kohler, a scholar at the Carnegie Endowment for International Peace, argued that the debate would benefit from a clearer accounting of the risks at stake: “There’s significant disagreement in the space, but I think debate around future legislation would benefit from more clarity around the severity, the likelihood, and the imminence of harms.”

The conversation has also drawn in political figures, including New York Assembly member Edward Ra, who expressed openness to mandated safety plans for AI companies. Although he had not yet reviewed the new bill's draft, he said, “I don’t have any general problem with the idea of doing that. We expect businesses to be good corporate citizens, but sometimes you do have to put some of that into writing.”

Ra co-chairs the New York Future Caucus with Assembly member Alex Bores; the group aims to unite younger lawmakers around issues that will shape future generations, and their involvement reflects a broader push to take up AI regulation early.

Scott Wiener, the California state senator who sponsored SB 1047, an earlier AI safety bill that passed the legislature but was ultimately vetoed, expressed optimism that his initial effort has sparked further legislative interest. He remarked, “The bill triggered a conversation about whether we should just trust the AI labs to make good decisions, which some will, but we know from past experience, some won’t make good decisions, and that’s why a level of basic regulation for incredibly powerful technology is important.” Wiener intends to keep pushing for regulation in his state, adding, “We’re not done in California. There will be continued work in California, including for next year. I’m optimistic that California is gonna be able to get some good things done.”

The RAISE Act is also expected to expose a familiar contradiction within the industry: many companies advocate for regulation in principle yet resist concrete proposals when they are put forward. Brennan, commenting on the fate of SB 1047, said, “The bill became a referendum on whether AI should be regulated at all. There are a lot of things we saw with 1047 that we can expect to see replay in New York if this bill is introduced. We should be prepared to see a massive lobbying reaction that industry is going to bring to even the lightest-touch regulation.”

As the debate over AI regulation continues to unfold, lawmakers and companies alike are bracing for the next round of proposals and the pushback they are likely to draw.

Source: Noah Wire Services