California's Controversial AI Safety Bill Could Become Law
California's AI safety bill, SB 1047, has moved one step closer to becoming law, and it's sparking heated debate in the tech industry. The bill, which has already cleared both the California State Assembly and Senate, now awaits a final procedural vote before heading to Governor Gavin Newsom's desk for potential approval.

The Bill at a Glance

SB 1047 proposes several safety measures for AI companies operating in California, requiring them to implement protocols before training advanced AI models. The key provisions include:

  • A mechanism to quickly shut down a model in case of a safety breach.
  • Protection against unsafe post-training modifications.
  • Rigorous testing procedures to evaluate potential critical harm risks.

While the measures seem straightforward, opinions on their impact are divided.

A House Divided

The AI industry is split over SB 1047. OpenAI opposes the bill, citing concerns that it may stifle innovation. Anthropic, initially a critic, has warmed to the legislation after suggesting amendments. Even AI experts are divided. Prominent voices like Andrew Ng and Fei-Fei Li argue the bill’s focus on catastrophic harm risks going too far, potentially hindering innovation, especially in open-source AI development. On the other side, experts like Geoff Hinton support the bill as a necessary step toward AI safety.

Beyond California’s Borders

The bill’s potential influence extends well beyond California. Marketing AI Institute founder Paul Roetzer pointed out on Episode 113 of The Artificial Intelligence Show that the bill could impact any company that does business in California, not just those headquartered there. Given the size of California’s economy, SB 1047 may reshape the AI landscape for businesses across the country.

Corporate America’s Growing Concerns

The uncertainty surrounding AI regulation is already affecting the business world. According to recent SEC filings, 27% of Fortune 500 companies listed AI regulation as a potential risk. Concerns range from higher compliance costs to possible revenue losses, leading some companies to develop their own internal AI guidelines in anticipation of future laws.

Roetzer notes that if SB 1047 passes, it won’t just be AI developers who are affected. “The CMO [for instance] is all of a sudden going to have to care about this law,” he says, underscoring how regulatory changes could ripple through various departments in a company.

Unintended Consequences

If the bill becomes law, it could slow the development of new AI models. Roetzer suggests the additional safety checks and potential regulatory interventions could extend model development cycles from 8-12 months to 18-24 months. To adapt, companies might pivot to releasing smaller, incremental updates rather than unveiling large new models. Additionally, some AI companies may voluntarily join federal initiatives to use government backing as a shield for ongoing development efforts.
The Regulation Dilemma

At the core of SB 1047 is the delicate balance between fostering innovation and ensuring safety. The bill sets out to regulate AI based on model size and training methods, but in an industry evolving as fast as AI, today’s concerns may quickly become outdated. Some experts argue that regulation should target AI applications rather than the models themselves.

AI expert Andrew Ng, in a TIME editorial, compared AI models to electric motors, suggesting that regulating the technology behind AI is less effective than focusing on its applications. "A motor can power a blender, a dialysis machine, or a bomb," Ng explained. "It's more sensible to regulate how the technology is used rather than the technology itself."

What Now?

As SB 1047 approaches its final vote, the debate over how to balance AI innovation with safety is intensifying. The bill, if passed, could set a precedent for AI regulation across the country, potentially reshaping how AI companies operate—not just in California, but globally. Businesses, regulators, and AI developers will be watching closely to see what happens next, as the future of AI safety and innovation hangs in the balance.