This story was originally published by CalMatters.

House Republicans moved to cut off artificial intelligence regulation by the states before it can take root, advancing legislation in Congress that, in California, would make it unlawful to enforce more than 20 laws passed by the Legislature and signed into law last year.

The moratorium, bundled into a sweeping budget reconciliation bill this week, also threatens 30 bills the California Legislature is currently considering to regulate artificial intelligence, including one that would require reporting when an insurance company uses AI to deny health care and another that would require AI makers to evaluate how the technology performs before it is used to make decisions about jobs, health care, or housing.

The California Privacy Protection Agency sent a letter to Congress Monday that says the moratorium “could rob millions of Americans of rights they already enjoy” and threatens critical privacy protections approved by California voters in 2020, such as the right to opt out of businesses’ use of automated decisionmaking technology and the right to transparency about how their personal information is used.

If passed, the law would stop legislative efforts in the works nationwide. Lawmakers in 45 states are considering or have considered nearly 600 draft bills to regulate artificial intelligence this year, according to the Transparency Coalition, a group that tracks AI policy efforts by state lawmakers and supports legislation to regulate the technology.

The measure was introduced by Congressman Brett Guthrie, a Republican from Kentucky and chair of the House Energy and Commerce Committee, who said the legislation is necessary to resolve a patchwork of state regulation. On Wednesday, members of that committee voted 30-24 along party lines to approve the budget reconciliation bill that includes the moratorium on state AI regulation. It now advances to the House floor and potentially the Senate, where some observers say it faces an uphill battle against rules that limit policy changes in budget proposals.

As written, the moratorium would lift after 10 years. But it would have plenty of impact in the meantime, said Ben Winters, an attorney for the Consumer Federation of America. In California, he thinks the legislation could halt efforts by the Privacy Protection Agency to regulate automated decisionmaking, prevent enforcement of laws protecting voters from deepfakes, and short-circuit draft bills aimed at protecting people from discrimination and from landlords who use AI to raise rents.

“If this bill were to pass, California couldn’t protect its citizens from exactly those harms,” he said.

Companies and lobbyists are attempting to use Washington, D.C. to undermine California’s legislative leadership on AI, said state Sen. Josh Becker, a Democrat from Menlo Park, in the heart of Silicon Valley. Becker has authored or coauthored a number of bills regulating AI, including one signed into law that requires AI makers to provide tools so consumers know when generative AI is in use and another that seeks to prevent health insurance companies from unfairly using AI to deny people health care.

“This is an effort to tell people when something was created by AI, and so if this gets delayed for a year or two or 10 it’s going to have really negative consequences,” he said.

What’s unclear, he said, is exactly what regulation is covered by the moratorium. Would it, for example, wipe out privacy protections that Californians enjoy, and which were targeted by Congressional legislation last year? And how would it affect a bill Becker authored that gives people a way to quickly delete personal information collected by data brokers, set to go into effect next January?

“If they preempt that, it’s really negative for the country,” he said. “We’re [California] big enough that we can influence the country on our own, but if they preempt what we’re doing then it’s up to the federal government who has been unable to act on these issues.”

Momentum for AI harms — and, possibly, for curbs on regulation

For all the worry, the moratorium is dead on arrival if it reaches the U.S. Senate, said Gus Rossi, director of public policy and strategy at Omidyar Network, which funds public interest AI projects and tracks AI regulation.

That’s because a Senate rule known as the Byrd rule requires that budget reconciliation bills be related to fiscal matters, and, in Rossi’s reading at least, a 10-year moratorium on AI regulation doesn’t fit that definition. But Rossi still thinks people should take it seriously, arguing that it’s an attempt by House Republicans to establish a marker for their preferred approach to AI legislation, and a sign of things to come.

“The action is in the states, not D.C.,” he said. “That’s why some people in D.C. are trying to stop states … particularly California, who’s leading the pack.”

If this bill or a similar one passes in the future, Rossi expects it would be challenged in court and would have a chilling effect on state lawmakers’ efforts to regulate AI.

It’s unclear whether it’s legal for the federal government to make a blanket moratorium on state regulation, said Winters, who worked in the U.S. Department of Justice during the Biden administration.

He agrees that the Byrd rule means the bill is unlikely to pass if it reaches the U.S. Senate, though Republicans may connect it to a $500 million plan to invest in AI for federal agencies and argue that it’s essential to limit state regulation in order to carry out certain budget provisions.

The House bill makes exceptions for states to continue enforcing some laws related to AI, such as laws that enable more use of AI or that are intended to improve government efficiency. It’s reasonable to interpret that exception to mean states like California could continue enforcing privacy law if this bill passed, said Amba Kak, codirector of The AI Now Institute, a research and equitable AI advocacy organization. But that’s risky.

“We can’t count on the fact that courts will see it this way, especially in the context of an otherwise sweeping moratorium with the clear intention to clamp down on AI-related enforcement,” she said.

A House AI task force spent years discussing areas of bipartisan agreement and possible bills to regulate AI, New York Democrat Alexandria Ocasio-Cortez said in a hearing about the moratorium, but Congress was unable to pass any of that legislation. In that time, people have died by suicide after interactions with chatbots, and kids and teens have been harmed by sexually exploitative deepfakes, so states stepped in: Utah now requires AI chatbots to protect the private information of people seeking mental health care, and New York requires chatbots to include a protocol for when someone expresses a desire to self-harm.

“All of these protections are protections that Congress refuses to take up, refuses, and so states are taking up this responsibility,” Ocasio-Cortez said. “Let states protect people. A moratorium is a deeply dangerous idea at this moment.”

Congresswoman Doris Matsui, a Democrat from the Sacramento area, echoed Ocasio-Cortez at the hearing, saying, “We can’t shoot ourselves in the foot by stopping the good work states have done and will continue to do.”

Supporters of the moratorium point to a different sort of harm if it doesn’t pass. A patchwork of state AI regulations “is the fastest way to secure Chinese dominance of AI,” said Jay Obernolte, a Republican from California and co-chair of the House AI task force. He supports a moratorium and federal preemption of state law, and said that if Congress fails to act, the people hurt most will be entrepreneurs who can’t afford to comply with regulatory regimes that differ from state to state.

“The most destructive thing is if there’s fear out there that every few years as the winds of political fortune shift, the rules governing the use of AI completely change,” he said during the hearing.

Broader pushback against AI regulation

The proposed moratorium is in line with efforts by President Donald Trump and Vice President J.D. Vance to prevent regulation of AI; both say such regulations will stifle innovation. A White House plan to promote growth of the AI industry, and likely to reduce regulation, is due out by this summer. Companies like Amazon, Google, and Meta, along with big businesses that use AI, have lobbied in Sacramento and Washington, D.C. to prevent regulation of the technology.

Guthrie’s proposal comes a few days after Sen. Ted Cruz, a Republican from Texas, pushed for “light touch” AI regulation to ensure the United States maintains AI supremacy over other nations and to, in Cruz’s words, “prevent needless state over-regulation.”

The intent of Guthrie’s bill, Winters believes, is to send a signal to tech companies and to open the door to future legislation if the budget reconciliation bill fails to pass. The move is consistent with Cruz’s statement last week and with efforts to remove red tape for data center projects on federal land.

“I’d describe this as … explicitly saying we are supporting the AI companies more than the American people,” he said. “We’re seeing an explicit turn toward a deregulatory state.”

Federal lawmakers have steadily increased the number of AI-related bills they propose in recent years, but they have passed relatively few of them into law, according to the AI Index report. Of the more than 220 bills proposed last year, only four passed.

By contrast, state lawmakers passed more than 130 bills to regulate AI last year. California passed 22 bills last year, more than any other state, and attempted to harmonize its rules with the European Union’s AI Act and other U.S. states. The 2024 State of State Tech Policy report found a 163% increase in tech policy proposals by state lawmakers last year compared to 2023. That trend is driven by one-party control in the vast majority of state houses across the country.

The adage that states are the laboratories of democracy still holds, said Scott Brennen, a coauthor of the State of State Tech Policy report, so shutting down their ability to test what they can cover and to try out different approaches doesn’t seem like a good idea, and it could undercut the federal government’s ability to make better policy. Because AI is being integrated into an ever-wider range of tools and platforms, Guthrie’s moratorium appears to apply broadly, he added, including to social media platforms, ongoing state efforts to protect children online, and data privacy protections that address automated decisionmaking.

“I don’t necessarily think state regulation of AI is always the best course of action; there are definitely areas like consumer data protection where it would be better if the federal government took the lead, but the federal government isn’t taking the lead,” he said.