What the proposed moratorium on state AI rules could mean for you

States would be unable to enforce regulations on artificial intelligence technology for a decade under a plan being considered in the U.S. House of Representatives. The legislation, in an amendment to the federal government's budget bill, says no state or political subdivision "may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems" for 10 years. The proposal would still need the approval of Congress and President Donald Trump before it can become law. A vote on the full budget package is expected this week.
AI developers and some lawmakers say federal action is needed to keep states from creating a patchwork of different rules and regulations across the U.S. that could slow the technology's growth. The rapid growth of generative AI since ChatGPT exploded onto the scene in late 2022 has led companies to fit the technology into as many spaces as possible. The economic implications are significant, as the U.S. and China race to see which country's technology will dominate, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper.
"As an industry and as a country, we need one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers at an April hearing. "But we need one, we need clarity via one federal standard and have preemption to prevent this outcome where you have 50 different standards."
Efforts to limit states' ability to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. "There's been a lot of discussion at the state level, and I would think it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We could approach it at the national level. We can approach it at the state level, too. I think we need both."
Several states have begun regulating AI
The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development, and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to pop up. The biggest focus is not in the U.S. but in Europe, where the European Union has already implemented standards for AI. But states are beginning to get in on the action.
Colorado passed a set of consumer protections last year, set to go into effect in 2026. California adopted more than a dozen AI-related laws last year. Other state laws and regulations often deal with specific issues, such as deepfakes, or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination when AI systems are used in hiring.
"States are all over the map when it comes to what they want to regulate in AI," said Arsen Kourinian, a partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 AI-related proposals, according to the National Conference of State Legislatures. At a House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead of it," he said.
While some states have laws on the books, not all of them have gone into effect or seen any enforcement. That limits the potential short-term impact of a moratorium, said Cobun Zweifel-Keegan, managing director in Washington for the International Association of Privacy Professionals. "There isn't really any enforcement yet," he said.
A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. "The federal government would become the primary and potentially sole regulator around AI systems," he said.
What a moratorium on state AI regulation could mean
AI developers have asked for any guardrails placed on their work to be consistent and streamlined. At a Senate Commerce Committee hearing last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system "would be disastrous" for the industry. Altman suggested instead that the industry develop its own standards.
Asked by Sen. Brian Schatz, a Democrat from Hawaii, whether industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good, but "it's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences." (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis' copyrights in training and operating its AI systems.)
Concerns from companies, both the developers that create AI systems and the "deployers" who use them in interactions with consumers, often stem from fears that states will mandate significant work such as impact assessments or transparency notices before a product is released, Kourinian said. Consumer advocates say more regulation is needed, and that hampering states' ability to act could hurt users' privacy and safety.
"AI is being used widely to make decisions about people's lives without transparency, accountability or recourse; it's also facilitating chilling fraud, impersonation and surveillance," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement. "A 10-year pause would lead to more discrimination, more deception and less control. Simply put, it's siding with tech companies over the people they impact."
A moratorium on specific state rules and laws could result in more consumer protection issues being handled in court or by state attorneys general, Kourinian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. "Time will tell how judges will interpret those issues," he said.
AI's prevalence across industries means states may be able to regulate issues such as privacy and transparency more broadly, without focusing on the technology itself, Susarla said. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. "It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand, we also need to recognize that there can be real consequences," she said.
A lot of policy around the governance of AI systems does happen because of those technology-agnostic rules and laws, Zweifel-Keegan said. "It's also worth remembering that there are a lot of existing laws, and there are potentially new laws that wouldn't trigger the moratorium but would apply to AI systems as long as they apply to other systems as well," he said.
Moratorium draws opposition ahead of House vote
House Democrats have said the proposed moratorium would hinder states' ability to protect consumers. Rep. Jan Schakowsky called the move "reckless" during a committee hearing on AI regulation Wednesday. "Our job right now is to protect consumers," the Illinois Democrat said.
Republicans, meanwhile, contended that state regulations could place too heavy a burden on innovation in AI. Rep. John Joyce, a Republican from Pennsylvania, said at the same hearing that Congress should create a national regulatory framework rather than leaving it to the states. "We need a federal approach that ensures consumers are protected when AI tools are misused, and in a way that allows innovators to thrive."
At the state level, a letter signed by 40 state attorneys general from both parties called on Congress to reject the moratorium and instead create that broader regulatory system. "This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI," they wrote.