How a proposed moratorium on state AI rules could affect you

A provision in the large federal spending bill now before the U.S. Senate would put the brakes on any state rules and laws governing artificial intelligence. Supporters say the move will help the industry grow and compete with AI developers in China, while critics say it would leave a technology that is fast becoming a major component of our lives without guardrails from anyone except the federal government.
The proposal says no state or political subdivision "may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems" for 10 years. In May, the House narrowly passed the full budget bill, which also includes an extension of the 2017 federal tax cuts and cuts to services such as Medicaid and SNAP.
AI developers and some lawmakers say federal action is needed to keep states from creating a patchwork of different rules and regulations across the U.S. that could slow the technology's growth. Since OpenAI's ChatGPT exploded onto the scene in late 2022, the rapid rise of generative AI has prompted companies to fit the technology into as many spaces as possible. The economic stakes are significant, as the U.S. and China race to see which country's technology will dominate, but generative AI also poses privacy, transparency and other risks to consumers that lawmakers have sought to temper.
"We need, as an industry and as a country, one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers at an April hearing. "But we need one. We need clarity on one federal standard and have preemption to prevent this outcome where you have 50 different standards."
Not all AI companies are backing a moratorium, however. In a New York Times opinion piece, Anthropic CEO Dario Amodei called it "far too blunt an instrument" and said the federal government should instead create transparency standards for AI companies. "Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can decide whether further government action is needed," he wrote.
Efforts to limit states' ability to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. "There has been a lot of discussion at the state level, and I think it's important to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We could approach it at the national level. We can approach it at the state level, too. I think we need both."
Several states have begun regulating AI
The proposed language would bar enforcement of any state AI regulation, including rules already on the books. The exceptions are rules and laws that facilitate AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations have already begun to pop up. The biggest focus is not in the United States but in Europe, where the European Union has already implemented standards for AI. But states in the U.S. have started to act as well.
Colorado passed a set of consumer protections last year, set to take effect in 2026. California adopted more than a dozen AI-related laws last year. Other state laws and regulations often deal with specific issues, such as deepfakes, or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination when AI systems are used in hiring.
"The states are all over the map when it comes to what they want to regulate in AI," said Arsen Kourinian, a partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 AI-related proposals, according to the National Conference of State Legislatures. In a House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead," he said.
While some states have laws on the books, not all of them have taken effect or seen any enforcement. That limits the moratorium's potential short-term impact, said Cobun Zweifel-Keegan, managing director, Washington, D.C., of the International Association of Privacy Professionals. "There isn't really any enforcement yet."
A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. "The federal government would become the primary and potentially sole regulator around AI systems," he said.
What a moratorium on state AI regulation could mean
AI developers have asked for consistent and streamlined guardrails on their work. At a Senate Commerce Committee hearing last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory regime would be "disastrous" for the industry. Altman suggested instead that the industry develop its own standards.
Asked by Sen. Brian Schatz, a Democrat from Hawaii, whether industry self-regulation is enough, Altman said he thought some guardrails would be good, but "it's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences." (Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Kourinian said concerns from companies, both the developers that create AI systems and the "deployers" who use them in interactions with consumers, often stem from fears that states will mandate significant work, such as impact assessments or transparency notices, before a product can launch. Consumer advocates say more regulation is needed, and that hampering states' ability to act could hurt users' privacy and safety.
"AI is being used widely to make decisions about people's lives without transparency, accountability or recourse; it's also facilitating chilling fraud, impersonation and surveillance," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement. "A 10-year pause would lead to more discrimination, more deception and less control. Simply put, it's siding with tech companies over the people they impact."
Kourinian said a moratorium on specific state rules and laws could result in more consumer protection issues being dealt with in court or by state attorneys general. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. "Time will tell how judges will interpret those issues," he said.
The pervasiveness of AI across industries means states might be able to regulate issues such as privacy and transparency more broadly, without focusing on the technology itself, Susarla said. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. "It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand, we also need to recognize that there can be real consequences," she said.
Zweifel-Keegan said much of the policy around the governance of AI systems happens through those technology-neutral rules and laws. "It's also worth remembering that there are a lot of existing laws, and there is a potential to make new laws, that don't trigger the moratorium but do apply to AI systems as long as they apply to other systems," he said.
The proposed 10-year moratorium on state AI laws is now in the hands of the Senate, which has held hearings on AI.
The AI regulation debate moves to the Senate
With the bill now in the hands of the U.S. Senate, and as more people become aware of the proposal, debate over the moratorium has picked up. Senators including Republicans Josh Hawley and Marsha Blackburn have expressed concerns. In the Senate, the measure could be stripped from the budget bill under the so-called Byrd Rule, which prohibits provisions that are not budgetary in nature.
Any version of the bill approved by the Senate would then have to be accepted by the House, where it passed by the narrowest of margins. Even some House members who voted for the bill have said they don't like the moratorium, including Rep. Marjorie Taylor Greene, a key ally of President Trump. The Georgia Republican posted on X this week that she is "adamantly opposed" to the moratorium and would not vote for the bill if it returns with the measure included.
At the state level, a bipartisan letter signed by the attorneys general of 40 states called on Congress to reject the moratorium and instead craft a broader regulatory system. "This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI," they wrote.