A sweeping legislative package backed by President Donald Trump, dubbed the “Big Beautiful Bill,” has passed the House by a razor-thin margin and now heads to the Senate — with one of its most contentious provisions aiming to halt state-level regulation of artificial intelligence (AI). The bill, exceeding 1,000 pages, includes a $500 million investment over the next decade to modernise federal systems with AI and automation. But buried within is a sweeping federal preemption clause that would ban states from enacting or enforcing their own AI regulations — a move that has sparked concern among lawmakers and advocacy groups alike.
AI regulation at a crossroads
The proposed ban would effectively dismantle a growing patchwork of state-led AI laws, blocking both existing and future regulations. At least 45 states, plus Puerto Rico and Washington, D.C., introduced AI bills in 2024, with over 30 enacting oversight or regulatory frameworks. States like Utah, Maryland, and Florida have already passed comprehensive AI oversight acts. Republicans pushing the bill argue a unified federal approach is necessary to avoid conflicting rules and to spur innovation. However, the absence of a current federal AI law has prompted many states to take matters into their own hands, something this bill would reverse.
GOP Senators Marsha Blackburn of Tennessee and Josh Hawley of Missouri voiced strong reservations about stripping states of regulatory power. “We certainly know that in Tennessee, we need those protections,” Blackburn said during a May 21 Senate hearing on AI impersonation. “Until we pass something that is federally preemptive, we cannot call for a moratorium.” Hawley echoed those concerns in a May 13 interview with Business Insider, invoking federalist principles: “Just as a matter of federalism, we would want states to be able to try out different regimes… I do think we need some sensible oversight that will protect people’s liberties.”
Tech industry leaders and the U.S. Chamber of Commerce have long argued that state-led AI laws threaten American innovation. Sean Heather, Senior Vice President at the Chamber, testified during the May 21 House subcommittee hearing that an inconsistent regulatory landscape is dangerous for development. “We should not be in a rush to regulate,” Heather said. “We need to get it right, therefore taking a time out to discuss it at a federal level is important.”
However, civil rights and tech policy groups warn that an AI moratorium could delay necessary protections. Organisations like the Center on Privacy and Technology at Georgetown, the Innocence Project, and the National Union of Healthcare Workers argue that AI is already being misused — from disinformation campaigns and deepfakes to biased algorithms in policing and hiring. “AI could soon be used by state and non-state actors to develop dangerous weapons, increase surveillance, and magnify existing biases,” said the California Initiative for Technology and Democracy (CITED) in a January 2024 report.
The proposal signals a sharp shift from President Joe Biden’s now-defunct “AI Bill of Rights,” which encouraged state-level engagement and ethical guidelines. The Trump-led framework, by contrast, emphasises AI acceleration and minimal oversight, aligning more closely with Silicon Valley’s concerns over regulation stifling growth. With only a slim Republican majority in the Senate and ongoing infighting over other controversial components — including Medicaid cuts and tax reform — the bill’s path to becoming law remains uncertain. GOP leaders hope to finalise the legislation by July to avert a potential debt default, but bipartisan resistance to the AI provision may force revisions.