By Stephen DeAngelis
The Trump administration has made it very clear that it wants America to dominate the artificial intelligence sector. The administration fears that a hodgepodge of state laws regulating how AI is developed and used could stymie efforts to achieve AI dominance. With that goal in mind, President Trump signed an Executive Order on 11 December noting, “United States AI companies must be free to innovate without cumbersome regulation. But excessive State regulation thwarts this imperative. … State-by-State regulation by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups. … [And] state laws sometimes impermissibly regulate beyond State borders, impinging on interstate commerce.”[1] The administration’s concerns are not unfounded: most executives at large technology companies agree with its position and would prefer a single set of rules to which they must conform.
What’s the Problem?
Although the administration’s position makes sense, state lawmakers are frustrated with the lack of action in Washington, DC. Almost every state in the Union has passed AI legislation.[2] The only jurisdictions yet to do so are Alaska, Ohio, and the District of Columbia. Just days before Trump signed his Executive Order, Florida’s “Governor Ron DeSantis announced a proposal to protect consumers by establishing an Artificial Intelligence Bill of Rights for citizens. Additionally, Governor DeSantis announced a proposal to protect Floridians from footing the bill for Hyperscale AI Data Centers and to empower local governments to reject their development in their communities.”[3] State lawmakers, worried that advanced AI systems without guardrails will harm their constituents, are acting at a time when Congress appears stalemated. Nevertheless, the Executive Order acknowledges that the responsibility to regulate AI rests with Congress, plainly stating, “[The] Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones.” It then adds, “Until such a national standard exists, however, it is imperative that [the] Administration takes action to check the most onerous and excessive laws emerging from the States that threaten to stymie innovation.”
This action bumps up against another barrier: states’ rights. That’s one reason Congress, despite intense pressure from the White House, rejected adding language that would have preempted states from passing AI laws in the annual defense policy bill.[4] To give the Executive Order teeth, Trump ordered the Justice Department to challenge state AI laws, “including on grounds that such laws unconstitutionally regulate interstate commerce.” He also ordered the Commerce Department to withhold remaining Broadband Equity, Access, and Deployment (BEAD) funding, which expands internet access, from any state that does not comply with the Order. The administration expects pushback from states led by Democrats. However, journalists Mackenzie Weinger and Maria Curi report that Republican-led states may join the fight. They write, “Republican governors view the White House’s approach as too broad and a giveaway to AI companies at the expense of states’ rights. What’s next: Expect legal challenges from states — and Republican infighting.”[5] Utah Governor Spencer Cox, a Republican, suggested on X, “An alternative AI executive order focused on human flourishing would strike the balance we need: safeguard our kids, preserve our values, and strengthen American competitiveness. States must help protect children and families while America accelerates its leadership in AI.”
Regulating Artificial Intelligence
Most tech executives agree that some AI regulation is required. Like most things, however, the devil is in the details. Regulating emerging technology is always difficult because technology moves fast while the wheels of government and the legal system turn slowly. That means technology will always race ahead of regulation. Nevertheless, governments have an obligation to act and, despite the slow pace of that action, they have been neither silent nor inactive on this subject.
Sacha Alanoca, a doctoral researcher at Stanford University and former John F. Kennedy fellow at Harvard University, and Maroussia Lévesque, a doctoral researcher at Harvard Law School and an affiliate at the Harvard Berkman Klein Center for Internet and Society, suggest there have been two previous waves of AI regulation and that we are now experiencing a third wave. They explain, “The first wave of application-focused rules, in jurisdictions such as the EU, prioritized concerns such as discrimination, surveillance, environmental damage. The second wave of rules, by American and Chinese rivals, takes a national security mindset, focusing on maintaining military advantage and making sure malicious actors don’t use AI to gain nuclear weapons or spread fake news. A third wave of AI regulation is emerging as countries address societal and security concerns in tandem. Our research shows this hybrid approach works better, as it breaks down silos and avoids duplication.”[6]
From the perspective of AI tech companies, global regulations suffer from the same complexity problem as state-enacted regulations. A patchwork of regulations across the globe makes developing and using artificial intelligence very challenging. The problem, however, is that everyone from European Union lawmakers to American lawmakers to Chinese lawmakers wants to establish the global regulatory standard. As journalist Elizabeth Gibney points out, “Despite risks ranging from exacerbating inequality to causing existential catastrophe, the world has yet to agree on regulations to govern artificial intelligence. Although a patchwork of national and regional regulations exists, for many countries binding rules are still being fleshed out.”[7] Only when national rules have been established will there be any chance of reconciling them on a global level.
What should AI regulations seek to do? That’s a big question with myriad answers. Ryan McBain, an assistant professor at Harvard Medical School and a senior policy researcher at RAND, insists, “Regulatory priorities should reflect the level of nuance of this new technology.”[8] This is especially true when it comes to protecting individual users, particularly young people and their use of chatbots. McBain states, “People will not stop asking chatbots sensitive questions. Policy should make those interactions safer and more useful.” How? He offers five suggestions:
• “First, require standardized, clinician-anchored benchmarks for suicide-related prompts — with public reporting. Benchmarks should include multi-turn (back-and-forth) dialogues that supply enough context to test the sorts of nuances described above, in which chatbots can be coaxed across a red line.”
• “Second, strengthen crisis routing: with up-to-date 988 information, geolocated resources, and ‘support-plus-safety’ templates that validate individuals’ emotions, encourage help-seeking, and avoid detailed means of harm information.”
• “Third, enforce privacy. Prohibit advertising and profiling around mental-health interactions, minimize data retention, and require a ‘transient memory’ mode for sensitive queries.”
• “Fourth, tie claims to evidence. If a model is marketed for mental health support, it should meet a duty-of-care standard — through pre-deployment evaluation, post-deployment monitoring, independent audits, and alignment with risk-management frameworks.”
• “Fifth, the administration should fund independent research through NIH and similar channels so safety tests keep pace with model updates.”
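To make the second of these suggestions concrete, here is a minimal sketch, in Python, of what crisis routing might look like at the application layer. Everything in it (the keyword patterns, the template wording, and the function names) is an illustrative assumption, not any vendor’s actual implementation; a production system would rely on clinician-validated classifiers evaluated over full multi-turn dialogues, not a keyword list.

import re

# Hypothetical patterns that may signal a crisis. A real system would use
# a clinician-validated classifier over the full multi-turn dialogue,
# not a keyword list.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicide\b",
    r"\bend(ing)? my life\b",
    r"\bself[- ]harm\b",
]

# A "support-plus-safety" style template: validate the person's emotions,
# encourage help-seeking, and include current 988 information, while
# avoiding any detailed means-of-harm content.
SUPPORT_PLUS_SAFETY = (
    "It sounds like you are going through something very painful, and your "
    "feelings matter. You do not have to face this alone. In the U.S. you "
    "can call or text 988, the Suicide & Crisis Lifeline, at any time to "
    "reach a trained counselor."
)

def is_crisis_prompt(message: str) -> bool:
    """Return True if the message matches any crisis pattern."""
    return any(re.search(p, message, re.IGNORECASE) for p in CRISIS_PATTERNS)

def route_message(message: str) -> str:
    """Send crisis prompts to the safety template instead of the model."""
    if is_crisis_prompt(message):
        return SUPPORT_PLUS_SAFETY
    return "<normal chatbot response>"  # placeholder for the model call

if __name__ == "__main__":
    print(route_message("I have been thinking about ending my life"))

Even this toy version illustrates why McBain insists that benchmarks cover multi-turn dialogues: a single-message keyword filter is exactly the sort of guardrail that can be coaxed across a red line over several conversational turns.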
The article in which McBain makes his suggestions also contains other experts’ opinions about how to regulate AI.
Concluding Thoughts
Until Congress acts to regulate artificial intelligence, both protecting the rights of citizens and promoting the development of AI, state lawmakers will likely continue to press the issue. Although the President is well-intentioned with his Executive Order, he would be better served by pressuring Congress than by wasting resources trying to bring states into line. The fact that nearly every state has passed legislation indicates there is broad bipartisan support for AI regulation. Congress should make this a priority in the year ahead. Then the world needs to get serious about reconciling national rules at the global level. When creating regulations, Alanoca and Lévesque assert that lawmakers around the world should be more transparent in their efforts. They write, “The public deserves more transparency about how — and why — governments regulate AI. … Recognizing the full spectrum of regulation, from export controls to trade policy, is the first step toward effective global cooperation. Without that clarity, the conversation on global AI governance will remain hollow.” Without meaningful AI regulations, the public will grow ever more skeptical about AI’s benefits — and that would not be a good thing. The administration is on the right track making AI a national priority.
Footnotes
[1] President Donald J. Trump, “Ensuring a National Policy Framework for Artificial Intelligence,” 11 December 2025.
[2] AI Law Center, “U.S. AI Law Tracker,” Orrick, Herrington & Sutcliffe LLP.
[3] Staff, “Governor Ron DeSantis Announces Proposal for Citizen Bill of Rights for Artificial Intelligence,” Executive Office of the Governor, 4 December 2025.
[4] Maria Curi and Ashley Gold, “What Trump could do next on state AI laws,” Axios, 5 December 2025.
[5] Mackenzie Weinger and Maria Curi, “Trump signs executive order targeting state AI laws,” Axios, 11 December 2025.
[6] Sacha Alanoca and Maroussia Lévesque, “Don’t be fooled. The US is regulating AI – just not the way you think,” The Guardian, 23 October 2025.
[7] Elizabeth Gibney, “China wants to lead the world on AI regulation — will the plan work?” Nature, 1 December 2025.
[8] Sy Boles, “How to regulate AI,” The Harvard Gazette, 8 September 2025.