
President Trump Signs Executive Order Challenging State AI Laws


On December 11, 2025, the White House issued a widely anticipated executive order, “Ensuring a National Policy Framework for Artificial Intelligence” (the Executive Order), intended to weaken state-level regulation of artificial intelligence through a combination of targeted litigation led by the Department of Justice, administrative reinterpretation of existing laws, conditional federal funding and the preemption of existing state laws through a federal policy framework.

This Executive Order follows a clear pattern in the Trump Administration’s AI policy of seeking to limit state-level regulation and consolidate authority at the federal level. For example, the Administration pursued legislative preemption earlier this year through the proposed One Big Beautiful Bill Act, which included a 10-year moratorium on new state AI regulations. Although the moratorium passed in the House, it was rejected by the Senate, largely due to bipartisan concerns about the erosion of traditional state authority over consumer protection and laws protecting artists and entertainers. The Trump Administration’s July 2025 AI Action Plan foreshadowed this action, calling for a national, innovation-focused AI framework and warning that state regimes create regulatory “fragmentation.” Many of the directives in the Executive Order closely align with and operationalize the priorities outlined in the Action Plan. Taken together, these initiatives reflect the Trump Administration’s strategy to limit the scope of state AI rules and promote a uniform AI governance framework set by the federal government.

The Executive Order differs slightly from a draft leaked in November. While the draft explicitly cited California’s SB 53 and characterized state regulations as “fear-based” or ideologically driven, the final text replaces these references with somewhat softer language concerning the economic inefficiencies of a regulatory patchwork. The final text is also narrower: it expressly exempts from federal preemption otherwise lawful state AI laws relating to child safety, AI compute and data center infrastructure (except for generally applicable permitting reforms), state government procurement and use of AI, and other topics to be determined later. These changes temper the Executive Order’s tone, reduce the risk of broad or implied preemption, and expressly preserve traditional areas of state authority, such as child safety.

AI Litigation Task Force: The Executive Order establishes an AI Litigation Task Force within the Department of Justice, which, beginning January 10, 2026, will be responsible for challenging state AI laws in federal court on the grounds that they unconstitutionally burden interstate commerce, are preempted by federal regulations, or are otherwise unlawful in the Attorney General’s judgment. The primary legal theory underpinning these challenges will likely be the Dormant Commerce Clause, which prohibits states from enacting legislation that places an undue burden on interstate commerce. The Trump Administration’s position is that, because frontier AI models are developed and deployed by companies operating on a global scale, a patchwork of differing state regulations creates insurmountable barriers to national deployment, thereby undermining U.S. competitiveness. However, the strength of this argument is difficult to predict because the Dormant Commerce Clause gives courts wide discretion in weighing the local benefits of a state law against its burden on the national economy.

Evaluation of State AI Laws: The Executive Order directs the Secretary of Commerce to publish, by March 11, 2026, a comprehensive review of existing state AI laws, identifying those deemed overly burdensome or in conflict with the federal policy outlined in the Executive Order (the Policy), particularly laws that require AI systems to alter “truthful outputs” or mandate disclosures that may violate the First Amendment. The review must also flag state laws appropriate for referral to the new AI Litigation Task Force and may highlight state laws that support AI innovation in line with federal objectives.

Conditions on Federal Funding: The Executive Order instructs the Department of Commerce to condition $42 billion in previously allocated broadband infrastructure funding appropriated under the Broadband Equity, Access and Deployment (BEAD) program on states repealing AI regulations deemed onerous. More broadly, the Executive Order uses federal funding as leverage by authorizing federal agencies to condition discretionary grants on states refraining from enacting, or agreeing not to enforce, AI laws deemed inconsistent with the Executive Order’s policy or otherwise identified as conflicting or subject to challenge.

Preemption of State Laws Mandating Deceptive Conduct in AI Models: Additionally, the Executive Order directs the Federal Trade Commission (FTC) to issue a policy statement by March 11, 2026, classifying state-mandated bias mitigation as a per se deceptive trade practice. This directive stems from the AI Action Plan, which prioritized preventing the imposition of ideological constraints on AI development. The Trump Administration’s legal theory posits that if an AI model is trained on data reflecting societal patterns, forcing developers to alter the model’s outputs to mitigate bias compels them to produce results that are less faithful to the underlying data. Under this interpretation, such mitigation renders the model less “truthful” and, therefore, deceptive. Policy statements are interpretive rather than binding regulations, however, and courts may reject the premise that correcting for bias constitutes deception. We note that the Executive Order includes no standards on data sourcing or data normalization, thereby placing an even stronger onus on model developers to focus on data curation.

Federal Legislation: The Executive Order directs Special Advisor for AI and Crypto David Sacks and Assistant to the President for Science and Technology Michael Kratsios to draft legislative recommendations for a uniform federal AI framework that would preempt conflicting state laws, while expressly preserving state authority over child-safety protections, data center and compute infrastructure, state government AI procurement and other areas to be designated later. It also instructs the Federal Communications Commission (FCC), within 90 days of the Department of Commerce’s state-law evaluation, to consider establishing a federal reporting and disclosure standard for AI models that would similarly supersede inconsistent state requirements. Historically, the FCC has viewed general AI governance as beyond its jurisdiction, interpreting the Communications Act as covering the physical infrastructure of transmission rather than the software applications using it.

The categories of state AI laws that we believe are most vulnerable to review under the Executive Order are those that impose transparency, reporting, documentation or safety-testing requirements on developers and deployers. These include Colorado’s AI Act (which is directly named in the Executive Order), California’s SB 53 (the Frontier Model Safety and Transparency Act) and California’s AB 2013 (which requires training data disclosures). The Executive Order refers to these state regulations not merely as burdensome, but as “legally deceptive.” The Executive Order also puts at risk state rules that require explanations of algorithms or mandate independent audits, such as California’s CCPA automated decision-making regulations or New York City’s Local Law 144. This creates a direct conflict because many of these state laws, which the Executive Order says are inconsistent with the Policy, are already in effect or coming into effect soon.

What to Expect: The immediate consequence of the Executive Order is legal ambiguity. We anticipate that the validity of targeted state laws will be determined through prolonged litigation that could reach the Supreme Court, where the power of the executive and the strength of the Dormant Commerce Clause will be tested. Regardless of what the courts decide, the Executive Order is part of a larger trend toward tempered regulation of AI. The Trump Administration’s strategy coincides, for example, with similar developments in the European Union: in November, the European Commission proposed delaying the implementation of high-risk obligations under the EU AI Act from 2026 to 2027. By raising the financial and legal costs and uncertainty associated with enacting and defending state AI laws, the Executive Order may also deter state legislatures from pursuing new regulations.

However, we do not believe this Executive Order will eliminate state involvement in AI regulation altogether. Instead, we expect states to diffuse AI regulation by applying existing consumer protection, unfair competition, deceptive practices and civil rights laws to AI-related conduct. The Trump Administration’s case for federal preemption would be considerably stronger if Congress enacted a comprehensive federal AI framework. In the interim, we expect increased enforcement activity from federal agencies such as the FTC, FCC and the Equal Employment Opportunity Commission, particularly against tech companies or AI deployers that the Trump Administration believes are engaging in unlawful bias or the abridgment of free speech.

Although the regulatory landscape is uncertain, companies should continue to comply with applicable state AI laws because the Executive Order itself does not, and cannot, overturn existing state law — that can only be done by an act of Congress or the courts. Until the relevant legal challenges are resolved, state laws remain enforceable, and companies could face potential penalties for noncompliance.

