Main Points
- On March 20, the White House released the National AI Legislative Framework, a nonbinding roadmap of President Trump’s policy priorities for federal AI legislation. The framework was issued pursuant to President Trump’s December EO, which established a coordinated federal effort to block or override burdensome state AI regulations. By outlining policy goals for Congress, the framework seeks to operationalize the Trump administration’s goal of a federally centralized, innovation-focused approach to AI governance.
- This framework attempts to address the policy tension that we have noted throughout the last year of this newsletter: should AI governance be primarily federal, or continue to evolve through state-led initiatives and legislation?
- Senator Marsha Blackburn (R-TN) has taken a leading role in advancing this agenda. On March 18, she released a discussion draft, the TRUMP AMERICA AI Act, that would codify elements of the December EO and align closely with the White House framework. The proposal incorporates provisions addressing online safety, intellectual property, and platform accountability, drawing on existing bipartisan measures such as child safety and digital likeness protections.
- Five days after President Trump released the AI Framework, Sen. Bernie Sanders (I-VT) and Rep. Alexandria Ocasio-Cortez (D-NY-14) introduced the AI Data Center Moratorium Act, which would impose a nationwide pause on the construction and expansion of AI data centers until Congress enacts comprehensive federal safeguards.
- Democratic lawmakers have continued to raise concerns that the current framework from the Trump administration prioritizes preemption without establishing sufficiently robust federal safeguards. Proposals such as H.R. 8031, introduced in the House on March 20 to repeal the Trump administration’s preemption efforts on AI, underscore the extent of this opposition. Taken together, these dynamics suggest that, despite increasing legislative activity, there remains no clear consensus on either the scope or structure of federal AI regulation.
- Despite increased legislative activity, the path forward for federal AI regulation remains unclear due to competing proposals, jurisdictional complexity, and ongoing divisions, particularly over preemption, across and within both parties.
On March 20, the White House released the National AI Legislative Framework, a nonbinding roadmap of President Trump’s policy priorities for federal AI legislation. The framework was issued pursuant to the December Executive Order (EO), titled Ensuring a National Policy Framework for Artificial Intelligence, which we have written about previously. The EO establishes a coordinated federal effort to block or override burdensome state AI regulations; it is designed to preempt much of the states’ authority over AI governance and to constrain recent state-level efforts to regulate AI. The framework seeks to operationalize the Trump administration’s goal of a federally centralized and innovation-focused approach to AI governance.
The framework organizes its legislative agenda into seven overarching policy priorities:
- child protection and parental empowerment;
- development of secure and accessible AI infrastructure;
- preservation of intellectual property and creator interests;
- safeguards for free speech and against censorship;
- removal of barriers to innovation;
- investment in education and workforce preparedness; and
- establishment of a national policy framework, including federal preemption of certain state laws.
The framework calls for Congress to act in alignment with the Trump administration’s stated goals. Its seven policy priorities implicate multiple committees of jurisdiction, including the House Energy and Commerce Committee, the Senate Commerce Committee, and the Judiciary and Homeland Security Committees of both chambers, as well as federal agencies named in previous EOs on AI and in the White House AI Action Plan, including the Office of Science and Technology Policy, the National Institute of Standards and Technology, the Federal Trade Commission, the Department of Energy, and the Department of Justice, among others.
The framework’s seven policy pillars collectively reflect an attempt to balance innovation policy with targeted risk mitigation while reinforcing a federal-first approach:
- Child safety provisions call for congressional proposals that align with ongoing bipartisan efforts to expand protections for minors online, including legislation addressing platform design, privacy, and harmful content. The framework retains the carve-out preserving states’ ability to enforce child protection laws.
- Infrastructure and energy provisions intersect with growing congressional scrutiny of AI data centers, including their impact on electricity demand, permitting, and grid reliability. President Trump cites his Ratepayer Protection Pledge, which we have previously written about, and directs Congress to act consistently with that approach.
- Intellectual property issues remain unsettled, with the framework largely deferring to courts while encouraging exploration of licensing and compensation mechanisms. It also directs Congress to refrain from taking “any actions that would impact the judiciary’s resolution of whether training on copyrighted material constitutes fair use.”
- Free speech provisions reflect the Trump administration’s continued focus on government influence over platform moderation. The framework directs Congress to prevent federal agencies from “coercing technology providers, including AI providers” to moderate or alter content based on partisan or ideological considerations.
- Innovation policy emphasizes regulatory flexibility, including mechanisms such as sandboxes and expanded access to federal datasets.
- Workforce development builds on bipartisan interest in reskilling and AI education, particularly for small businesses and regional institutions.
- Preemption serves as the connective thread for the six policy goals above, positioning federal law as the primary vehicle for governing AI while preserving limited areas of state authority. The areas in which states would be barred from regulating remain mostly general: the framework states that “states should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications,” echoing previous EOs’ language on “unduly burden[some]” state AI laws.
This legislative framework attempts to address the policy tension that we have noted throughout the last year of this newsletter: should AI governance be primarily federal, or continue to evolve through state-led initiatives and legislation?
The Trump administration has consistently called for a coordinated federal approach to regulating AI, viewing state-level AI regulations as burdensome. Earlier efforts to incorporate AI preemption into must-pass legislation, including the FY2026 National Defense Authorization Act and the 10-year moratorium on state AI laws proposed as part of the One Big Beautiful Bill, were unsuccessful after encountering bipartisan resistance from key committee leadership. This framework attempts to reassert federal primacy absent congressional consensus. We have written more extensively on prior federal preemption efforts and the strategic pivot to executive action in our December issue of this newsletter.
In the absence of federal legislation, states have advanced a growing number of AI-related laws, with jurisdictions such as California, New York, and Colorado emerging as early movers in establishing compliance regimes. The resulting patchwork has become a central justification for federal intervention by the Trump administration.
The framework also arrives alongside two legislative developments:
- Marsha Blackburn’s draft legislation to codify the administration’s AI agenda, and
- a newly introduced bill from Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez seeking to halt AI data center expansion and resist federal preemption.
It is unclear whether either of these bills will become the primary vehicle for any congressional AI effort. A substantial fight likely remains over the future structure of US AI regulation: Democrats’ views on AI governance in Washington align more closely with the direction many states are already taking, while many Republicans continue to push for a nationally uniform, less prescriptive framework grounded in federal preemption and reduced regulatory fragmentation.
Senator Blackburn has taken a leading role in advancing this agenda. On March 18, she released a discussion draft, the TRUMP AMERICA AI Act, that would codify elements of the December Executive Order and align closely with the White House framework. The proposal incorporates provisions addressing online safety, intellectual property, and platform accountability, drawing on existing bipartisan measures such as child safety and digital likeness protections.
Like the White House framework, the draft legislation from Senator Blackburn seeks to establish a single federal “rulebook” for AI, grounded in both innovation and user protection; its elements include provisions on online safety, intellectual property, and platform accountability.
At the same time, Democratic lawmakers, particularly those serving on the committees of jurisdiction implicated by the administration’s AI framework, have expressed reservations about the administration’s approach. Members of the House Democratic Commission on AI and the Innovation Economy have raised concerns that the framework emphasizes preemption without pairing it with enforceable national safeguards. Critiques have focused on the absence of robust guardrails for safety, labor impacts, and consumer protection.
Five days after President Trump released the AI framework, Senator Sanders and Representative Ocasio-Cortez introduced the AI Data Center Moratorium Act, which would impose a nationwide pause on the construction and expansion of AI data centers until Congress enacts comprehensive federal safeguards. The proposal conditions further AI development on the establishment of standards addressing system safety, labor market impacts, and the environmental and energy demands associated with large-scale computing infrastructure, while also contemplating restrictions on the export of AI infrastructure to jurisdictions lacking comparable protections. In doing so, the legislation adopts a precautionary, “regulate first” approach that stands in direct tension with the White House’s emphasis on accelerating deployment alongside targeted guardrails.
The bill also reframes AI governance as an issue of physical infrastructure and resource allocation, elevating concerns around electricity consumption, community impact, and cost-shifting to ratepayers, areas that the White House framework seeks to facilitate through permitting reform and infrastructure investment. More broadly, the proposal underscores continued Democratic resistance to federal preemption absent robust national standards, reinforcing a key divide in the current policy landscape: whether AI development should proceed in parallel with regulation or be temporarily constrained until a comprehensive federal regime is in place.
The path forward in Congress remains highly uncertain. In the Senate, questions persist over how competing approaches will be reconciled, particularly given Senate Commerce Chair Ted Cruz’s prior involvement in advancing preemption proposals and reported differences over strategy within the Republican caucus. In the House, parallel efforts, such as those of Rep. Jay Obernolte, who leads the House Task Force on AI, suggest that multiple legislative tracks are developing simultaneously, further complicating the prospects for a unified approach. While Senator Blackburn’s draft legislation represents an attempt to translate the administration’s priorities into statutory language, she has also acknowledged the practical need to focus on measures capable of clearing both chambers. Meanwhile, Democratic lawmakers have continued to raise concerns that the administration’s framework prioritizes preemption without establishing sufficiently robust federal safeguards. Proposals such as H.R. 8031, introduced in the House on March 20 to repeal the administration’s preemption efforts on AI, underscore the extent of this opposition. Taken together, these dynamics suggest that, despite increasing legislative activity, there remains no clear consensus on either the scope or structure of federal AI regulation.
We will continue to monitor, analyze, and issue reports on these developments. Please feel free to contact us if you have questions about current practices or how to proceed.

