The White House Just Told Congress How to Regulate AI. Here’s What It Actually Says.
On March 20, 2026, the White House Office of Science and Technology Policy (OSTP) released a 27-recommendation policy document called the National Policy Framework for Artificial Intelligence. OSTP Director Michael Kratsios and then-AI/Crypto Czar David Sacks co-authored it. The document is not a law, not an executive order, and not a regulation. It is a set of legislative recommendations addressed to Congress — a blueprint the administration wants lawmakers to enact before the end of 2026.
The framework spans seven policy areas: child safety, communities, creators, censorship, competitiveness, workforce, and federal preemption of state AI laws. Its central argument is that the United States cannot win the global AI race if 50 states each impose 50 different regulatory regimes on AI developers and deployers. The administration wants Congress to enact a single national standard — and to preempt state AI laws that create what Sacks called “a patchwork difficult for innovators.”
The release coincided with a significant personnel transition: Sacks left his formal role as AI/Crypto Czar after 130 days and moved to co-chair the President’s Council of Advisors on Science and Technology (PCAST) alongside Kratsios — a council whose inaugural membership includes Jensen Huang (Nvidia), Mark Zuckerberg (Meta), Larry Ellison (Oracle), and Sergey Brin (Google). The framework is the administration’s first comprehensive statement of how it believes AI should be governed in America.
- 27 legislative recommendations across 7 policy areas — child safety, communities, creators, censorship, competitiveness, workforce, and preemption
- EO 14365, the parent executive order, signed December 2025 — directed OSTP to prepare this federal AI regulatory framework and evaluate existing state AI laws
- 13 inaugural PCAST members — Jensen Huang, Mark Zuckerberg, Larry Ellison, Sergey Brin, and 9 others on the advisory panel co-chaired by Sacks and Kratsios
- 50-state patchwork targeted by the framework — Colorado, California, Texas, Utah, and dozens more have enacted AI-specific statutes; the framework would preempt those that burden AI development
The framework carries the title National Policy Framework for Artificial Intelligence: Legislative Recommendations. The word “legislative” is load-bearing. This is not a regulation. No agency can enforce it. No court can apply it. It tells Congress what the executive branch wants enacted — and it signals to industry what the administration will support and what it will fight.
The document was required under Executive Order 14365, signed in December 2025, which directed OSTP and the Office of Management and Budget to produce a national AI framework proposal and evaluate existing state AI laws for federal preemption. EO 14365 itself stopped short of preempting any state law — it directed the preparation of recommendations for Congress to do so. This document is that product.
Kratsios framed the goal directly: the administration wants Congress to pass legislation before the end of 2026. House Speaker Mike Johnson (R-LA), Rep. Steve Scalise (R-LA), Rep. Jim Jordan (R-OH), and Sen. Ted Cruz (R-TX) issued statements pledging to work with the White House on the framework. Senate Commerce Committee ranking member Sen. Maria Cantwell (D-WA) signaled openness. On the same day the framework was released, Rep. Donald Beyer (D-VA) and four Democratic colleagues introduced the GUARDRAILS Act to block federal preemption of state AI laws entirely.
“The White House's national AI legislative framework will unleash American ingenuity to win the global AI race, delivering breakthroughs that create jobs, lower costs, and improve lives for Americans across the country.”
Michael Kratsios — OSTP Director · March 20, 2026 · on releasing the framework
The framework’s 27 recommendations are organized under seven headings. Below is the complete structure, with each recommendation drawn from the primary document and corroborated across multiple law-firm analyses of the official PDF.
Child safety
- Build on Take It Down Act protections for non-consensual intimate imagery.
- Clarify COPPA application to AI systems collecting children's data.
- Require AI platforms to implement safety features protecting minors from exploitation and self-harm.
- Establish age-assurance mechanisms via platform attestation.
- Empower parents with account controls and parental-consent tools.
- Strengthen enforcement against AI-enabled fraud, particularly targeting seniors.
Communities
- Protect electricity ratepayers from data center cost pass-throughs.
- Streamline federal permitting for AI data center infrastructure.
- Support small business AI implementation through grants and tax incentives.
- Ensure national security agencies have sufficient technical capacity to evaluate frontier AI risks.
Creators
- Allow courts — not legislation — to resolve whether AI training on copyrighted material violates copyright law; avoid prejudging the outcome.
- Create voluntary collective-licensing frameworks enabling creators to negotiate compensation from AI providers without antitrust liability.
- Protect individuals from unauthorized digital replicas of their likeness, voice, or image.
Censorship
- Prohibit government coercion of AI platforms over content moderation decisions.
- Establish redress mechanisms for individuals subject to government-directed AI censorship actions.
Competitiveness
- Establish regulatory sandboxes allowing companies to test AI products before compliance obligations attach.
- Make federal datasets AI-accessible to industry and academia.
- Avoid creating a new federal AI regulatory body — rely on existing sector-specific regulators (FDA, DOT, FTC, etc.).
- Promote U.S. AI exports through full-stack export packages backed by Commerce Department financing.
- Rely on industry standards and self-regulatory frameworks where possible.
Workforce
- Integrate AI training into existing federal education and workforce programs.
- Expand land-grant university capacity in AI research and development.
- Study AI-driven task-level workforce shifts — identify which specific job tasks are at risk.
Federal preemption
- Establish a unified national AI policy that supersedes conflicting state AI laws.
- Preempt state laws that impose undue burdens on AI development, including bans on particular AI use-case verticals.
- Preserve core state authority over generally applicable consumer protection, child safety, procurement, and zoning laws — preemption is targeted, not total.
The preemption section is the most consequential and contested part of the framework. As of March 2026, more than a dozen states have enacted AI-specific statutes: Colorado has algorithmic accountability requirements, California has passed multiple AI bills covering hiring, healthcare, and deepfakes, Texas has its own framework, and Utah has enacted AI disclosure rules. The framework treats this legal landscape as a problem to be solved, not a feature to be preserved.
The preemption proposal is not a blanket override of all state authority. The framework explicitly carves out state enforcement of generally applicable consumer protection laws, child safety statutes, government procurement rules, and zoning authority. What it targets are AI-specific state laws that go beyond those generally applicable frameworks — laws that, in the administration’s view, “ban particular verticals” or “impose undue burdens on AI development.”
“We believe that we can't have preemption without other stuff attached to it, and it has to be a give and take with both.”
Michael Kratsios — OSTP Director · Nextgov/FCW interview · March 2026
Administration case for preemption: A patchwork of 50 state AI laws creates compliance costs that fall disproportionately on startups, not large incumbents. The U.S. cannot develop and export AI at the pace required to outrun China if developers must staff 50-state compliance operations before shipping. A single national standard is how the U.S. built a unified national market for the internet.
State-authority case against broad preemption: Federal AI legislation does not yet exist, which means federal preemption of state laws creates a regulatory vacuum, not uniformity. Democratic opponents (GUARDRAILS Act sponsors) argue that preempting existing state consumer and worker protections before a federal replacement is in place leaves citizens with no protection at all. Sen. Marsha Blackburn’s competing draft includes stricter child protections and developer accountability that go beyond the framework’s recommendations.
What both sides agree on: The status quo of fragmented state-by-state regulation is not stable. Congress will act in some form; the fight is over whether the floor is set at the administration’s minimums or higher.
The framework was released the same week David Sacks formally concluded his 130-day tenure as the White House AI and Crypto Czar. Sacks did not leave; he changed roles. President Trump appointed him co-chair of the President’s Council of Advisors on Science and Technology (PCAST), alongside Kratsios. The significance of the transition is structural: as AI Czar, Sacks held a formal White House staff role with direct operational authority. PCAST is an external advisory body — it studies issues, makes recommendations, and advises the president, but does not issue orders or direct agencies.
“I am honored and grateful to be appointed by President Trump to the President's Council of Advisors on Science and Technology (PCAST) and to be named Co-Chair along with OSTP Director Michael Kratsios. PCAST is the principal body of external advisors tasked with shaping science, technology, and innovation policy for the President and the White House. Thirteen of the world's most accomplished leaders in science and technology will join us as this PCAST's initial members. Together we will make policy recommendations to ensure that America leads — and wins — in artificial intelligence and other cutting-edge technologies.”
David Sacks — announcing his PCAST appointment on X · March 25, 2026
Sacks described the expanded scope himself: as PCAST co-chair, he “can now make recommendations on not just AI but an expanded range of technology topics.” The PCAST mandate covers AI, semiconductors, quantum computing, and nuclear power. The council’s near-term priority, confirmed by TechCrunch, is advancing the just-released national AI framework through Congress. The roster of 13 inaugural PCAST members — including Jensen Huang, Zuckerberg, Ellison, Brin, and AMD CEO Lisa Su — gives the advisory body unusual gravitational pull for what is technically a non-governmental advisory role.
The framework’s creators section contains the most politically unusual position in the document: the administration explicitly states it believes AI training on copyrighted material “does not violate copyright laws,” but simultaneously recommends that Congress not legislate that conclusion — instead allowing courts to settle the question through litigation. The reasoning is that premature legislation could “impact the fair use judicial resolution,” which the administration appears to want resolved in industry’s favor by precedent rather than statute.
The framework’s compensatory mechanism for creators is the voluntary collective licensing framework — a proposal allowing creators to pool bargaining power and negotiate compensation from AI providers without triggering antitrust liability. Whether voluntary frameworks will attract the major AI labs, which benefit from the status quo, is an open question the framework does not answer. On digital replicas, the framework backs statutory protection against unauthorized use of a person’s voice, likeness, or image — an area where several states have already acted and the federal gap is widely acknowledged.
The framework does not name the EU AI Act explicitly, but the contrast is constant and deliberate. The EU AI Act, which entered into force in August 2024 and is scheduled to apply in full beginning August 2026, classifies AI applications into risk tiers (unacceptable, high, limited, minimal) and imposes compliance requirements by risk level — using what Kratsios described at CSIS as “antiquated high-risk/low-risk categories.” The EU approach creates a horizontal AI regulator. The U.S. framework explicitly recommends not creating a new federal AI regulatory body — relying instead on existing sector-specific regulators (the FDA for medical AI, the DOT for autonomous vehicles, the FTC for consumer protection).
Regulatory model: EU AI Act = horizontal risk classification + dedicated AI enforcement authority. U.S. framework = sector-specific regulators; no new AI agency.
Binding vs. advisory: The EU AI Act is enacted law, with penalties of up to 7% of global annual turnover for the most serious violations. The U.S. document is a set of non-binding legislative recommendations; no obligation attaches until Congress acts.
State preemption vs. member-state floor: The EU act sets minimum standards member states can exceed. The U.S. framework asks Congress to preempt state laws that exceed (or conflict with) the national minimum — structurally the opposite direction.
Innovation posture: EU AI Act prohibits specific high-risk applications outright (e.g., real-time remote biometric identification in public spaces). U.S. framework recommends regulatory sandboxes that permit testing applications before compliance obligations attach.
Kratsios on the EU approach: “The EU stuff is particularly disappointing.” He cited the EU’s use of “coercive tactics” and “antiquated” risk categories as contrasts to the U.S. preference for use-case-specific, industry-standard approaches.
Republican leaders in both chambers issued statements of support within hours of the framework’s release. House Speaker Johnson and Rep. Babin (House Science Committee) pledged to work with OSTP on legislation. Sen. Cruz, who chairs the Senate Commerce Committee, aligned with the preemption posture. The reception on the Republican side was not universal: Sen. Marsha Blackburn (R-TN) has circulated a competing draft with stricter child protections and developer accountability standards that exceed what the White House framework recommends.
Democratic opposition coalesced around preserving state authority. On March 20, Rep. Beyer (D-VA), Rep. Doris Matsui (D-CA), Rep. Ted Lieu (D-CA), Rep. Sara Jacobs (D-CA), and Rep. April McClain Delaney (D-MD) introduced the Guaranteeing and Upholding Americans’ Right to Decide Responsible AI Laws and Standards (GUARDRAILS) Act. The GUARDRAILS Act would repeal EO 14365 and effectively block any federal moratorium on state AI regulation. Passage of the GUARDRAILS Act in a Republican-controlled Congress is not expected.
December 2025: EO 14365 signed — directs OSTP to develop national AI framework and evaluate state laws.
March 20, 2026: Framework published. Republican leaders pledge support. GUARDRAILS Act introduced by five Democrats on the same day.
March 25, 2026: Sacks formally transitions to PCAST co-chair. PCAST roster of 13 tech leaders announced (Huang, Zuckerberg, Ellison, Brin, Su, others).
Target: Kratsios publicly stated the goal is enactment “this year” — before the end of 2026.
Pending: Congressional committee markups not yet scheduled as of the framework release date; specific legislative vehicles not yet announced.
The Lawfare analysis of the framework catalogues its coverage gaps, and they are significant. The 27 recommendations address seven of the most politically salient areas — but they omit whole domains that courts, regulators, and technologists treat as central to AI governance:
Federal AI procurement standards. The U.S. government is one of the world’s largest AI buyers. No recommendations govern how federal agencies evaluate, procure, or deploy AI systems in their own operations.
Content authentication. The framework contains no recommendations on provenance, watermarking, or AI-content labeling — the technical layer that would allow users to identify AI-generated media.
Adult data privacy. The children’s recommendations are detailed; protections for adult users are largely absent.
Autonomous weapons. Defense AI — including lethal autonomous weapons systems — is not addressed, despite being a category of frontier AI risk that NIST and major research institutions treat as high-priority.
Semiconductor export controls. The framework is silent on chip and compute export policy — a domain where the administration has active Commerce Department proceedings.
Algorithmic accountability. No recommendation covers disparate-impact requirements, audit obligations, or explainability standards for high-stakes AI decisions in employment, housing, credit, or healthcare — the domain most actively legislated at the state level.
The gaps are not accidental. The administration’s stated preference is to leave algorithmic accountability and high-stakes AI deployment regulation to existing sector regulators — the FDA for medical AI, the CFPB for credit AI — rather than creating a new federal AI agency or a horizontal compliance requirement. Critics argue this leaves the domains where existing regulators lack AI expertise, which are exactly the domains most exposed to harm, without specific safeguards.
The White House released a 27-recommendation blueprint for how Congress should regulate AI. It is not a law. It is a signal. The signal is clear: the administration wants a single national AI standard that preempts the patchwork of state laws, blocks a new federal AI regulator, protects children online, defers the copyright training question to courts, and keeps the U.S. posture as far from the EU AI Act as possible. David Sacks called the state-by-state status quo “a patchwork difficult for innovators.” Kratsios said the framework will “unleash American ingenuity to win the global AI race.” Democrats introduced the GUARDRAILS Act the same day to block it. Congress has its marching orders. Whether it marches is the next question.
Tier 1: White House OSTP primary document (PDF) and official White House release. Tier 2: CNBC, Axios, US News. Tier 3: law-firm client alerts (Sullivan & Cromwell, Holland & Knight, WilmerHale, Paul Hastings, Reed Smith) and Lawfare analysis, used for structured enumeration of the 27 recommendations — cross-referenced against the primary PDF for accuracy. PCAST sourcing: David Sacks X post (ID 2036837128601063927), TechCrunch, Fox Business, Winbuzzer. Quotes from Kratsios sourced to the CSIS interview transcript and Nextgov/FCW. The 27-recommendation total was confirmed across multiple law-firm client alerts citing the official document. This framework is not legally binding; no claim is made that any recommendation has been enacted into law.
- 01 White House OSTP — National Policy Framework for Artificial Intelligence: Legislative Recommendations (March 20, 2026)
- 02 White House — President Donald J. Trump Unveils National AI Legislative Framework (March 20, 2026)
- 03 David Sacks on X — PCAST appointment, co-chair with Michael Kratsios (March 25, 2026)
- 04 TechCrunch — David Sacks is done as AI czar — here's what he's doing instead (March 26, 2026)
- 05 CNBC — Trump administration unveils national AI policy framework to limit state power (March 20, 2026)
- 06 Nextgov/FCW — White House official advocates for 'give and take' on state AI preemption (March 2026)
- 07 Axios — Exclusive: Kratsios says Trump AI push won't raise power bills (March 25, 2026)
- 08 Axios — David Sacks drops 'AI czar' label, not policy influence (March 26, 2026)
- 09 CSIS — Unpacking the White House AI Action Plan with OSTP Director Michael Kratsios
- 10 Sullivan & Cromwell — Trump Administration Releases National Policy Framework on Artificial Intelligence (March 2026)
- 11 Lawfare — White House AI Framework Proposes Industry-Friendly Legislation (2026)
- 12 Holland & Knight — White House Releases a National Policy Framework for Artificial Intelligence (March 2026)
- 13 WilmerHale — White House Releases National Policy Framework for Artificial Intelligence (March 23, 2026)
- 14 Winbuzzer — Trump Names Zuckerberg, Huang, Brin to PCAST Technology Advisory Council (March 25, 2026)
- 15 Fox Business — Trump names David Sacks co-chair of tech advisory council, expanding AI, crypto role (March 2026)
- 16 CSET Georgetown — Trump's Plan for AI: Recapping the White House's AI Action Plan
- 17 Paul Hastings — President Trump Signs Executive Order Challenging State AI Laws (EO 14365)
- 18 AIP.ORG — Kratsios Calls on Congress to Back Federal AI Strategy
- 19 US News & World Report — Trump Releases AI Policy for Congress to Pre-Empt State Rules (March 20, 2026)
- 20 Reed Smith — Decoding the 2026 White House AI Blueprint: U.S. AI Policy Starts to Take Shape