Commissioned, Curated and Published by Russ. Researched and written with AI.


What’s New This Week

The White House released its national AI legislative framework on March 20, 2026. This post covers that document and what it means for engineers.


Changelog

Date | Summary
23 Mar 2026 | Initial publication.

The White House published a national AI legislative framework on March 20, 2026. It is four pages long. It is addressed to Congress. It tells Congress what to build, not what you must do.

That distinction matters. This document has zero enforcement teeth today. What it does is set the direction of travel for US AI legislation, and it signals that some things engineers have been worried about are not coming, while some things that have been ignored may arrive faster than expected.

Here is what is actually in it, section by section.

What the Framework Actually Says

The document organises its recommendations across seven areas. No new regulator. No specific compliance requirements for AI developers today. What it asks Congress to do:

Child safety and digital access. Platforms likely to be accessed by minors should implement safeguards against sexual exploitation and self-harm. Age assurance requirements are explicitly mentioned. Parents should get tools to manage children’s privacy, screen time, and content exposure. Existing child privacy laws (think COPPA) should be clarified to apply to AI systems.

Communities and infrastructure. Ratepayers should not bear the cost of AI data center electricity consumption. Federal permitting for data center infrastructure should be streamlined so that facilities can generate power on-site. The federal government’s ability to combat AI-enabled scams should be strengthened, and national security agencies should have enhanced capability to assess frontier AI risks.

Intellectual property and creators. The administration’s stated position is that training AI models on copyrighted content does not violate copyright law. However – and this is the interesting bit – the document explicitly says Congress should not take any action that would interfere with ongoing judicial resolution of the fair use question. It supports exploring voluntary licensing and collective rights frameworks. It calls for explicit protections against unauthorised digital replicas of individuals’ voice, likeness, or other attributes.

Censorship and free speech. Congress should prohibit government coercion of platforms to moderate content based on political or ideological viewpoints. Mechanisms should be created for individuals to seek redress if federal actions censor lawful expression on AI systems.

Competitiveness. Regulatory sandboxes for AI development. Expanded access to federal datasets for industry and academia. Sector-specific regulators (financial, health, transport) handling AI in their own domains rather than a new central AI regulator. “Light-touch” regulation throughout.

Workforce and education. AI integration into education and workforce training programs. More research on AI-driven labour market impacts. Support for universities on skills development.

State law preemption. This is the most operationally significant item. The framework calls for federal law to preempt state AI laws that are “burdensome.” States retain authority over general law enforcement, zoning, and their own use of AI. States can keep laws protecting children, including CSAM legislation. But a California AI liability law, a New York algorithmic accountability rule, or a Texas AI transparency requirement? Those are the targets.

The legal analysis from Sullivan & Cromwell describes the framework as “a federally unified, innovation-oriented regime centered on preemption of state AI laws and a light-touch regulatory approach.” That is an accurate summary.

What It Means for Engineers Building AI Products

The honest translation: the near-term compliance burden from this framework is minimal, because there is no legislation yet.

But the direction is clear enough to act on.

State law preemption is the biggest near-term signal. If you are currently tracking state-level AI bills – and there are hundreds of them across the US – this framework is an attempt to kill that approach before it matures. If Congress acts, the patchwork of state laws that compliance teams have been dreading may not materialise. If Congress doesn’t act, those state laws keep developing independently. Plan for both paths.

The “no central AI regulator” position is explicit. The framework specifically says rely on sector-specific regulators and industry-led standards. If you are building healthcare AI, you are dealing with HHS/FDA. Financial AI? SEC/CFTC/CFPB. Defence? DoD. The regulatory jurisdiction for your product is not going to be a new AI agency – it will be whoever already regulates your sector. That is not new, but this framework cements it as the intended federal model.

Digital replica protections are coming. The framework is specific about this: protections against unauthorised use of individuals’ voice, likeness, or other attributes. If your product generates synthetic media of real people – even in B2B contexts like training data – there is legislation coming that targets this directly. In the live events and entertainment industry, artist likeness concerns make this particularly relevant.

Copyright on training data: the administration’s bet is “defer to courts.” The framework explicitly avoids legislative action on training data fair use, wanting courts to resolve it. This means no legislative clarity on the biggest IP question in AI development, potentially for years. If your model training depends on public web data, the legal exposure remains unresolved and you are operating on the assumption that fair use applies. That is probably fine near-term, but it is not settled law.

Child safety requirements are the most likely near-term legislation. Age assurance and content safeguard requirements for platforms accessible to minors are politically bipartisan. If Congress passes anything from this framework quickly, it is probably this section. If your product could be accessed by under-18s, you need to be thinking about age verification infrastructure now, not when the law passes.
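None of this prescribes a specific stack yet, but routing user flows through an age-assurance check is cheap to add early and painful to retrofit. A minimal sketch in Python, with self-declared age standing in for whatever assurance method eventually gets mandated (all names here are illustrative, not from the framework):

```python
from dataclasses import dataclass


@dataclass
class AgeAssuranceResult:
    verified: bool
    is_minor: bool


def check_age(claimed_birth_year: int, current_year: int = 2026) -> AgeAssuranceResult:
    # Self-declared age only; a real deployment would back this with a
    # third-party assurance provider or document check once rules land.
    age = current_year - claimed_birth_year
    return AgeAssuranceResult(verified=True, is_minor=age < 18)


def safeguards_for(result: AgeAssuranceResult) -> set[str]:
    # Map the assurance outcome to the safeguard categories the framework
    # flags: self-harm and sexual-exploitation protections, parental tools.
    if result.is_minor or not result.verified:
        return {"self_harm_filter", "sexual_content_block", "parental_controls"}
    return set()
```

The point of the indirection is that when actual age-assurance requirements pass, only `check_age` changes; every call site already consumes a structured result.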

What the Framework Doesn’t Cover

The list of absences is more informative than the list of proposals.

No liability framework for AI systems. Nothing about who is legally responsible when an AI system causes harm. No product liability extension to AI models. No safe harbour provisions. The EU AI Act’s risk-based liability approach – where certain high-risk applications carry strict obligations – does not have a US equivalent here and the framework does not ask Congress to create one.

No model auditing or evaluation requirements. The framework does not propose any requirement for AI developers to audit, red-team, or evaluate their models before deployment. No mandatory capability disclosures. No incident reporting. The voluntary commitments that frontier labs signed up to in 2023-2024 remain voluntary.

No deployment restrictions for high-risk applications. Healthcare decisions, hiring, criminal justice, lending – the EU AI Act designates these as high-risk categories with specific requirements. The US framework is silent on categorical restrictions for any application type.

No data governance for training sets. No transparency requirements about what data models are trained on. No opt-out rights for individuals whose data may appear in training sets. No disclosure obligations.

No compute thresholds or frontier model requirements. The EU’s approach includes obligations triggered by model capability thresholds. Nothing analogous here.

In short: what is proposed is infrastructure permitting, child safety, speech protections, workforce investment, and state law preemption. What is absent is anything that would require AI developers to change how they build, evaluate, or deploy models.

That is a deliberate choice, not an oversight.

Timeline and What to Watch

The framework tells Congress to act “this year.” That is more signal than the previous administration’s approach, but Congress has an extraordinary number of competing priorities in 2026 and AI legislation has been “coming soon” since 2023.

Realistic scenarios:

Most likely in 2026: Child safety legislation passes, possibly attached to a larger tech bill. Digital replica protections get some movement. Federal data center permitting reform has bipartisan appeal given energy infrastructure politics.

Possible but uncertain: State preemption passes. This requires Congress to actively strip authority from states, which generates significant political resistance even from legislators who agree with the policy goal.

Unlikely in 2026, possible by 2028: Any of the gaps identified above – liability, auditing, data governance, deployment restrictions. These require considerably more legislative heavy lifting and political will than this framework signals is on the way.

Wild card: If a significant AI-caused incident occurs – a deployed system causes substantial harm, a deepfake triggers a serious fraud – Congress can move fast. The absence of liability or audit requirements in this framework does not mean those cannot appear in emergency legislation.

The Senate Commerce Committee is the place to watch. That is where AI-related tech legislation tends to originate. Any bill that includes federal preemption of state laws will face resistance in committee regardless of the executive push.

What Engineering Teams Should Do Now

Do now, regardless of legislation:

  • Audit your product’s exposure to minors. If there is any realistic path to under-18 users, start mapping the age assurance and content moderation requirements that are likely coming.
  • Know your sector regulator. If you are building in healthcare, finance, or critical infrastructure, your AI compliance landscape is your sector regulator’s existing frameworks, now being applied to AI. Get current on those, not on AI-specific rules.
  • Document your training data provenance. Not because the law requires it today, but because it will be harder to reconstruct later and the copyright question will eventually resolve in a direction that requires it.
  • Map your state law exposure. The framework may or may not kill state AI laws. California’s CPPA has been active. Colorado passed AI rules. Do not wait for federal preemption to materialise before tracking what applies to you.
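The provenance point in particular costs little to start and a lot to reconstruct. One way to sketch it, using only the standard library (the manifest fields are illustrative, not mandated by anything):

```python
import hashlib
import json
from datetime import datetime, timezone


def record_provenance(manifest_path: str, source_url: str,
                      license_note: str, content: bytes) -> dict:
    """Append one training-data source to a JSON-lines manifest.

    Fields are illustrative: the point is capturing where data came from
    and under what terms at ingestion time, not reconstructing it later.
    """
    entry = {
        "source_url": source_url,
        "license_note": license_note,
        "sha256": hashlib.sha256(content).hexdigest(),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(manifest_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

An append-only JSONL file is deliberately boring: it survives tooling changes, and if the copyright question resolves in a direction that requires disclosure, the record already exists.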

Wait and watch:

  • Liability architecture changes. Nothing in this framework requires you to redesign who bears responsibility in your AI stack. That conversation will come, but not from this document.
  • Specific technical requirements for model evaluation or auditing. Voluntary is still voluntary.

The framework is a direction-setter, not a rulebook. The US is choosing innovation speed over precautionary regulation as a deliberate national strategy. For engineering teams, that means fewer external constraints in the near term and less certainty about what constraints will eventually arrive.

Build things that can add audit trails, logging, and compliance controls later. You will probably need them. You just do not know exactly what form they will take yet.
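A minimal example of what “can add later” looks like in practice: route inference through a thin wrapper with a pluggable audit hook, so logging and compliance controls can be bolted on without touching call sites. All names here are hypothetical:

```python
from typing import Callable, Optional

# An audit hook receives (prompt, response) and returns nothing.
AuditHook = Callable[[str, str], None]


class ModelGateway:
    """Thin wrapper: all inference goes through here, so an audit trail
    can be attached later without changing any call sites."""

    def __init__(self, model_fn: Callable[[str], str],
                 audit_hook: Optional[AuditHook] = None):
        self._model_fn = model_fn
        self._audit_hook = audit_hook

    def generate(self, prompt: str) -> str:
        response = self._model_fn(prompt)
        if self._audit_hook is not None:
            self._audit_hook(prompt, response)
        return response


# Today: no hook, zero overhead. Later: pass a hook that writes to
# whatever audit store the eventual rules require.
gateway = ModelGateway(lambda p: p.upper())
```

The gateway costs one level of indirection now and buys the ability to satisfy audit or incident-reporting requirements later without a rewrite.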