20 August 2025, Geneva
The White House has moved aggressively to position the United States at the forefront of artificial intelligence development and deployment. With the release of the "America’s AI Action Plan" and a trio of supporting executive orders, the Trump Administration is attempting to fuse deregulation, infrastructure expansion, and techno‑diplomacy into a coherent national AI strategy. The stakes are global: decisions on procurement, export, and research funding now ripple outward to shape transatlantic alignment, the balance of competition with China, and the innovation capacity of the broader democratic world.
The Trump Administration’s “America’s AI Action Plan” outlines three pillars: accelerating innovation, building AI infrastructure, and leading in international AI diplomacy and security.
It directs more than 90 federal actions, spanning deregulation, support for open-source and open-weight models, export promotion, and national-security evaluation of frontier models.
Three executive orders operationalize the plan: (1) “Preventing Woke AI,” which imposes a neutrality mandate on LLMs in federal procurement; (2) “Promoting Export of the American AI Stack”; and (3) orders on AI education and workforce development (EOs 14277 & 14278).
New developments highlight contradictions: the USAi platform launched with $1 contracts for major vendors, sweeping science research budget cuts threaten the innovation pipeline, and controversial AI chip exports to China raise strategic doubts.
1. USAi Platform Launch
GSA’s USAi platform went live, offering secure use of ChatGPT, Claude, Gemini, and Meta’s AI tools across government.
Contracts with OpenAI and Anthropic are priced at $1 in the first year, accelerating adoption but raising competitiveness concerns for smaller vendors.
2. Research Funding Cuts
The administration announced cuts across NIH, NSF, DARPA, and NASA, undermining the R&D base needed to sustain AI innovation.
Universities and researchers warn of a “hollowing out” of the pipeline that has traditionally underpinned U.S. technology leadership.
3. Chip Export Deal with China
Trump authorized Nvidia and AMD to resume AI chip exports to China under a new revenue-sharing arrangement that remits 15% of China sales revenue to the U.S. government.
Critics argue this undermines export-control credibility, creates constitutional risks, and weakens the U.S. strategic hand.
Procurement as Policy Lever: The neutrality mandate in “Preventing Woke AI” will influence vendor development strategies. OMB’s ability to implement technically feasible, legally defensible standards is the hinge point.
Techno-Statecraft in Export Policy: Pairing export promotion with new controls was designed to lock allies into the U.S. stack. The chip-export deal with China risks unraveling this logic and alienating allies.
Open-Source vs. Security: Open-weight model promotion accelerates competition but complicates security oversight. The evaluation mission of the Center for AI Standards and Innovation (CAISI) will face immediate stress testing.
Research Undercuts: Cuts to core science agencies directly undermine the stated goal of innovation leadership. Long-term risks may outweigh near-term fiscal savings.
Biosecurity Shift: Mandatory nucleic-acid screening for synthesis providers marks a significant policy turn from voluntary to compulsory safeguards.
Political Risk Surface: Civil liberties and research groups are mobilizing against procurement politicization. The China chip deal risks domestic legal challenge and reputational fallout abroad.
The EU’s AI Act is phasing in across 2025–2026, with general‑purpose AI (GPAI) transparency and governance obligations applying in 2025 and high‑risk obligations following in 2026. The European AI Office is coordinating enforcement and guidance.
The EU Data Act becomes applicable in September 2025, tightening rules on access to and sharing of industrial/IoT data across borders.
The EU Chips Act targets a substantial increase in Europe’s share of advanced semiconductors by 2030, while national export controls (e.g., in the Netherlands) continue to constrain advanced tool sales to China.
Regulatory Divergence vs. Convergence: The U.S. procurement-driven “neutrality” push diverges from the EU’s fundamental-rights and risk-based approach. Multinationals operating on both sides will need dual compliance roadmaps (OMB procurement tests vs. EU AI Act conformity assessments, GPAI transparency, and high‑risk obligations).
Data & Cloud Sovereignty: USAi’s rapid adoption may tilt U.S. agencies toward specific U.S. model stacks. EU public‑sector adoption will be shaped by AI Act obligations, GDPR, the Data Act’s data‑access rules, and national cloud‑sovereignty requirements, favoring EU‑hosted or EU‑compliant deployments.
Compute & Infrastructure: Europe’s sovereign compute push (EuroHPC exascale) helps close capability gaps, but power‑price volatility and grid constraints may slow data‑center build‑out relative to the U.S.
Export‑Control Geometry: If Washington loosens practical chip flows to China while Europe tightens tool exports, EU firms (e.g., semiconductor equipment makers) face competitive and diplomatic cross‑pressures.
Transatlantic Standards: U.S. national-security evaluations (CAISI) and EU notified-body conformity assessments will shape parallel assurance ecosystems; opportunities exist for mutual recognition around safety benchmarks and biosecurity screening norms.
Compliance Burden for SMEs: Early GPAI and high‑risk obligations could strain mid‑market providers and public agencies without sufficient guidance or tooling.
Vendor Lock‑In & Fragmentation: Divergent U.S./EU rules risk bifurcated model offerings, complicating cross‑border workflows and increasing switching costs.
Supply‑Chain Exposure: Continued restrictions on advanced lithography/DUV tools to China could trigger retaliatory measures or lost sales, even as U.S. chip vendors re‑enter the Chinese market under new terms.
Power & Cost Headwinds: Rising electricity demand and grid constraints raise the total cost of ownership (TCO) for AI infrastructure, potentially slowing European adoption compared with the U.S.
First‑Mover Governance Advantage: Clear, staged obligations (AI Act + Data Act) can anchor trusted‑AI exports and certification services.
Sovereign Compute & Open-Weight Models: Leveraging EuroHPC capacity and open-weight ecosystems can reduce dependency on a narrow set of U.S. vendors and enable sector-specific fine-tuning under EU rules.
Transatlantic Alignment Windows: Engage on shared evaluation standards (biosecurity, critical‑infrastructure misuse, cybersecurity) to reduce duplicative testing and accelerate safe deployment.
What to watch:
OMB guidance on AI procurement neutrality (due November 2025).
Commerce Department calls for export consortia and the first “priority AI export packages.”
USAi adoption metrics: which agencies pilot, vendor participation, and system security.
CAISI’s first public model evaluations.
Export-control updates on semiconductor manufacturing subsystems.
Research fallout: university partnerships, grants, and workforce impact.
Legal and political challenges to chip export arrangements.
EU response: alignment or divergence from U.S. approach, especially around AI Act enforcement and semiconductor strategy.
Opportunities:
Rapid AI adoption across federal agencies (via USAi).
New workforce pipelines from EOs 14277 & 14278.
Permitting reforms for data centers and fabs.
Potential for the EU to capture displaced U.S. research talent.
Risks:
Procurement politicization and legal challenges.
Strategic incoherence between export promotion and China chip deals.
Friction with allies on export frameworks.
Long-term erosion of the U.S. innovation ecosystem.
Transatlantic regulatory fragmentation and weakened EU autonomy.
The Trump Administration’s AI Action Plan signals a decisive bid to define the rules of engagement in the global AI race. Its mix of deregulation, procurement mandates, and export strategy aims to consolidate U.S. leadership, but contradictions—such as science budget cuts and chip deals with China—undermine the coherence of the vision. For Europe and allies, the coming months will test whether this approach fosters transatlantic alignment or accelerates fragmentation. The outcome will shape not only the future of U.S. innovation but also the trajectory of global governance in artificial intelligence.
Prepared by:
ISRS Strategic Advisory & Risk Analysis Unit
Geneva, Switzerland
About ISRS
The Institute for Strategic Risk and Security (ISRS) is an independent, non-profit organization focused on global risk and security.
Copyright (c) 2025, Institute for Strategic Risk and Security