The Stargate Project is a large-scale US compute infrastructure initiative, announced in 2024, that aims to build foundational computing capacity for artificial intelligence and machine learning applications. The project represents a multi-year investment in distributed data center infrastructure across multiple US locations, with deployment targets extending through the late 2020s.
The Stargate Project encompasses a coordinated buildout of compute infrastructure at seven surveyed sites across the United States. The initiative targets capacity expansion to 9+ gigawatts (GW) of power consumption by 2029, a scale approaching the peak electricity demand of New York City (approximately 14 GW) [1]. This represents one of the largest coordinated infrastructure development efforts in the computing industry, reflecting the substantial computational requirements of training and deploying advanced AI systems.
The project is framed as foundational infrastructure for what stakeholders describe as a “compute-powered economy,” indicating expectations that AI and machine learning applications will become increasingly central to economic activity [2]. The infrastructure buildout involves substantial capital investment and multi-year development timelines, suggesting long-term commitment to compute capacity expansion.
The Stargate Project's distributed architecture across seven surveyed sites suggests a strategy of geographic diversification for redundancy, resilience, and regional resource optimization. Large-scale data center development requires consideration of power supply availability, cooling capacity, fiber optic connectivity, and land availability. Reaching 9+ GW by 2029 implies annual infrastructure additions of roughly 1.5–2 GW per year, representing sustained capital commitment over the multi-year deployment window.
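The implied buildout rate can be checked with simple arithmetic. The sketch below assumes a roughly linear deployment over the 2025–2029 window; the window's start year is an assumption, since the article gives only the announcement year (2024) and the target year (2029).

```python
# Back-of-envelope buildout rate for the Stargate Project.
# The 9 GW target and 2029 deadline are from the article; the
# start year and linear deployment profile are assumptions.
TARGET_GW = 9.0
START_YEAR = 2025   # assumed first full deployment year
TARGET_YEAR = 2029

years = TARGET_YEAR - START_YEAR + 1      # 5 deployment years
rate_gw_per_year = TARGET_GW / years      # average capacity added per year
print(f"~{rate_gw_per_year:.1f} GW/year over {years} years")
```

With these assumptions the average comes to about 1.8 GW per year, consistent with the 1.5–2 GW range cited above; front- or back-loaded construction schedules would shift the per-year figure.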
Compute infrastructure of this scale typically incorporates GPU and TPU clusters optimized for neural network training and inference, with architectural considerations for distributed training across multiple facilities. The power consumption targets align with the electrical requirements of high-performance AI hardware deployment, including graphics processing units (GPUs) and application-specific integrated circuits (ASICs) used in machine learning workloads.
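To give the power target a hardware-level sense of scale, the following sketch estimates how many accelerators a 9 GW footprint could power. The per-accelerator draw (~1 kW, covering the board plus a share of host power) and the data-center PUE of ~1.3 are illustrative assumptions, not project specifications.

```python
# Rough accelerator-count estimate for a 9 GW data center footprint.
# Hardware figures below are assumptions for illustration only.
TOTAL_POWER_W = 9e9          # 9 GW target from the article
POWER_PER_ACCEL_W = 1_000    # assumed draw per accelerator (GPU/ASIC + host share)
PUE = 1.3                    # assumed power usage effectiveness (cooling, distribution)

it_power_w = TOTAL_POWER_W / PUE               # power left for IT load after overhead
accelerators = it_power_w / POWER_PER_ACCEL_W  # implied accelerator count
print(f"~{accelerators / 1e6:.1f} million accelerators")
```

Under these assumptions the footprint would support on the order of seven million accelerators; the real figure depends heavily on the hardware generation deployed and facility efficiency.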
The Stargate Project reflects industry recognition that advanced AI system development requires substantial, dedicated compute infrastructure. The scale of the initiative—comparable to major city infrastructure—indicates expectations that AI and machine learning will drive significant future economic demand for computing resources [3].
Such infrastructure initiatives typically involve coordination among technology companies, infrastructure providers, and power utilities. The multi-site deployment strategy suggests considerations for geographic distribution of computational load, latency optimization for different application types, and resilience against localized infrastructure failures. The public announcement of specific power targets indicates an effort to establish credibility regarding deployment timelines and capacity commitments.
As of April 2026, the project is reported to be “on track” for achieving the 9+ GW target by 2029 [4]. The multi-year development timeline extends beyond the current date, with substantial construction and deployment activities continuing through the remainder of the decade. Progress updates from the project would be expected to track facility openings, power capacity additions, and infrastructure deployment milestones.