Technology company Microsoft announced a new wave of purpose-built datacenters and infrastructure investments worldwide to accelerate the adoption of advanced AI and cloud services. The company unveiled its latest US AI datacenter, described as the largest and most advanced AI facility it has constructed to date. Alongside the Fairwater datacenter in Wisconsin, several additional Fairwater facilities are under development across other US locations.
Internationally, Microsoft revealed plans to establish a hyperscale AI datacenter in Narvik, Norway, in collaboration with a joint venture between nScale and Aker. In the United Kingdom, the company announced a partnership with nScale to build what is set to be the country's largest supercomputer, based in Loughton, to support services within the UK.
These AI datacenters represent multibillion-dollar capital projects equipped with hundreds of thousands of high-performance AI chips. They will integrate seamlessly with the Microsoft Cloud network, which spans more than 400 datacenters across 70 global regions. By interlinking these new AI facilities into a distributed system, Microsoft aims to multiply their combined efficiency and computing capacity, further expanding access to AI services on a global scale.
AI datacenters are specialized facilities designed for training and deploying large-scale AI models and applications. Microsoft’s AI datacenters support workloads such as OpenAI models, Microsoft AI, Copilot features, and other advanced AI systems. The newly completed Fairwater datacenter in Wisconsin illustrates the scale of these projects, occupying 315 acres and comprising three large buildings with a total of 1.2 million square feet of space. Its construction required extensive infrastructure, including 46.6 miles of foundation piles, 26.5 million pounds of structural steel, 120 miles of underground medium-voltage cable, and 72.6 miles of mechanical piping.
Unlike traditional cloud datacenters, which typically run numerous smaller workloads like websites, email, or business applications, Microsoft’s new facility is engineered to operate as a massive AI supercomputer. Featuring a flat networking architecture interconnecting hundreds of thousands of NVIDIA GPUs, it is expected to deliver ten times the performance of today’s fastest supercomputer, enabling AI training and inference at an unprecedented scale.
Microsoft highlighted the role of purpose-built infrastructure in efficiently powering advanced AI models at the trillion-parameter scale. At the core of its AI datacenters are GPU accelerators integrated with CPUs, memory, and storage, organized into racks and connected through low-latency networking. While this setup appears as independent servers, at scale it operates as a single supercomputer capable of training massive models in parallel.
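The idea of many GPUs acting as one machine is the basis of data-parallel training: each device computes gradients on its own shard of data, and a collective all-reduce averages them so every device applies the same update. The toy sketch below illustrates the pattern with plain Python workers and a stand-in all-reduce; the model and function names are illustrative, not Microsoft's actual stack.

```python
# Toy sketch of data-parallel training: each "GPU" (here a plain Python
# worker) computes gradients on its own data shard, then an all-reduce
# averages the gradients so every worker applies the same update --
# making many devices behave like one large machine.
# Hypothetical toy model: fit w in y = w * x by least squares.

def local_gradient(w, shard):
    """Gradient of mean squared error on one worker's data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(values):
    """Stand-in for an NCCL-style all-reduce: average across workers."""
    return sum(values) / len(values)

def train(shards, w=0.0, lr=0.01, steps=100):
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # parallel in reality
        w -= lr * all_reduce_mean(grads)                # synchronized update
    return w

# Data generated from y = 3x, split across 4 simulated workers.
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]
print(round(train(shards), 3))  # prints 3.0
```

Because the averaged gradient equals the gradient over the full dataset, the synchronized workers converge exactly as a single machine would; at datacenter scale the same logic runs over NVLink and the network fabric instead of a Python loop.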
The Wisconsin facility runs a unified cluster of NVIDIA GB200 servers with millions of compute cores and exabytes of storage, enabling processing speeds of up to 865,000 tokens per second, which Microsoft says is the highest throughput of any cloud platform today. Each rack contains 72 Blackwell GPUs connected through NVLink, pooling 14 terabytes of memory that every GPU in the rack can address. Future datacenters in Norway and the UK will adopt similar clusters using NVIDIA's next-generation GB300 chips with expanded memory capacity.
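Reading the 14-terabyte figure as NVLink-pooled memory shared across the whole rack (the plausible interpretation, since no single GPU carries that much HBM), a quick back-of-envelope check gives each GPU's share. The numbers below come from the article's cited figures, not an official spec sheet.

```python
# Back-of-envelope check of the per-rack figures cited above
# (assumed values from the article, not an official spec sheet).
TB = 1e12

gpus_per_rack = 72             # NVIDIA GB200 NVL72-style rack
pooled_memory_bytes = 14 * TB  # NVLink-pooled memory per rack

per_gpu = pooled_memory_bytes / gpus_per_rack
print(f"{per_gpu / 1e9:.0f} GB per GPU")  # prints "194 GB per GPU"
```

Roughly 194 GB per GPU is consistent with the high-bandwidth memory attached to a Blackwell-class accelerator, which supports the per-rack reading of the pooled figure.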
To maintain efficiency at supercomputing scale, Microsoft deploys advanced networking that allows GPUs within and across racks to communicate at terabytes per second without congestion. Multiple pods of racks are then linked to function as one global-scale supercomputer. The Wisconsin datacenter uses a two-story configuration to shorten cable runs and minimize latency from physical distance, further improving connectivity.
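A quick calculation shows why physical distance matters enough to shape the building itself: signals in fiber or copper propagate at roughly two-thirds the speed of light, so every metre of cable adds around five nanoseconds each way. The figures below are generic physics, not Microsoft's measured numbers.

```python
# Rough sketch of propagation delay over cable runs: signals travel at
# about two-thirds the speed of light in fiber, so longer runs between
# racks add measurable latency. Generic physics, not measured figures.
C = 299_792_458      # speed of light in vacuum, m/s
V = 2 / 3 * C        # typical propagation speed in fiber, m/s

def one_way_latency_ns(metres):
    """One-way signal propagation delay over a cable run, in nanoseconds."""
    return metres / V * 1e9

for run in (10, 100, 300):  # sample cable lengths in metres
    print(f"{run:>4} m -> {one_way_latency_ns(run):6.1f} ns one way")
```

At ~50 ns per 10 m each way, shaving even tens of metres off inter-rack runs (for example, by stacking racks on two floors) meaningfully tightens the synchronization budget of a cluster that exchanges gradients millions of times per training run.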
Through this layered design, co-engineered with industry partners, Microsoft Azure has established what it describes as the world’s most powerful purpose-built AI supercomputer, designed to support frontier models and large-scale AI workloads.
Microsoft’s new AI datacenters form part of a globally interconnected Azure network, linked through a wide area system designed to function as a single, large-scale AI supercomputer. By distributing compute, storage, and networking resources across multiple regions, the infrastructure delivers greater scalability, resilience, and flexibility than a single facility could provide.
This shift required a complete redesign of the cloud stack, aligning hardware and software across silicon, servers, networks, and datacenters into a unified, purpose-built system. The Wisconsin datacenter plays a central role in this vision, combining advanced technology and investment while contributing to regional development. Integrated with other facilities worldwide, it supports a new phase of secure and adaptive cloud-based AI.
The post Microsoft Brings Fairwater GB200 Supercomputer Online, Delivering 10x Performance Over Top500 Systems appeared first on Metaverse Post.