Key points
- Microsoft is investing tens of billions of dollars in purpose-built datacenters and infrastructure to support the global adoption of cutting-edge AI workloads and cloud services.
- The company has introduced its newest US AI datacenter, Fairwater, in Wisconsin, the largest and most sophisticated AI factory it has built to date; it will be connected to other datacenters around the world through Azure’s global network.
- These AI datacenters will be powered by NVIDIA GPUs and are designed to deliver 10X the performance of today’s fastest supercomputer, enabling AI training and inference workloads at unprecedented scale.
Microsoft has announced a wave of investments in purpose-built datacenters and infrastructure around the world to support the global adoption of cutting-edge AI workloads and cloud services. The company has introduced its newest US AI datacenter, Fairwater, in Wisconsin, the largest and most sophisticated AI factory it has built to date. This datacenter will be connected to other datacenters around the world through Azure’s global network, effectively creating a giant AI supercomputer.
The Fairwater datacenter is a remarkable feat of engineering, covering 315 acres and housing three massive buildings with a combined 1.2 million square feet under their roofs. It is designed to deliver 10X the performance of today’s fastest supercomputer for AI training and inference workloads. The facility is powered by NVIDIA GPUs mounted on server boards alongside CPUs, memory, and storage, forming a tightly coupled cluster in which hundreds of thousands of accelerators can train a single model in parallel.
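To make the "tightly coupled cluster" idea concrete, here is a minimal sketch of data-parallel training in PyTorch, where each GPU holds a replica of the model and gradients are synchronized on every step. This is an illustrative example only; the toy model, synthetic dataset, and launch settings are assumptions for the sketch, not Microsoft’s actual training stack.

```python
# Minimal data-parallel training sketch (illustrative only, not Microsoft's stack).
# Launch with: torchrun --nproc_per_node=<gpus_per_node> ddp_sketch.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # One process per GPU; NCCL handles the GPU-to-GPU collectives.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Hypothetical toy model and synthetic data, stand-ins for a real workload.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    data = TensorDataset(torch.randn(4096, 1024), torch.randn(4096, 1024))
    # DistributedSampler shards the dataset so every rank sees a different slice.
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(2):
        sampler.set_epoch(epoch)
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradients are all-reduced across every rank here
            opt.step()
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The same pattern scales from a handful of GPUs to very large clusters; what changes is the interconnect and the number of ranks, not the structure of the training loop.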
Microsoft is also investing in other locations. In Narvik, Norway, the company plans to develop a new hyperscale AI datacenter with the nScale and Aker joint venture, and in Loughton, UK, it has partnered with nScale to build the UK’s largest supercomputer to support services in the UK. These investments underscore Microsoft’s commitment to supporting the growth of AI and cloud services around the world.
The company is also addressing the environmental impact of its datacenters by using closed-loop liquid cooling to reduce water waste and increase efficiency. Integrated pipes circulate cold liquid directly into the servers, extracting heat efficiently with zero water waste. Microsoft also uses liquid cooling to support AI workloads in many of its existing datacenters, accomplished with Heat Exchanger Units (HXUs) that operate with zero operational water use.
Microsoft’s Azure storage has been reengineered for the most demanding AI workloads, with the ability to sustain over 2 million read/write transactions per second. The company has also developed BlobFuse2, which delivers high-throughput, low-latency access for GPU node-local training, ensuring that compute resources are never idle and that massive AI training datasets are always available when needed.
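As an illustration of how training code consumes this kind of storage, here is a minimal sketch of a data loader reading from a BlobFuse2 mount, which exposes an Azure Blob Storage container as an ordinary local directory. The mount path, file layout, and config file name are assumptions for the example, not details from Microsoft’s announcement.

```python
# Illustrative only: reads training samples through a BlobFuse2 mount.
# The container would first be mounted with something like:
#   blobfuse2 mount /mnt/training-data --config-file=./blobfuse2.yaml
# The mount path and *.pt shard layout below are hypothetical examples.
from pathlib import Path

import torch
from torch.utils.data import DataLoader, Dataset


class BlobMountDataset(Dataset):
    """Loads pre-serialized tensors from a directory backed by Blob Storage."""

    def __init__(self, root: str):
        self.files = sorted(Path(root).glob("*.pt"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int):
        # Plain file I/O; BlobFuse2 streams the blob contents underneath.
        return torch.load(self.files[idx])


if __name__ == "__main__":
    dataset = BlobMountDataset("/mnt/training-data/shards")  # hypothetical mount
    # Multiple workers keep reads in flight so the GPUs are not left waiting.
    loader = DataLoader(dataset, batch_size=8, num_workers=4, pin_memory=True)
    for batch in loader:
        pass  # feed the batch to the training step
```

Because the mount behaves like a local filesystem, the same loader works unchanged whether the shards live on local disk or in a remote blob container.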
The company’s AI WAN is built to grow to AI-native bandwidth scales, enabling large-scale distributed training across multiple, geographically diverse Azure regions. This lets customers harness the power of a giant AI supercomputer with greater resiliency, scalability, and flexibility. As Microsoft continues to invest in its datacenter infrastructure, the company is poised to play a critical role in the future of AI, with a focus on real technology, real investment, and real community impact.
Read the rest: Source Link
You might also like: Why Choose Azure Managed Applications for Your Business & How to download Azure Data Studio.
Remember to like our Facebook page and follow our Twitter @WindowsMode for a chance to win a free Surface every month.