Key Points
- Nvidia invests $2 billion each in Lumentum and Coherent to advance optics for AI data centers.
- These partnerships aim to solve AI’s growing bandwidth and power bottlenecks through photonics.
- Emphasis on US manufacturing reduces supply chain risks amid geopolitical tensions.
Nvidia announced today that it is investing $2 billion each into Lumentum Holdings and Coherent, two major suppliers of optics technology, as part of a push to address the next major hurdle in AI infrastructure: the network fabric. While GPUs have dominated the headlines, the real bottleneck is shifting from compute to communication. Nvidia says advanced optical interconnects and photonics integration are now "foundational" for scaling AI clusters efficiently, reducing energy use and increasing speed.
The agreements are nonexclusive and include multi-billion-dollar purchase commitments, future rights to advanced laser components, and heavy investment in R&D operations. Critically, both companies will be expanding or building new manufacturing within the United States, a move Nvidia’s spokesperson framed as ensuring supply chain resilience and supporting domestic technology leadership in an era of intensifying global competition.
Brian Jackson, principal research director at Info-Tech Research Group, emphasized that Nvidia is betting on a major leap forward with photonics-based GPUs. While alternatives like Amazon and Google’s custom silicon are gaining traction due to their energy efficiency, Jackson believes Nvidia sees an opportunity to outpace competitors by embedding light-based data transfer directly into its chips. "If they can mass-manufacture a next-gen GPU that integrates photonics right into its silicon, they can solve two huge problems: power consumption and speed," he noted.
Sanchit Vir Gogia of Greyhound Research went further, arguing the move reveals a quiet but major industry admission: AI scaling is no longer a chip story. It’s a communication story. In large-scale AI clusters, thousands of high-speed links between accelerators create significant power, latency, and failure risks. Gogia said Nvidia is moving "upstream" to solve this—because even if you have unlimited GPUs, a sluggish, energy-hungry network chokes performance and ROI.
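Gogia's point about the fabric's power cost can be made concrete with a rough back-of-envelope sketch. All of the figures below (cluster size, links per GPU, link speed, and picojoules-per-bit efficiencies) are illustrative assumptions, not Nvidia, Lumentum, or Coherent specifications:

```python
# Illustrative back-of-envelope: power drawn by the optical fabric alone
# in a large AI cluster. Every number here is an assumption for illustration.

def interconnect_power_watts(num_gpus, links_per_gpu, gbps_per_link, pj_per_bit):
    """Rough steady-state power draw of the interconnect at full line rate."""
    total_links = num_gpus * links_per_gpu
    bits_per_second = total_links * gbps_per_link * 1e9
    return bits_per_second * pj_per_bit * 1e-12  # pJ/bit -> watts

# Assumed: 16,384 GPUs, 9 fabric links each, 400 Gb/s per link,
# ~15 pJ/bit for a conventional pluggable-transceiver path.
pluggable = interconnect_power_watts(16_384, 9, 400, 15)
# Assumed: co-packaged / silicon-integrated photonics at ~5 pJ/bit.
integrated = interconnect_power_watts(16_384, 9, 400, 5)

print(f"pluggable optics:     {pluggable / 1e6:.2f} MW")
print(f"integrated photonics: {integrated / 1e6:.2f} MW")
```

Under these assumed numbers, the fabric alone draws close to a megawatt, and cutting energy per bit by two-thirds saves hundreds of kilowatts continuously — which is why shaving picojoules per bit at this scale is a data-center-level concern rather than a component detail.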
He stressed that the push for domestic manufacturing is not just political branding. With semiconductor supply chains increasingly tied to export controls and national policy, reliability and preferential supply allocation during shortages could favor vendors closely aligned with US industrial strategy, a consideration that could extend to enterprises outside the United States when planning procurement.
For IT leaders, Gogia advises that AI infrastructure planning must now include the fabric as a board-level strategic concern. Budget models should anticipate interconnect density growth, redundancy, energy efficiency per bit, and vendor concentration risks. Contracts should demand supply allocation rights and upgrade pathways. Governance must shift from server-centric to system-centric thinking, with AI ROI models reflecting how network performance impacts GPU utilization. As he put it, this is no longer a "network detail"—it’s the artery through which future AI growth flows.
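Gogia's point that ROI models should reflect how network performance affects GPU utilization can be sketched with a simple, hypothetical cost model (the hourly rate and utilization figures are assumptions for illustration only):

```python
# Illustrative sketch: how fabric stalls dilute the return on GPU spend.
# A GPU idled by a congested network still costs the same per hour,
# so the cost per *useful* GPU-hour rises as utilization falls.

def effective_cost_per_gpu_hour(gpu_hour_cost, utilization):
    """Cost per productively used GPU-hour at a given utilization rate."""
    return gpu_hour_cost / utilization

# Assumed $3.00/GPU-hour list price.
well_fed = effective_cost_per_gpu_hour(3.00, 0.90)   # fast fabric
congested = effective_cost_per_gpu_hour(3.00, 0.55)  # network-bound fabric

print(f"fast fabric:      ${well_fed:.2f} per useful GPU-hour")
print(f"congested fabric: ${congested:.2f} per useful GPU-hour")
```

Under these assumptions, the same GPU effectively costs about 64% more per useful hour behind a congested fabric, which is the system-centric framing Gogia argues budget models should capture.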
Read the rest: Source Link
