The landscape of artificial intelligence (AI) and its associated infrastructure is undergoing significant transformation, as highlighted by recent insights from industry leaders. The demand for larger, more efficient data centres to accommodate the exponential growth of AI technology is reshaping the physical and operational frameworks of business campuses and data management practices.
Gary Smith, the Chief Executive Officer of Ciena, a company known for its fibre-optic networking equipment tailored for cloud computing vendors, provided a glimpse into the size and scope of modern data centres in an interview with The Technology Letter last week. “Some of these large data centres are just mind-blowingly large, they are enormous,” Smith remarked, revealing that some facilities extend over two kilometres in length—equating to more than 1.24 miles. This expansion trend is not merely horizontal; many of these data facilities are multi-storey, compounding their spatial impact.
The physical increase in data centre size presents considerable challenges, particularly for corporate campuses that must manage increasingly dense configurations of graphics processing unit (GPU) clusters. “These campuses are getting bigger and longer,” Smith noted, pointing out that the traditional distinctions between wide-area networks and internal data centre operations are becoming blurred. As these expansive campuses develop, they place a growing burden on the direct-connect technologies needed to carry traffic between GPUs, raising concerns over network efficiency and performance.
The implications of these changes are substantial, as Thomas Graham, co-founder of the chip startup Lightmatter, indicated during a Bloomberg Intelligence conference. Graham noted that at least a dozen AI data centres currently planned or under construction will each require a gigawatt of power to operate: “just for context, New York City pulls five gigawatts of power on an average day, so, multiple NYCs.” He projected that, by 2026, total global power demand for AI processing would reach around 40 gigawatts, equivalent to the power needs of eight New York Cities.
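Those comparisons are simple to sanity-check. The short Python sketch below restates the arithmetic using only the figures as reported from Graham's remarks (a dozen sites at roughly one gigawatt each, five gigawatts for New York City, and a 40-gigawatt projection for 2026); it is an illustrative calculation, not data from either company.

    # Back-of-the-envelope check using the figures quoted above (assumptions, not measurements)
    GW_PER_PLANNED_SITE = 1.0          # ~1 GW per planned AI data centre, as reported
    PLANNED_SITES = 12                 # "at least a dozen"
    NYC_AVERAGE_DRAW_GW = 5.0          # New York City's average daily draw, as reported
    PROJECTED_AI_DEMAND_2026_GW = 40.0 # projected global AI power demand by 2026

    planned_total_gw = GW_PER_PLANNED_SITE * PLANNED_SITES
    print(f"Planned sites alone: ~{planned_total_gw:.0f} GW, "
          f"about {planned_total_gw / NYC_AVERAGE_DRAW_GW:.1f} New York Cities")
    print(f"2026 projection: {PROJECTED_AI_DEMAND_2026_GW:.0f} GW, "
          f"about {PROJECTED_AI_DEMAND_2026_GW / NYC_AVERAGE_DRAW_GW:.0f} New York Cities")

On those assumptions, the planned sites alone amount to roughly two and a half New York Cities of demand, and the 2026 projection works out to the eight NYC-equivalents Graham cited.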
Smith highlighted that emerging technologies will need to adapt to these new realities. The growing distances between GPUs are pushing fibre-optic technologies, traditionally reserved for long-distance telecommunications, into applications inside cloud data centres. “Given the speed of the GPUs, and the distances that are now going on in these data centres,” Smith suggested, “we think there’s an intersect point for that [fibre optics] technology, and that’s what we’re focused on.”
These developments indicate a rapidly evolving relationship between AI technologies and the infrastructure that supports them, as companies prepare for the demands of an increasingly digital and data-driven future. The expansion of data centre capabilities reflects broader trends in AI adoption and automation, signalling significant changes in business practices and technological requirements across the industry.
Source: Noah Wire Services