AI News

Published on Oct 24, 2025, 2:03 AM · ravenith

Data Center Redesigns Essential as AI Rack Weights Increase

The future of AI infrastructure is set to be shaped by liquid cooling, increased rack densities, and significant collaboration between chipmakers and data center operators, according to senior executives from Nvidia, Schneider Electric, and Sweden’s EcoDataCentre at an industry forum this week.

Vladimir Pradanovic, Nvidia’s Head of Data Centre Readiness, highlighted at the Schneider Electric Innovation Summit that the next generation of AI platforms will elevate data-center power densities to unprecedented levels. He noted expectations to reach one megawatt per rack by 2027.

This rapid increase in performance and density presents substantial engineering and logistical challenges. The liquid-cooled GB200 racks weigh approximately 1.6 tonnes empty and more than 2 tonnes once filled with coolant, so transporting such hardware requires specialized freight solutions.

EcoDataCentre is positioning itself as one of Europe’s few purpose-built operators for high-performance computing. The company constructs its data-center structures from cross-laminated timber, significantly reducing carbon emissions. It also invests heavily in local suppliers and uses sustainable energy sources.

Schneider Electric emphasized the transformation in data-center design philosophy driven by the AI boom. Power distribution is evolving, with Nvidia and partners developing 800-volt direct-current systems to enhance efficiency and reduce cabling bulk.
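The case for higher-voltage distribution can be illustrated with a rough back-of-the-envelope sketch. For a fixed load, current scales inversely with bus voltage (I = P / V), which is what reduces conductor size and cabling bulk. The 1 MW load and the 54 V comparison bus below are illustrative assumptions, not figures from the summit:

```python
# Illustrative only: current needed to deliver rack power at different
# DC bus voltages (I = P / V). Higher voltage means lower current, hence
# thinner conductors and lower resistive (I^2 * R) losses.
# The 1 MW load and 54 V comparison bus are assumptions for the sketch.

def bus_current_amps(power_watts: float, voltage_volts: float) -> float:
    """Current drawn from a DC bus for a given load (ignores conversion losses)."""
    return power_watts / voltage_volts

rack_power = 1_000_000.0  # 1 MW per rack, the density cited for 2027

for voltage in (54.0, 800.0):
    amps = bus_current_amps(rack_power, voltage)
    print(f"{voltage:>6.0f} V bus -> {amps:>10,.0f} A")
```

At 800 V the same megawatt flows at roughly one fifteenth of the current a 54 V bus would require, which is the efficiency and cabling argument in miniature.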

The economic rationale is clear: a single day of downtime on a 4,000-GPU cluster can cost approximately $300,000 in lost revenue. Nvidia is applying large-language-model techniques to optimize AI infrastructure and keep every component performing reliably.
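The downtime figure can be sanity-checked with simple arithmetic. The per-GPU breakdown below is our derivation from the article's two numbers, not a figure quoted at the forum:

```python
# Sanity check: $300,000 per day of downtime across a 4,000-GPU cluster
# implies the per-GPU rates below. This breakdown is derived from the
# article's numbers; it is not a quoted figure.

daily_loss = 300_000.0  # USD lost per day of downtime (from the article)
gpu_count = 4_000       # cluster size (from the article)

per_gpu_day = daily_loss / gpu_count  # USD per GPU per day
per_gpu_hour = per_gpu_day / 24       # USD per GPU per hour

print(f"${per_gpu_day:.2f} per GPU per day")   # $75.00
print(f"${per_gpu_hour:.2f} per GPU per hour")
```

That works out to $75 per GPU per day, or a little over $3 per GPU-hour, which is in the range of commercial GPU rental pricing and makes the headline number plausible.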