Aperia

Artificial Intelligence Data Centers

03 April 2024


The Artificial Intelligence (AI) revolution is driving data centers away from the CPU-centric compute of traditional facilities and towards Graphics Processing Units (GPUs) for accelerated AI workloads. This transition has sharply increased power consumption per rack, with typical densities now ranging from 20 to 40 kilowatts and projections pointing to even higher demands in the near future, potentially surpassing 60 kilowatts per rack.
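As a rough illustration of how these figures add up, the short Python sketch below estimates total rack draw from an assumed server count and per-server wattage; the numbers are illustrative placeholders, not measurements for any specific product.

    # Rough sketch: estimating rack power draw for a GPU-dense deployment.
    # Server count and per-server wattage are assumed, illustrative values.

    def rack_power_kw(servers_per_rack: int, watts_per_server: float) -> float:
        """Total rack power in kilowatts for a given server count and per-server draw."""
        return servers_per_rack * watts_per_server / 1000

    # Four 8-GPU servers per rack, each drawing roughly 10 kW under load (assumed).
    print(rack_power_kw(servers_per_rack=4, watts_per_server=10_000))  # -> 40.0 kW

At four such servers per rack, this assumed configuration already sits at the top of today's typical 20 to 40 kilowatt range, which is why projections point beyond 60 kilowatts.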


AI workloads demand specialized hardware deployed in high-density configurations, backed by robust power delivery and efficient cooling to ensure optimal performance. Consequently, data centers are upgrading their infrastructure, and in some cases constructing entirely new facilities, to meet the demands of an AI-ready environment.
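To put the cooling side in perspective, the sketch below converts a rack's IT load into cooling capacity using standard conversion factors (1 kW ≈ 3,412 BTU/hr; 1 ton of refrigeration = 12,000 BTU/hr); the 40 kilowatt load is an assumed figure taken from the ranges above.

    # Rough sketch: translating rack IT load into a cooling requirement.
    # Conversion factors: 1 kW ≈ 3412 BTU/hr; 1 ton of refrigeration = 12,000 BTU/hr.

    def cooling_load(it_load_kw: float) -> tuple[float, float]:
        """Return the cooling requirement as (BTU/hr, tons of refrigeration)."""
        btu_per_hr = it_load_kw * 3412
        return btu_per_hr, btu_per_hr / 12_000

    btu, tons = cooling_load(40)  # assumed 40 kW AI rack
    print(f"{btu:,.0f} BTU/hr ≈ {tons:.1f} tons of refrigeration")

A single AI rack at that assumed load needs roughly 136,000 BTU/hr of cooling, about eleven tons of refrigeration, which is the scale of upgrade driving the infrastructure changes described above.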


Furthermore, the equipment itself must also be AI-ready. For instance, servers such as Supermicro’s GPU SuperServer SYS-821GE-TNHR come in larger form factors, spanning 7 to 8 rack units and weighing over 100 kg. Installing such servers requires specialized tools, such as server lifts, and proper transport equipment, like flatbed steel platform trolleys, to ensure safety and efficiency.

Supermicro’s GPU SuperServer SYS-821GE-TNHR
