Aperia

Artificial Intelligence Data Centers


Unlike traditional data centers that focus primarily on CPU compute, the AI revolution is driving a shift towards Graphics Processing Units (GPUs) for accelerated AI workloads. This transition has significantly increased power consumption per rack, with typical ranges now reaching 20 to 40 kilowatts and projections pointing to even higher demands in the near future, potentially surpassing 60 kilowatts per rack. AI workloads demand specialized hardware deployed in high-density configurations with robust power delivery and efficient cooling to ensure optimal performance. Consequently, data centers are upgrading their infrastructure and, in some cases, constructing entirely new facilities to meet the demands of an AI-ready environment. The equipment itself must also be AI-ready. For instance, servers like Supermicro’s GPU SuperServer SYS-821GE-TNHR come in larger form factors, spanning 7 to 8 rack units and weighing over 100 kg. Installing such servers requires specialized tools such as server lifts and proper transportation equipment like flatbed steel platform trolleys to ensure safety and efficiency.
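To see where rack-power figures like these come from, here is a minimal back-of-the-envelope sketch. The server draw, servers-per-rack count, and PUE value are illustrative assumptions for a multi-GPU server of this class, not vendor specifications:

```python
# Rough per-rack power estimate for an AI rack.
# All values below are assumptions for illustration, not vendor specs.
SERVER_POWER_KW = 10.0   # assumed draw of one 8-GPU server under full load
SERVERS_PER_RACK = 4     # assumed: limited by power/cooling, not rack units
PUE = 1.3                # assumed power usage effectiveness (cooling/distribution overhead)

rack_power_kw = SERVER_POWER_KW * SERVERS_PER_RACK
facility_kw = rack_power_kw * PUE

print(f"Estimated IT load per rack: {rack_power_kw:.0f} kW")
print(f"Estimated facility draw per rack (incl. overhead): {facility_kw:.0f} kW")
```

Even with only four such servers, the estimated IT load already sits at the top of the 20 to 40 kilowatt range quoted above, which is why power and cooling, rather than physical rack space, tend to cap server density in AI deployments.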

AI: In the pipeline


Ever wonder how AI is made? In this article, let’s look at what goes into creating an AI. There are three main phases in the AI pipeline. Data manage…

Why GPU for Accelerated Computing


Before we get into why GPUs are needed for accelerated computing, let us first understand how a CPU (Central Processing Unit) works. A CPU is the brain of the computer….

Intro to AI


AI, machine learning, and deep learning all work hand in hand as part of a new era in technology, aiding and assisting many industries such as Automotive, Medical, Re…