GIGABYTE Introduces Direct Liquid Cooled Servers Supercharged by NVIDIA for Both Baseboard Accelerators and CPUs
GIGABYTE Technology introduced two new liquid-cooled HPC and AI training servers, the G262-ZL0 and G492-ZL2, which push the NVIDIA HGX™ A100 accelerators and AMD EPYC™ 7003 processors to the limit with enterprise-grade liquid cooling. To prevent overheating and server downtime in compute-dense data centers, GIGABYTE worked with CoolIT Systems to develop a thermal solution that uses direct liquid cooling to balance optimal performance, high availability, and efficient cooling. For innovators and researchers in HPC, AI, and data analytics who demand a high level of CPU and GPU compute, the new servers are built around the top-tier AMD EPYC 7003 processor and the NVIDIA HGX A100 80GB GPU baseboard. Combining components designed for performance and efficiency delivers faster insights and results, along with strong value and lower TCO.
The choice of the NVIDIA HGX A100 platform for the new GIGABYTE servers is significant: NVIDIA Magnum IO™ GPUDirect technologies deliver faster throughput while offloading work from the CPU, yielding notable performance gains. The HGX platform supports NVIDIA GPUDirect RDMA for direct data exchange between GPUs and third-party devices such as NICs or storage adapters. It also supports GPUDirect Storage, which creates a direct data path to move data from storage to GPU memory while bypassing the CPU, resulting in higher bandwidth and lower latency. For high-speed interconnects, the four-GPU NVIDIA A100 server incorporates NVIDIA NVLink®, while the eight-GPU NVIDIA A100 server uses NVSwitch™ and NVLink to enable 600GB/s GPU peer-to-peer communication.
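The 600GB/s figure follows from the A100's third-generation NVLink design: each GPU carries 12 links at 50GB/s of bidirectional bandwidth apiece. A minimal sketch of the arithmetic (link count and per-link rate are from NVIDIA's published A100 specifications, not the announcement itself):

```python
# Third-generation NVLink on the NVIDIA A100:
# 12 links per GPU, each providing 50 GB/s of bidirectional bandwidth.
NVLINK_LINKS_PER_GPU = 12
GB_S_PER_LINK = 50

# Aggregate GPU peer-to-peer bandwidth quoted for the 8-GPU HGX system.
total_gpu_bandwidth = NVLINK_LINKS_PER_GPU * GB_S_PER_LINK
print(f"Aggregate NVLink bandwidth per GPU: {total_gpu_bandwidth} GB/s")  # 600 GB/s
```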
The G262-ZL0 is a 2U GPU-centric server supporting the NVIDIA HGX A100 4-GPU baseboard, while its bigger sibling, the G492-ZL2, is a 4U GPU-centric server with the NVIDIA HGX A100 8-GPU baseboard. These new models extend the existing G262 and G492 lines, which use conventional heatsinks and high-airflow fans, with direct-liquid-cooling options. Notably, the new servers isolate the GPU baseboard from the other components, so the accelerators are cooled by liquid coolant to maintain peak performance; the other chamber houses the CPUs, RAM, storage, and expansion slots. The dual CPU sockets in these servers are also liquid cooled. Beyond processing power, the servers offer multiple 2.5" U.2 bays that support PCIe 4.0 x4 lanes and multiple PCIe slots for faster networking with a SmartNIC such as the NVIDIA ConnectX®-7 for four ports of connectivity and up to 400Gb/s of throughput.
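For context on those U.2 bays, a PCIe 4.0 lane runs at 16 GT/s with 128b/130b encoding, so an x4 link tops out just under 8 GB/s per direction. A quick sketch of that calculation (the rates come from the PCIe 4.0 specification, not GIGABYTE's materials):

```python
def pcie_bandwidth_gb_s(transfer_rate_gt_s: float, lanes: int) -> float:
    """Usable per-direction bandwidth of a PCIe link in GB/s.

    PCIe 3.0 and later use 128b/130b encoding, so 128 of every
    130 bits on the wire carry payload; divide by 8 to get bytes.
    """
    return transfer_rate_gt_s * lanes * (128 / 130) / 8

# PCIe 4.0 runs each lane at 16 GT/s; a U.2 bay uses 4 lanes.
u2_bay = pcie_bandwidth_gb_s(16, 4)
print(f"PCIe 4.0 x4 U.2 bay: {u2_bay:.2f} GB/s per direction")  # ~7.88 GB/s
```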
Interested buyers can contact GIGABYTE directly to purchase either server; for questions about integrating the cooling infrastructure into the data center and which additional cooling components are needed, customers can contact CoolIT Systems. CoolIT Systems has approved service providers worldwide to handle planning, installation, and maintenance, and offers Cooling Distribution Units (CDUs), rack and chassis manifolds, rear-door heat exchangers, and secondary fluid networks (SFNs) to scale from a single rack to a complete data center using facility water.
Remote and Multiple Server Management:
As part of its value proposition, GIGABYTE provides the GIGABYTE Management Console (GMC), a web browser-based platform for BMC server management. Additionally, GIGABYTE Server Management (GSM) software, available for download on product pages, can monitor and manage multiple servers without an additional license fee. GMC and GSM offer great value while reducing TCO and customer maintenance costs.
Learn more about GIGABYTE servers: https://www.gigabyte.com/Enterprise
For further enquiries or assistance: server.grp@gigabyte.com
Follow GIGABYTE on Facebook: facebook.com/gigabyteserver
Follow GIGABYTE on Twitter: twitter.com/GIGABYTEServer