NVIDIA Brings CUDA to Arm, Unveils DGX SuperPOD Supercomputer Aimed at Self-Driving Cars
NVIDIA unveiled the DGX SuperPOD, the world’s 22nd fastest supercomputer, and announced its support for Arm CPUs, offering a new path to build AI-enabled exascale supercomputers.
DGX SuperPOD
The NVIDIA DGX SuperPOD supercomputer provides AI infrastructure that meets the massive demands of the company’s autonomous-vehicle deployment program.
NVIDIA says it built the system in three weeks with 96 NVIDIA DGX-2H supercomputers and Mellanox interconnect technology. Delivering 9.4 petaflops of processing capability, it has the muscle for training the vast number of deep neural networks required for safe self-driving vehicles.
Customers can buy the system in whole or in part from any DGX-2 partner, based on the DGX SuperPOD design.
A single data-collection vehicle generates 1 terabyte of data per hour. Multiply that by years of driving over an entire fleet, and you quickly get to petabytes of data. That data is used to train algorithms on the rules of the road — and to find potential failures in the deep neural networks operating in the vehicle, which are then re-trained in a continuous loop.
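As a rough illustration of that scale, here is a back-of-the-envelope calculation in Python. The fleet size, daily driving hours and collection period are assumptions chosen only for illustration, not NVIDIA figures:

    # Back-of-the-envelope estimate of fleet data volume (illustrative numbers).
    tb_per_vehicle_hour = 1          # from the article: ~1 TB per vehicle per hour
    vehicles = 100                   # assumed fleet size
    hours_per_day = 8                # assumed daily driving time per vehicle
    days = 365 * 2                   # assumed two years of collection

    total_tb = tb_per_vehicle_hour * vehicles * hours_per_day * days
    print(f"~{total_tb:,} TB, roughly {total_tb / 1024:.0f} PB of raw driving data")

Even with these modest assumptions, the total lands well into the hundreds of petabytes.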
Powered by 1,536 NVIDIA V100 Tensor Core GPUs interconnected with NVIDIA NVSwitch and Mellanox network fabric, the DGX SuperPOD can tackle data with peerless performance for a supercomputer its size.
The system is working around the clock, optimizing autonomous driving software and retraining neural networks at a much faster turnaround time than previously possible.
For example, the DGX SuperPOD hardware and software platform takes less than two minutes to train ResNet-50. When this AI model came out in 2015, it took 25 days to train on the then state-of-the-art system, a single NVIDIA K80 GPU. DGX SuperPOD delivers results that are 18,000x faster.
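The speedup follows directly from the quoted times: 25 days is 36,000 minutes, which divided by a two-minute run gives 18,000. A quick sanity check:

    # Sanity check on the quoted 18,000x speedup figure.
    k80_minutes = 25 * 24 * 60     # 25 days on a single K80, in minutes
    superpod_minutes = 2           # under two minutes on DGX SuperPOD
    print(k80_minutes / superpod_minutes)   # -> 18000.0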
While other TOP500 systems with similar performance levels are built from thousands of servers, DGX SuperPOD takes a fraction of the space, roughly 400x smaller than its ranked neighbors.
NVIDIA DGX systems have already been adopted by organizations ranging from automakers like BMW, Continental, Ford and Zenuity to enterprises including Facebook, Microsoft and Fujifilm, as well as research leaders like Riken and U.S. Department of Energy national labs.
The DGX SuperPOD isn’t the only in-house NVIDIA system to appear on the TOP500 list of the world’s fastest supercomputers.
NVIDIA’s SATURNV system was the first, debuting in 2016 at the top of the Green500 list, which recognizes the most energy-efficient systems in the world, and 28th on the TOP500.
Since then, the SATURNV Volta (powered by NVIDIA DGX-1 systems) and DGX-2H POD have also been recognized for their very high performance levels and efficient power consumption.
NVIDIA GPU-powered systems account for 22 of the top 25 supercomputers on the latest Green500 list.
Bringing CUDA to Arm
At the International Supercomputing Conference, NVIDIA also announced its support for Arm CPUs, providing the high performance computing industry a new path to build energy-efficient, AI-enabled exascale supercomputers.
NVIDIA is making available to the Arm ecosystem its full stack of AI and HPC software — which accelerates more than 600 HPC applications and all AI frameworks — by year’s end. The stack includes all NVIDIA CUDA-X AI and HPC libraries, GPU-accelerated AI frameworks and software development tools such as PGI compilers with OpenACC support and profilers.
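As an illustrative sketch, the device check below uses PyTorch only as an example of a GPU-accelerated framework (it is not named in the announcement); code like this that runs on an x86 host today would be expected to run unchanged on an Arm host once the stack is available there:

    # Minimal sketch: a GPU-accelerated framework querying the CUDA devices it
    # can see. PyTorch is used here purely as an example framework.
    import torch

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            print(i, torch.cuda.get_device_name(i))
    else:
        print("No CUDA-capable GPU visible to this framework build.")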
Once stack optimization is complete, NVIDIA will accelerate all major CPU architectures, including x86, POWER and Arm.
NVIDIA GPU-powered supercomputers are capable of offloading heavy processing jobs to more energy-efficient, parallel-processing CUDA GPUs. The company is also collaborating with Mellanox to optimize processing across entire supercomputing clusters. In addition, NVIDIA’s invention of SXM 3D packaging and NVIDIA NVLink interconnect technology allows for extremely dense scale-up nodes.
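As an illustrative sketch of what offloading heavy work to a CUDA GPU looks like in practice (CuPy is used here only as an example library, not something named in the announcement):

    # Illustrative sketch: offload a large matrix multiplication from the CPU
    # to a CUDA GPU using CuPy.
    import numpy as np
    import cupy as cp

    a_host = np.random.rand(4096, 4096).astype(np.float32)   # data on the CPU
    a_gpu = cp.asarray(a_host)                                # copy to GPU memory
    b_gpu = a_gpu @ a_gpu                                     # heavy work runs on the GPU
    result = cp.asnumpy(b_gpu)                                # copy the result back
    print(result.shape)

The pattern is the same regardless of the host CPU: the CPU orchestrates data movement while the parallel computation runs on the GPU.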