AMD Details High Bandwidth Memory Stack
AMD today provided more details about the High Bandwidth Memory (HBM) technology that will be featured in the company's upcoming GPUs and is expected to replace GDDR. HBM involves stacking SK Hynix DRAM dies on top of each other and locating the stack next to or on top of a processor; the close proximity allows for a very wide, high bandwidth memory bus.
AMD claims the approach will deliver more than 100 GBytes/s of memory bandwidth per stack, up from 28 GBytes/s for the external GDDR5 DRAMs on today's boards. In addition, HBM should more than triple memory performance per watt compared to GDDR5, while cutting memory power consumption by more than 50 percent.
The GPU die and DRAM stack will sit side-by-side on a silicon interposer in a '2.5-D stack,' a technique pioneered in FPGAs by Xilinx.
Although the HBM stack runs at a slower clock rate than GDDR5 chips (500 MHz compared to 1,750 MHz), the HBM chips sit on a 1,024-bit link compared to a 32-bit interface for GDDR5. The HBM stack also runs at a lower voltage than GDDR5 (1.3 V versus 1.5 V).
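A back-of-the-envelope check shows how the wide, slow link wins. The data-rate multipliers below (double data rate for HBM, an effective four transfers per clock for GDDR5) are assumptions not spelled out in AMD's briefing, but they reproduce the bandwidth figures above:

```python
# Rough peak-bandwidth arithmetic for one HBM stack versus one GDDR5 chip.
# The transfers-per-clock multipliers are assumptions, not AMD figures.

def bandwidth_gb_per_s(clock_mhz, transfers_per_clock, bus_width_bits):
    """Peak bandwidth in GBytes/s: clock x transfers x bus width / 8 bits."""
    return clock_mhz * 1e6 * transfers_per_clock * bus_width_bits / 8 / 1e9

hbm = bandwidth_gb_per_s(500, 2, 1024)   # 500 MHz, double data rate, 1,024-bit link
gddr5 = bandwidth_gb_per_s(1750, 4, 32)  # 1,750 MHz, quad-pumped, 32-bit interface

print(f"HBM stack:  {hbm:.0f} GB/s")    # -> 128 GB/s, consistent with '>100 GB/s'
print(f"GDDR5 chip: {gddr5:.0f} GB/s")  # -> 28 GB/s, matching the figure above
```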
Besides the higher bandwidth per watt (35 GBytes/s per watt, more than triple GDDR5's figure), the HBM stack fits into a 35 mm² footprint, 94 percent smaller than the area of the GDDR5 chips required to deliver the same capacity. AMD estimates that a single HBM-equipped GPU package would measure less than 70 x 70 mm (4,900 mm²), versus 110 x 90 mm (9,900 mm²) for the AMD R9 290X.
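Working those area numbers backwards (a quick derivation, not additional figures from AMD): the 94 percent reduction implies the equivalent GDDR5 chips occupy roughly 580 mm², and the package estimate amounts to about half the board area of the R9 290X:

```python
# Footprint arithmetic from AMD's area claims. The package sizes are
# quoted in the article; the implied GDDR5 area is derived, not quoted.

hbm_area_mm2 = 35.0
reduction = 0.94                              # HBM is 94% smaller per unit capacity
implied_gddr5_area = hbm_area_mm2 / (1 - reduction)
print(f"Implied GDDR5 area: {implied_gddr5_area:.0f} mm^2")  # ~583 mm^2

hbm_package = 70 * 70                         # <4,900 mm^2 estimated HBM package
r9_290x_package = 110 * 90                    # 9,900 mm^2 for the R9 290X
print(f"Package shrink: {1 - hbm_package / r9_290x_package:.0%}")  # ~51% smaller
```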
AMD hasn't announced which GPU chip will use the technology or when it will ship.
Rival Nvidia previously announced work with SK Hynix competitor Micron on a next-generation memory stack called the Hybrid Memory Cube (HMC). Nvidia said its GPU stack will be made using a TSMC process called CoWoS (chip-on-wafer-on-substrate). Nvidia has not said which of its graphics chips will use HMC or when such parts will be ready, but it appears to be at least a year behind AMD on stacked DRAM.