Samsung Introduces New 'Flashbolt' HBM2E Memory Technology For Data Centers, Graphics Applications, and AI
Samsung Electronics announced its new High Bandwidth Memory (HBM2E) product at NVIDIA’s GPU Technology Conference (GTC) to deliver high DRAM performance levels for use in supercomputers, graphics systems, and artificial intelligence (AI).
The new solution, Flashbolt, is the first HBM2E to deliver a 3.2 gigabits-per-second (Gbps) data transfer speed per pin, which is 33 percent faster than the previous-generation HBM2. Flashbolt has a density of 16Gb per die, double the capacity of the previous generation. With these improvements, a single Samsung HBM2E package will offer a 410 gigabytes-per-second (GBps) data bandwidth and 16GB of memory.
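The quoted figures fit together arithmetically. As a sketch: the 410 GBps and 16GB numbers follow from the per-pin speed and per-die density if we assume the standard 1024-bit HBM interface and an eight-die stack, neither of which the article states explicitly.

```python
# Hedged sketch: how the quoted Flashbolt figures combine.
# INTERFACE_WIDTH and DIES_PER_STACK are assumptions (standard HBM2
# interface width and a typical 8-high stack), not from the article.

PIN_RATE_GBPS = 3.2      # per-pin transfer rate (from the article)
INTERFACE_WIDTH = 1024   # bits per HBM stack (standard HBM2 width, assumed)
DIES_PER_STACK = 8       # assumed 8-high stack
DIE_DENSITY_GBIT = 16    # gigabits per die (from the article)

# bits per second across the whole interface, converted to bytes
bandwidth_GBps = PIN_RATE_GBPS * INTERFACE_WIDTH / 8

# total gigabits in the stack, converted to gigabytes
capacity_GB = DIE_DENSITY_GBIT * DIES_PER_STACK / 8

print(f"Bandwidth per package: {bandwidth_GBps:.1f} GB/s")  # 409.6, quoted as 410
print(f"Capacity per package:  {capacity_GB:.0f} GB")       # 16 GB
```

The 33 percent speedup also checks out against the previous generation's 2.4 Gbps per pin (3.2 / 2.4 ≈ 1.33).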
Samsung is showcasing and discussing Flashbolt at GTC and will hold sessions to discuss memory technology applications in AI.
High Bandwidth Memory (HBM) is a high-performance RAM interface for 3D-stacked DRAM from Samsung, AMD, and SK Hynix. It is used in conjunction with high-performance graphics accelerators and network devices.
HBM achieves higher bandwidth while using less power in a substantially smaller form factor than DDR4 or GDDR5. This is achieved by stacking up to eight DRAM dies, including an optional base die with a memory controller, which are interconnected by through-silicon vias (TSVs) and microbumps.
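The bandwidth advantage comes from interface width rather than clock speed: an HBM stack runs its pins slower than GDDR5 but drives far more of them. A rough comparison, using typical published per-pin rates (assumed figures, not from the article):

```python
# Illustrative sketch: wide-and-slow (HBM2) vs narrow-and-fast (GDDR5).
# Per-pin rates below are typical published values, assumed for illustration.

def peak_bandwidth_GBps(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth = per-pin rate x bus width, converted bits -> bytes."""
    return pin_rate_gbps * bus_width_bits / 8

gddr5_chip = peak_bandwidth_GBps(8.0, 32)    # one GDDR5 chip: 8 Gbps x 32-bit
hbm2_stack = peak_bandwidth_GBps(2.4, 1024)  # one HBM2 stack: 2.4 Gbps x 1024-bit

print(f"GDDR5 chip: {gddr5_chip:.0f} GB/s")  # 32 GB/s
print(f"HBM2 stack: {hbm2_stack:.0f} GB/s")  # 307 GB/s
```

The slower per-pin rate is also what lets HBM spend less power per bit transferred.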
AI models use a significant amount of memory. Training a neural network can require gigabytes to tens of gigabytes of data, pushing capacity requirements beyond what conventional DDR comfortably offers. To improve a model's accuracy, data scientists use larger datasets, which again either lengthens training time or raises the memory requirements of the system. Because of the massively parallel matrix multiplications involved, and the size of the models and the number of coefficients they carry, external memories with high-bandwidth access are required. New semiconductor interface IP such as High Bandwidth Memory (HBM2) and its derivatives (HBM2E) is seeing rapid adoption to accommodate these needs.
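To make the capacity pressure concrete, here is a back-of-the-envelope sketch (the parameter count, data type, and optimizer overhead are illustrative assumptions, not figures from the article): training a dense model with a common optimizer like Adam keeps roughly four copies of each parameter resident — weights, gradients, and two moment buffers.

```python
# Hypothetical estimate of training-time memory, ignoring activations.
# The 4x multiplier (weights + gradients + two Adam moment buffers)
# and the parameter counts are illustrative assumptions.

def training_footprint_GB(num_params: float, bytes_per_value: int = 4) -> float:
    """Approximate resident memory for dense training with Adam."""
    copies = 4  # weights, gradients, Adam first and second moments
    return num_params * bytes_per_value * copies / 1e9

print(f"{training_footprint_GB(1e8):.1f} GB for 100M params")  # 1.6 GB
print(f"{training_footprint_GB(1e9):.1f} GB for 1B params")    # 16.0 GB
```

Under these assumptions, a billion-parameter model already fills a single 16GB HBM2E package before activations are counted, which is why accelerator designs pair several stacks per device.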