
SK Hynix adds HBM2 to its product stack, plans Q3 availability

When AMD and Nvidia announced their product refreshes this year, both companies chose to stick with GDDR5 or GDDR5X for their respective GPUs. For AMD, this was a straightforward business decision — High Bandwidth Memory (HBM) is too expensive for the $100–$240 segment that the company’s Polaris GPUs target, while Nvidia opted for GDDR5X since HBM2 wasn’t ready yet. A recent update from SK Hynix makes it clear that HBM2 is tiptoeing towards launch, with availability expected as soon as Q3 of this year.

According to its catalog, SK Hynix will offer a 4GB stack of HBM2 at both 1.6GT/s (GT = gigatransfers) and 2.0GT/s. Each stack will pack four 1GB layers, so a four-stack HBM2 interface in late 2016 or early 2017 will carry 16GB of HBM2 memory — 4x the amount of RAM that AMD was able to use on its Fury and Nano products in 2015.

(Image: SK Hynix HBM2 implementations)

Maximum memory bandwidth also gets a huge boost with HBM2. While Fury X topped out at 512GB/s, HBM2 is expected to double that, up to a massive 1,024GB/s of memory bandwidth. That much bandwidth will give modern GPUs plenty of room to spread their wings when top-end cards debut later this year — so much so that AMD and Nvidia should finally be able to deliver real 4K performance without compromising on eye candy or visual effects. Then again, it’s not clear if Nvidia will adopt HBM2 for consumer GPUs this product cycle. Its already-announced Titan X and GTX 1080 both use GDDR5X and don’t appear to have any problems with memory bandwidth bottlenecks. We know the company’s GP100 Pascal GPU will use HBM2, but that chip is slated for the HPC and deep learning markets, not the consumer space. AMD’s upcoming Vega, in contrast, is explicitly an HBM2-equipped consumer part and is expected to go toe-to-toe with the already-launched GTX 1080 and 1070 for the top end of the consumer market.
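For readers who want to check the arithmetic: these figures fall out of HBM’s 1,024-bit-wide interface per stack. A quick sketch (the function name is ours, not SK Hynix’s):

```python
def hbm_bandwidth_gbs(transfer_rate_gt, stacks, bus_bits=1024):
    """Peak bandwidth in GB/s: transfer rate (GT/s) x bus width (bits) / 8 bits-per-byte, per stack."""
    return transfer_rate_gt * bus_bits / 8 * stacks

# Fury X: four first-gen HBM stacks at 1GT/s
print(hbm_bandwidth_gbs(1.0, 4))  # 512.0
# HBM2: four stacks at 2.0GT/s doubles that
print(hbm_bandwidth_gbs(2.0, 4))  # 1024.0
```

The 2x gain comes entirely from the higher transfer rate; the bus width per stack is unchanged from first-generation HBM.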

In the long run, HBM2 has the potential to revolutionize the concept of integrated graphics, if the costs can be brought down enough to make the technology practical for AMD’s APUs. An integrated GPU wouldn’t need anything like the bandwidth HBM2 offers at the high end. A single stack of 8-Hi memory at 1GT/s would still deliver 128GB/s of memory bandwidth and 8GB of on-package RAM as a unified pool of CPU and GPU memory. A dual-channel DDR4-3200 memory interface, in contrast, tops out at 51.2GB/s.
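The same rate-times-width arithmetic explains the gap between the two configurations (a sketch, with our own helper name; DDR4 runs a 64-bit bus per channel):

```python
def bandwidth_gbs(transfer_rate_gt, bus_bits, channels=1):
    """Peak bandwidth in GB/s: rate (GT/s) x bus width (bits) x channels / 8 bits-per-byte."""
    return transfer_rate_gt * bus_bits / 8 * channels

# One HBM2 stack clocked down to 1GT/s, 1024-bit interface
print(bandwidth_gbs(1.0, 1024))  # 128.0
# Dual-channel DDR4-3200 (3.2GT/s, 64 bits per channel)
print(bandwidth_gbs(3.2, 64, channels=2))  # 51.2
```

Even a deliberately slow, single-stack HBM2 configuration gives an APU roughly 2.5x the bandwidth of the fastest mainstream DDR4 setup.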

There’s no guarantee AMD will go this route, and the first Zen APUs are almost certainly going to be based on DDR4. HBM2, if it appears on APUs, probably won’t happen until 2018. In the long run, however, HBM2 could give AMD the answer it badly needs to Intel’s Crystal Well on-package eDRAM.
