Semiconductor IP News and Trends Blog
Why Is the Evolving HBM3 Such a Revolutionary Technology and How Can You Be Ready for It?
Since 2013, we have seen JEDEC release HBM specifications and, in the very same month, companies announcing HBM products, just like magic. How can these companies have silicon-proven products ready alongside newly announced official specifications? What is the secret process that lets them be consistently first to market with these new technologies? And more importantly, what can YOU do to be such an early adopter and be first with the next-generation HBM3 specification, which is still under development at JEDEC?
To answer these questions, let’s go back in time and look more closely at the facts.
It all started with the release of HBM (High Bandwidth Memory) technology in October 2013 by JEDEC. HBM achieved higher bandwidth while using less power in a substantially smaller form factor than DDR4 or GDDR5. This high-performance RAM interface for 3D-stacked SDRAM allowed stacking of up to eight DRAM dies, making it a three-dimensional integrated circuit.
Then in January 2016, JEDEC released the second-generation HBM2 specification, which allowed up to eight dies per stack and doubled pin transfer rates to 2 GT/s. Retaining the 1024‑bit‑wide access, HBM2 was able to reach 256 GB/s of memory bandwidth per package. Within the same month as the HBM2 specification release, Samsung announced early mass production of HBM2, at up to 8 GB per stack. A few months later, SK Hynix also announced the availability of 4 GB stacks.
In late 2018, JEDEC announced an update to the HBM2 specification, providing for increased bandwidth and capacities. Up to 307 GB/s per stack (2.5 Tbit/s effective data rate) was now supported in the official specification, commonly known as HBM2E. Additionally, the update added support for 12‑Hi stacks (12 dies), making capacities of up to 24 GB per stack possible. On March 20, 2019, Samsung announced its Flashbolt HBM2E, featuring eight dies per stack and a transfer rate of 3.2 GT/s, providing a total of 16 GB and 410 GB/s per stack.
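The per-stack bandwidth figures above follow directly from the interface width and the pin transfer rate. A minimal sketch checking the arithmetic (the figures are the ones quoted for the generations above; the function name is ours, not from any specification):

```python
def hbm_bandwidth_gbps(bus_width_bits: int, transfer_rate_gtps: float) -> float:
    """Peak bandwidth in GB/s: interface width (bits) x pin rate (GT/s) / 8 bits per byte."""
    return bus_width_bits * transfer_rate_gtps / 8

# HBM2: 1024-bit interface at 2 GT/s -> 256 GB/s per package
print(hbm_bandwidth_gbps(1024, 2.0))   # 256.0

# Samsung Flashbolt HBM2E: 1024-bit interface at 3.2 GT/s -> ~410 GB/s per stack
print(hbm_bandwidth_gbps(1024, 3.2))   # 409.6
```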
HBM technology isn’t stopping there, and the JEDEC HBM Task Group is already working at full speed on the next HBM generation to take this technology to the next level. HBM3 will double the density of the individual memory dies from 8 Gb to 16 Gb (~2 GB) and will allow more than eight dies to be stacked together in a single chip, making graphics cards with up to 64 GB of memory possible. And by using a 512‑bit bus at higher clock rates, the new standard can achieve the same high bandwidth at much lower cost by not requiring a silicon interposer at all.
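The capacity numbers can be sanity-checked the same way: a 16 Gb die is 2 GB, so stacking dies and combining stacks quickly reaches the quoted totals. A rough sketch, where the four-stack card configuration is an illustrative assumption rather than anything from the draft specification:

```python
def stack_capacity_gb(die_density_gbit: int, dies_per_stack: int) -> float:
    """Stack capacity in GB: die density (Gbit) x dies per stack / 8 bits per byte."""
    return die_density_gbit * dies_per_stack / 8

# 12-Hi stack of 16 Gb dies -> 24 GB per stack
print(stack_capacity_gb(16, 12))      # 24.0

# Hypothetical card with 4 stacks of eight 16 Gb dies -> 64 GB total
print(4 * stack_capacity_gb(16, 8))   # 64.0
```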
Even though the HBM3 specification is still under development, it is very clear that semiconductor companies don’t plan to wait for the official specification to be released before working on their designs. Time to market is key in this rapidly evolving market, and to ensure that this new technology stays compliant with each new ratification, Cadence is already developing a new-generation HBM VIP that is continuously enhanced alongside the evolving HBM3 specification, allowing design engineers to verify their evolving designs.
Yes, this is the secret ingredient that allows semiconductor companies to be first to market with new specification releases. While these companies closely follow the specification’s evolution and work progressively on their designs, Cadence provides a VIP that verifies specification compliance with both protocol and timing checkers, together with a coverage model, to ensure that no silicon escape will happen.
With the availability of the Cadence Verification IP for HBM3, early adopters and JEDEC members can start working with the provisional specification immediately, ensuring compliance with the standard and achieving the fastest path to IP and SoC verification closure.