The ubiquitous DDR memory interface is often taken for granted, with many designers content with an implementation they consider "good enough". When consideration is given to the DDR implementation, it is usually in cases of extreme performance or power requirements, where the interface to memory is a make-or-break decision. In modern systems, however, the SoC's memory subsystem is central to overall performance, power, and cost, and accepting "good enough" from either internal or third-party DDR implementations carries significant penalties.
In this article we examine optimization of the interface to external DDR memory and how memory bandwidth optimization affects power and cost in SoC and system design.
What is bandwidth optimization for DDR?
Before reviewing the benefits of optimization, let's consider what DDR interface bandwidth optimization is. Simply stated, it is any technique that takes advantage of knowledge of the memory device implementation to improve our interaction with it.
In a system this usually involves software optimizations that reduce external memory accesses by employing on-chip registers and buffers. This is a very effective technique, but it is outside the scope of SoC designers, who have little control over system software.
At the SoC level we still have significant opportunities for optimization by exploiting knowledge of the memory devices attached to the system - supported addressing modes, latency, power modes, etc. - and combining this knowledge with an understanding of pending memory transactions. Using this knowledge we can influence the order of pending transactions and modify memory transactions (split or join) to make optimal use of the memory device.
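To make the reordering idea concrete, here is a minimal sketch of open-row-aware scheduling, one of the techniques described above. The request tuples and the `reorder` helper are hypothetical illustrations, not Cadence's implementation; a real controller also honors timing constraints, read/write turnaround, and QoS. The core idea is simply to serve requests that hit a bank's already-open row before requests that force a precharge/activate:

```python
# Hypothetical request format: (bank, row, col).
# open_rows maps bank -> the row currently open in that bank.

def reorder(requests, open_rows):
    """Serve row hits before row misses to avoid unnecessary
    precharge/activate cycles (a sketch, not a full scheduler)."""
    hits = [r for r in requests if open_rows.get(r[0]) == r[1]]
    misses = [r for r in requests if open_rows.get(r[0]) != r[1]]
    ordered = []
    for bank, row, col in hits + misses:
        open_rows[bank] = row  # a miss activates the new row; a hit is a no-op
        ordered.append((bank, row, col))
    return ordered

# Two accesses to the open row 5 of bank 0 are grouped ahead of the
# access to row 9, saving one row change:
schedule = reorder([(0, 5, 1), (0, 9, 2), (0, 5, 3)], {0: 5})
```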
In the Cadence Design IP for DDR we refer to this collection of techniques as Placement Queue, and with the recent LPDDR3 announcement Placement Queue was updated to version 2.2. Placement Queue 2.2 benefits not only Cadence Design IP for LPDDR3, but all Cadence Design IP memory technologies. While the exact improvements of Placement Queue 2.2 are traffic dependent, Table 1 provides examples of specific features which, when implemented together, can significantly improve bandwidth to the external DRAM device.
Table 1: Placement Queue Features
Bandwidth optimization and effect on power
Reducing memory interface power is typically addressed by moving to a low-power standard such as LPDDR3 or DDR3L. While these low-power interfaces do provide significant advantages, there are other opportunities to reduce memory power regardless of the standard being used.
For example, the speed grade of external memory and its active duty cycle - that is, the ratio of time spent in active versus low-power/standby mode - are key to the power consumption of the memory subsystem. If we can improve either of these we can see significant power savings.
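The duty-cycle effect above can be quantified with a simple weighted-average power model. The function and the milliwatt figures below are hypothetical illustrations (actual values depend on the device and datasheet IDD specs), but they show why shrinking the active fraction pays off so strongly:

```python
def avg_power(p_active_mw, p_standby_mw, duty_cycle):
    """Average memory power as a weighted mix of active and
    standby power, where duty_cycle is the active fraction."""
    return duty_cycle * p_active_mw + (1 - duty_cycle) * p_standby_mw

# Illustrative numbers: 300 mW active, 20 mW in standby.
# Cutting the active duty cycle from 60% to 40% drops average
# power from ~188 mW to ~132 mW, roughly a 30% saving.
before = avg_power(300, 20, 0.6)
after = avg_power(300, 20, 0.4)
```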
The techniques employed by Placement Queue automatically reorder transactions to the external memory devices to reduce cycles wasted on unnecessary row and column changes, and to take advantage of the inherent latency of the DRAM devices. This provides an overall bandwidth improvement, which can then be traded off against power. Moving to a lower-speed interface while maintaining the original system throughput can save significant system-level power without degrading performance. In systems that are bandwidth-to-memory limited, we can also reduce the overall operating frequency of the device for further power savings.
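The bandwidth-for-frequency trade can be sketched with back-of-the-envelope arithmetic. The function and the numbers below are assumptions for illustration, not measured data: take a bus width, a delivered-bandwidth target, and a controller efficiency (the fraction of peak bandwidth actually achieved), and solve for the lowest data rate that still meets the target:

```python
def min_frequency(required_bw_gbps, bus_bytes, efficiency):
    """Lowest data rate (in MT/s) that still meets a delivered-bandwidth
    target, for a given bus width (bytes) and controller efficiency
    (fraction of peak bandwidth actually achieved)."""
    return required_bw_gbps * 1e9 / (bus_bytes * efficiency) / 1e6

# Illustrative numbers: a 32-bit (4-byte) bus delivering 6.4 GB/s.
# Raising controller efficiency from 70% to 85% lets the interface
# run about 18% slower at the same delivered bandwidth.
baseline = min_frequency(6.4, 4, 0.70)   # ~2286 MT/s
optimized = min_frequency(6.4, 4, 0.85)  # ~1882 MT/s
```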
In addition to bandwidth optimization, Placement Queue manages memory transactions to take advantage of the many low-power modes available in the memory device. This ensures designs can fully leverage the benefit of moving to newer memory classes.
Reducing system cost through DDR optimization
Bandwidth optimization impacts system cost in two ways, both linked to the previous discussion around power.
Moving to lower-speed-grade memories directly reduces system cost. As a rough rule of thumb, each DRAM speed grade dropped saves about $0.15 per device, so for a system using eight or more devices this can represent a significant bill of materials (BOM) savings with no impact on overall system performance or functionality, especially in high-volume products. In most cases even larger savings come from the decreased power requirements of the SoC due to the improvements in the memory subsystem: lower SoC power consumption leads to smaller, cheaper batteries and less expensive SoC and system packaging. A small upfront investment in an advanced DDR controller pays off many times over in overall product savings.
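The BOM arithmetic above is easy to work through. The helper below is a hypothetical illustration using the ~$0.15 per device per speed grade figure quoted in the text:

```python
def bom_savings(devices, grades_dropped, units, per_grade_usd=0.15):
    """Rough DRAM BOM saving: devices per system x speed grades
    dropped x ~$0.15 per device per grade x production volume."""
    return devices * grades_dropped * per_grade_usd * units

# Eight DRAM devices, one speed grade lower, one million units:
# 8 x 1 x $0.15 x 1,000,000 = $1.2M saved on DRAM alone.
savings = bom_savings(8, 1, 1_000_000)
```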
Real world results
While results vary greatly by application, when working with customers to optimize the DDR interface we commonly see material improvements that return the upfront investment many times over.
Cadence Design IP for DDR DRAM offers a high degree of configurability, which enables the DDR controller to fit the exact needs of the SoC being developed. For example, while working with a customer building a multimedia SoC, we were able to lower the speed grade needed to perform HDTV decoding, while improving the overall system experience when compared to their existing DDR solution. This improved the competitiveness of their product and enabled their customers to benefit from the reduced system BOM.
Counter-intuitive as it may seem, the high-bandwidth memories currently under development make optimization more important than ever: grouping memory accesses into high-performance "bursts" increases the time the memories can spend in low-power modes, minimizing leakage.
The pace of change in the memory industry shows no signs of slowing, and it is no longer practical for a design group to stay ahead of all developments. Fully taking advantage of each new generation of memory requires insight into the design of the memory devices and tight partnerships with the memory vendors. Design teams require highly configurable IP and memory optimization technologies to enable compelling and competitive products while freeing internal resources to work on core differentiation.
Neil Hand is a Group Director of Marketing in Cadence's SoC Realization group focusing on the challenges of SoC Realization, including design IP, design services, and chip planning and management. Mr. Hand has over 20 years of experience in systems and SoC design, customer support, and marketing.