High-Speed Serial Memory Interfaces

Michael Miller — MoSys

Introduction

Serial communications have come a long way from their original implementations in Morse Code transmissions via telegraph, undersea cables, and microwave. New generations of engineers have recognized the efficacy of serial solutions and implemented them at increasingly lower system levels. This progression has manifested in backplane communications standards such as PCI Express, XAUI, Fibre Channel (for disk drives), Interlaken and more. Present-day FPGAs, ASICs and NPUs sport SerDes (Serializer/Deserializer) interfaces at 11+ Gbps. These implementations yield cost-effective, lower-pin-count designs.

Designers today face the roadblocks of parallel configurations in device-to-device communications. Namely, they're running out of I/O as networking memory device suppliers increase bandwidth by 2–4x to meet next-level networking equipment requirements. A new open standard protocol overcomes pin count, cost, efficiency, and energy consumption hurdles for the small packets typically found in 10G, 40G and 100G networking memory applications.

Why Device-to-Device = Serial

Figure 1

Picture Sources: http://www.theinquirer.net/inquirer/news/1037859/there-magic-intel-fb-dimm-buffer;
http://www.simmtester.com/page/news/showpubnews.asp?num=113;
http://www.icc-usa.com/Fully-Buffered-DIMM-0305.pdf

Above left in Figure 1, the board is laid out using parallel interconnects. On the right, by comparison, the same design was executed using serial interconnects. One immediately observes the reduced real estate required by the serial layout. The cause: serial communication increases bandwidth density, reducing the number of traces, since each serial lane requires only a matched differential pair.

Serial protocols like XAUI, InterLaken and PCIe include deskewing mechanisms to handle any mismatch between data lines.

SerDes electrical standards bodies support this development: the Optical Internetworking Forum (OIF) defines common electrical interfaces for 3, 6, 11 and 28 Gigabit onboard and backplane interconnects. Other relevant standards include IEEE 1, 10, 40 and 100 Gigabit Ethernet, Fibre Channel (for disk drives), SATA, and USB.

Serial vs Parallel – And the Winner Is…

Serial latency is measured from the first bit in to the first bit out. System designers can approximate serial latency by applying the following formula: Serial latency = # of clocks to deserialize + deskew + memory latency + # of clocks on the first bit out.
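The formula above can be sketched in code. This is a back-of-envelope model only; the cycle counts passed in below are hypothetical placeholders, not figures from the article.

```python
def serial_latency_ns(deserialize_clks, deskew_clks, memory_latency_clks,
                      first_bit_out_clks, clock_period_ns):
    """Approximate serial latency, first bit in to first bit out.

    Sums the clock counts named in the article's formula and scales
    by the clock period. All inputs are illustrative assumptions.
    """
    total_clks = (deserialize_clks + deskew_clks +
                  memory_latency_clks + first_bit_out_clks)
    return total_clks * clock_period_ns

# Hypothetical link with a 1 ns core clock:
latency = serial_latency_ns(8, 2, 15, 4, 1.0)  # 29 clocks -> 29 ns
```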

Comparing a 1 GHz DDR parallel bus to a 10 Gbps serial link moving 80 bits of information: the parallel bus uses 40 data lines and a 2:1 mux register, clocking half of the 80 bits on the rising edge and half on the falling edge of the I/O clock, completing the transfer in 1 ns. The serial link uses 8 differential pairs with serialization/deserialization registers operating at 10 Gbps and completes the same task in 1 ns as well. The serial data rate thus effectively breaks even with the parallel bus; the only real difference is the deserialization time. In short, a serial bus running at 10 Gbps runs as fast as a parallel bus.
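The break-even arithmetic from this comparison can be checked directly. A minimal sketch, using only the figures quoted above (40 DDR lines at 1 GHz vs. 8 serial pairs at 10 Gbps):

```python
def parallel_transfer_ns(bits, data_lines, ddr_clock_ghz):
    """Transfer time on a DDR parallel bus.

    DDR moves 2 bits per line per clock (rising + falling edge),
    so throughput is lines * 2 * clock, in bits per ns.
    """
    bits_per_ns = data_lines * 2 * ddr_clock_ghz
    return bits / bits_per_ns

def serial_transfer_ns(bits, lane_pairs, lane_rate_gbps):
    """Transfer time on a multi-lane serial link (1 Gbps = 1 bit/ns)."""
    bits_per_ns = lane_pairs * lane_rate_gbps
    return bits / bits_per_ns

# 80 bits over a 40-line, 1 GHz DDR bus vs. 8 pairs at 10 Gbps:
parallel = parallel_transfer_ns(80, 40, 1.0)  # 1.0 ns
serial = serial_transfer_ns(80, 8, 10.0)      # 1.0 ns
```

Both paths move 80 bits per nanosecond, which is the break-even point the article describes; deserialization latency is the remaining difference.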

In some applications, latency is not as important. For example, networking equipment transmits packets through a line card and across the backplane; in this case, high throughput is more valuable than overall latency. On the other hand, when the system must repeatedly access a memory block to process a packet, the latency of the memory is multiplied several fold. That latency determines how many packets must be stored and processed in parallel, increasing the complexity of the storage architecture. In other cases, some tasks MUST be completed before the next packet in the same flow arrives, and the latency of the memory and the interface again come into play. So latency can in fact be important.

New Serial Protocol Applied: 72b Transmission

As data rates increase and electrical signals shrink, the probability of an incorrectly sampled data bit increases, regardless of whether the interface is serial or parallel. At one GHz and beyond for telecommunications interfaces, even parallel interfaces need to consider some sort of error checking and handling solution. Because of their higher data rates and longer reach, serial interfaces have always required a higher-level protocol, so serial designs have always included mechanisms to ensure data integrity. For example, today's assumed bit error rate is 10^-15. In PC board applications, SerDes devices may achieve a much better rate (10^-18), but it's hard to measure.

Many serial protocols are available to designers today, including PCIe, SRIO (Serial Rapid IO), Interlaken Look-aside, and SFI-S. These protocols reliably transport variable-length packets of information between multiple endpoints across congested switches with priority control. Point-to-point protocols like Interlaken handle multiple flows with variable-length packets and per-flow rate control, which requires multiple fields. Each of these fields adds to the minimum packet overhead, which lowers efficiency when transmitting small packets. When transporting 100-byte messages, the protocol overhead accounts for only 10% of the entire message, so the transmission efficiency is 90%.

For the 72-bit data word transmissions needed in networking memory, the protocol overhead dominates and transmission efficiency falls to approximately 37%. For a 72-bit transmission, the protocol header totals 64 bits for SRIO and 96 to 128 bits for PCIe, plus a 16-bit CRC in each case. Total overhead thus reaches 80 bits for SRIO and up to 144 bits for PCIe.
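These overhead figures translate directly into efficiency numbers. A rough sketch, treating efficiency simply as payload over payload-plus-overhead (a simplification; the article's ~37% figure presumably reflects additional per-protocol framing details not itemized here):

```python
def efficiency(payload_bits, overhead_bits):
    """Fraction of transmitted bits that carry payload."""
    return payload_bits / (payload_bits + overhead_bits)

# 100-byte message with ~10 bytes of protocol overhead:
large = efficiency(100 * 8, 10 * 8)   # ~0.91, the ~90% cited above

# 72-bit word with the overhead totals quoted above:
srio = efficiency(72, 80)     # ~0.47
pcie = efficiency(72, 144)    # ~0.33
```

The small-packet results bracket the roughly 37% efficiency the article quotes, while the large-packet case stays near 90%, which is the crux of the small-payload problem.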

The question for designers: How to reliably transport data without the overhead, since these protocols address issues not relevant in point-to-point control? When transmitting a large packet of information, such as 64 bytes or more, a few bytes of overhead don’t matter. However, when transmitting a small packet of information using the same protocol, the number of bytes devoted to protocols does make a significant difference.

Figure 2

Figure 2 above demonstrates that if the transmission can eliminate most of the protocol overhead, the result is high efficiency. For Packet Data Units (PDUs) between four and eight bytes, such as single data words transmitted between an SoC device and memory, the transmission is very inefficient.

Figure 3

The GigaChip™ Interface (developed by MoSys®, Inc.) for transmitting data in the range of four to eight bytes exploits one key fact: because bit errors occur infrequently (10^-15), the transmission packets can eliminate most of the protocol overhead. As shown in Figure 3 above, this yields approximately 90% efficiency to transmit 72 bits of data plus 8 bits for CRC and Ack. With the CRC, the probability of an undetected bit error drops below 10^-23.
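Both numbers in this paragraph can be sanity-checked. The efficiency follows from the frame layout; for the error bound, a simplified model (an assumption, not MoSys's published analysis) notes that a CRC catches all single-bit errors, so an undetected error needs at least two bit flips in the same 80-bit frame:

```python
from math import comb

# Per-frame efficiency: 72 data bits + 8 bits of CRC/Ack framing.
eff = 72 / (72 + 8)                    # 0.9, i.e. the 90% cited

# Simplified undetected-error estimate: probability of two or more
# independent bit flips (rate 1e-15 each) landing in one 80-bit frame.
ber = 1e-15
p_two_flips = comb(80, 2) * ber**2     # ~3.2e-27
```

Even this crude two-flip bound sits well below the 10^-23 figure quoted above, so the claim is at least order-of-magnitude plausible.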

When the protocol becomes independent of the number of lanes or connections, serial designs become scalable. If designers select protocols carefully, and make them independent of the physical number of lanes, the industry can implement solutions in 1, 2, 4, 8, 16 lanes or more. The designer can scale the number of lanes, the data rate on each lane, or both. Scalability matters because bandwidth has its own cost in terms of board materials, packaging materials, board real estate, and so on.

Using high overhead protocols for 4-8 Byte PDUs results in inefficiency, which means that the design sacrifices bandwidth. Eliminating the protocol overhead of each transmission solves the inefficiency problem and results in very high-speed transmission of raw data. For example, at a data rate per channel of 16 Gbps using 16 channels, a serial connection will transmit 3.2 Giga operations per second (Gops) and 230.4 Gbps.
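The throughput example above follows from three multiplications. A short sketch reproducing it (the 90% efficiency and 72-bit word size come from the GigaChip discussion earlier; the function name is illustrative):

```python
def link_throughput(lanes, rate_gbps, efficiency=0.9, word_bits=72):
    """Effective throughput and word operations/sec for a serial link.

    raw = lanes * per-lane rate; effective = raw * protocol efficiency;
    Gops = effective bits/sec divided by bits per data word.
    """
    raw_gbps = lanes * rate_gbps
    effective_gbps = raw_gbps * efficiency
    gops = effective_gbps / word_bits
    return effective_gbps, gops

# 16 channels at 16 Gbps each:
gbps, gops = link_throughput(16, 16.0)   # ~230.4 Gbps, ~3.2 Gops
```

The 230.4 Gbps and 3.2 Gops figures in the text fall out directly: 16 × 16 = 256 Gbps raw, times 0.9 efficiency, divided by 72-bit words.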

Signal Integrity

Nothing comes for free. As signals enter the GHz range, designers must be aware of the signal integrity aspects of high-speed channels, whether the bus is parallel or serial. The trade-off: parallel buses run slower but have many more channels to model and control, while serial runs at a proportionally higher rate over proportionally fewer channels. Because of the push to move packets across backplanes on serial channels, tools have been developed to solve this problem. On the board, the channels are much shorter and less complicated, making the task of designing a high-speed chip-to-chip link much more manageable.

Conclusion

In its simplest statement, the design choice between serial and parallel used to boil down to number of interconnects vs. speed. Because parallel requires a high number of interconnects, it increases device real estate, always a downside. Given the continued progression in transmission solutions, parallel buses are reaching the end of their natural life: to double bandwidth with a parallel bus, the device must double its pins, and packaging trends make that no longer practical.

In response to the demise of parallel interconnects, MoSys has developed and introduced the new open standard GigaChip interface protocol. As a result, small device-to-device transmissions achieve 90% efficiency.


MoSys is a registered trademark of MoSys, Inc. The MoSys logo and GigaChip are trademarks of MoSys, Inc.

Copyright © 2010 MoSys, Inc. All rights reserved.

Resources

MoSys, Inc. is a fabless semiconductor company enabling leading equipment manufacturers in the networking and communications systems markets to address the continual increase in Internet users, data and services. The company's solutions deliver data path connectivity, speed and intelligence while eliminating data access bottlenecks on line cards and systems scaling from 100G to multi-terabits per second.


About the Author

Michael Miller

Michael Miller, VP of Technology Innovation and System Applications for MoSys, Inc., brings more than 30 years of experience in technology innovation, system architecture, software and complex logic to MoSys. Prior to joining MoSys, he was Chief Technical Officer of Integrated Device Technology, Inc., where he also held engineering management positions in software, applications and product definition over more than 20 years. Mr. Miller holds a Bachelor of Science degree in computer science from California Polytechnic State University, San Luis Obispo, and has been awarded 25 patents to date.