Rambus Dynamic Random Access Memory

RDRAM

1 Introduction

The speed & power of computer processors increase almost with each passing day. This trend is informally captured by Moore's Law, which observes that transistor counts — and, historically, processor speeds — double roughly every 18 – 24 months. Improvement in memory speeds, however, has been far less dramatic. This has led to what is known as the “Memory Gap”: the difference between processor speed & memory speed. Traditionally, this problem has been overcome by the use of caches.

DRAM is the preferred memory architecture for computers. Over the past several years, hardware manufacturers have explored alternative ways of closing the memory gap between processors & DRAM. These new methods have not changed the basic storage array that forms the core of a DRAM; the key changes are in the interfaces.

One such popular technology is RDRAM. An acronym for Rambus DRAM, named after the company that introduced the technology, RDRAM is both an architecture & a protocol. It provides high bandwidth at high data rates while using comparatively few pins.

This short paper gives a brief overview of RDRAM, its uses & limitations.

2 Technical Overview

Before delving into the technical features of RDRAM, it is necessary to introduce some basic features of SDRAM; this will help in appreciating the benefits of RDRAM.


The above figure shows the architecture of an SDRAM system. The DIMM modules are connected to the memory controller by a 64-bit data bus. Address & control buses are connected using a different topology from that of the data bus. Row address & column address (RAS & CAS) signals share a common bus, with the controller scheduling the sharing.

SDRAMs in current PCs operate at a maximum of 133 MHz. Address & data are transferred on only one edge of the clock.

A Memory Bank consists of 8 SDRAM chips, each contributing 8 bits of data. A DIMM can have 1 or 2 Memory Banks. When a memory location is addressed, all 8 chips are enabled to make up 64 bits of data. Thus, to upgrade memory, a minimum of 8 SDRAM chips is needed; we say the minimum upgrade granularity is 8 SDRAMs.
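The granularity arithmetic above can be sketched in a few lines of Python (the constants simply restate the figures from the text):

```python
# Why SDRAM's minimum upgrade granularity is 8 chips:
# the 64-bit data bus must be filled by 8-bit-wide devices.
BUS_WIDTH_BITS = 64    # width of the SDRAM data bus
CHIP_WIDTH_BITS = 8    # data bits contributed by each SDRAM chip

chips_per_bank = BUS_WIDTH_BITS // CHIP_WIDTH_BITS
print(chips_per_bank)  # -> 8: the minimum number of chips per upgrade
```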

RDRAM architecture introduces several new features. The memory controller has a high-speed interface called the Rambus ASIC Cell (RAC). The RDRAMs, too, have an interface of identical speed. A 16-bit data bus, known as the Rambus Channel, connects the two.

These features allow very fast data transfer. Further, address & data can move simultaneously, in parallel.

Unlike in SDRAM, where each Bank has its own set of address & control lines, the Rambus Channel routes through the RIMM modules. Thus, the number of wires is reduced; the wires have uniform impedance due to their identical lengths, & they are equally loaded.

This structure also results in unity granularity, since only one 16-bit RDRAM transfers data at any given time. Eight control lines, split into row & column control, perform the addressing.

Higher Effective Bandwidth

The introduction of high-speed interfaces allows up to 800 million samples/second across each wire. This enhances the bandwidth of the memory system.
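As a quick check, the peak channel bandwidth follows directly from these figures (a back-of-the-envelope sketch, not a datasheet calculation):

```python
# Peak Rambus Channel bandwidth from the per-wire sample rate quoted above.
SAMPLES_PER_SEC = 800e6   # 800 million transfers per second on each wire
DATA_WIRES = 16           # width of the 16-bit Rambus Channel

bytes_per_sec = SAMPLES_PER_SEC * DATA_WIRES / 8   # 8 bits per byte
print(bytes_per_sec / 1e9)   # -> 1.6 (GB/s peak bandwidth)
```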

Further improvements are achieved by the use of an efficient, packet based protocol. This is described below.


The above figure shows the timing diagram for SDRAM data transfer. Two points can be observed:

·  Row & column addressing each take up two clock cycles

·  The placement of data relative to read & write commands differs. Thus a ‘bubble’ delay is introduced whenever a read follows a write, or vice versa.

These factors reduce the potential bandwidth by increasing latency.
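The cost of these bubbles can be illustrated with a toy cycle-counting model (the burst & bubble lengths below are assumptions chosen for illustration, not SDRAM datasheet values):

```python
# Toy model: count bus cycles for a command stream, charging a "bubble"
# penalty every time the bus turns around between reads & writes.
BURST = 4    # data cycles per access (assumed)
BUBBLE = 2   # idle cycles on each read<->write turnaround (assumed)

def bus_cycles(commands):
    cycles, prev = 0, None
    for op in commands:
        if prev is not None and op != prev:
            cycles += BUBBLE     # bandwidth lost to the turnaround
        cycles += BURST
        prev = op
    return cycles

print(bus_cycles(["R", "R", "W", "R"]))  # -> 20: 16 data cycles + 4 bubble cycles
```

A stream of same-direction accesses (e.g. all reads) pays no bubble penalty at all, which is why mixed read/write traffic suffers most.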

A further source of delay is the Bank Conflict. See the diagram below.


When back-to-back read operations are performed on the same bank of memory, a one-cycle delay is introduced to ensure that the data is correctly interpreted. This increases latency & reduces bandwidth.

The protocols employed in RDRAM overcome these weaknesses. Refer to the diagram below.


As can be seen, the placement of data relative to row & column addresses is similar for both read & write operations. This results in zero loss of bandwidth when moving from a read operation to a write operation, & only a small loss of bandwidth (a bubble) when performing a read after a write.

Further, row & column addressing is done through separate lines. This enables pipelining of address information, making more efficient use of the control lines. This is not possible with SDRAM, which shares a common bus for RAS & CAS.

Bank conflicts can still occur in RDRAM. The method employed to mitigate them is the use of many more banks than in SDRAM, which reduces the probability of a bank conflict.
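Under a simple uniform-random access assumption, the benefit of extra banks is easy to quantify (the bank counts below are illustrative, not specific to any particular device):

```python
# Probability that two consecutive random accesses land in the same bank,
# causing a bank conflict. More banks -> lower conflict probability.
def conflict_probability(num_banks):
    return 1.0 / num_banks   # uniform random access assumption

print(conflict_probability(4))    # -> 0.25    (few banks, SDRAM-like)
print(conflict_probability(32))   # -> 0.03125 (many banks, RDRAM-like)
```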

3 Applications
                      PC100         DDR266 (PC2100)   DDR2           RDRAM
Potential Bandwidth   0.8 GB/s      2.133 GB/s        3.2 GB/s       1.6 GB/s
Interface Signals     64(72) data   64(72) data       64(72) data    16(18) data
                      168 pins      168 pins          184 pins       184 pins
Interface Frequency   100 MHz       133 MHz           200 MHz        400 MHz
Latency Range         30-90 ns      18.8-64 ns        17.5-42.6 ns   35-80 ns
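The per-pin advantage claimed for RDRAM can be read straight out of the table (data pins only; figures rounded):

```python
# Peak bandwidth per data pin, using the table's figures.
parts = {
    "PC100":  (0.8, 64),    # (peak GB/s, data pins)
    "DDR266": (2.133, 64),
    "RDRAM":  (1.6, 16),
}
for name, (gbps, pins) in parts.items():
    print(name, round(gbps / pins * 1000, 1), "MB/s per data pin")
# RDRAM delivers 100 MB/s per data pin, several times that of the 64-bit parts.
```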

Two of the most notable advantages of RDRAMs are higher per-pin bandwidth & higher per-device bandwidth. These features bring about numerous benefits.

The reduced number of pins results in simpler, narrower buses. Further, the protocols facilitate simpler interfacing. This makes the design of motherboards simpler; generally, the number of layers in a motherboard can be reduced by using RDRAM. The direct benefit is reduced cost.

The higher bandwidth per device makes RDRAMs attractive for applications that require high bandwidth but only moderate memory capacity. Video games have exactly such requirements, & the use of RDRAMs effectively addresses this need. If the same bandwidth were to be achieved through SDRAM, the result would be redundant memory capacity.
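The redundant-capacity point can be made concrete with a rough sketch. This assumes bandwidth is scaled by interleaving whole 8-chip SDRAM banks, & uses the table's figures:

```python
import math

# Devices needed to reach a 1.6 GB/s target: one RDRAM suffices, while
# PC100 SDRAM bandwidth grows only in whole 8-chip bank increments.
TARGET_GBPS = 1.6
SDRAM_BANK_GBPS = 0.8           # one 8-chip PC100 bank (from the table)

banks = math.ceil(TARGET_GBPS / SDRAM_BANK_GBPS)
sdram_chips = banks * 8         # minimum upgrade granularity is 8 chips
print(sdram_chips)              # -> 16 SDRAM chips, with 16 chips' capacity
print(1)                        # -> 1 RDRAM device for the same bandwidth
```

The application pays for sixteen chips' worth of capacity whether it needs the storage or not; RDRAM decouples the bandwidth requirement from the capacity requirement.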

The unity device upgrade granularity offers engineers the ability to balance performance requirements against system capacity and component count.

All of the above lead to more efficient use of components, which translates to lower system costs.

The latest PCs, workstations, servers, game consoles, HDTVs, set-top boxes, printers, business projectors, displays, network-attached storage servers, core & edge routers, switches, & many other products use RDRAM.

Matters of concern

Despite the benefits of RDRAM, pricing remains a matter of concern. RDRAMs are significantly more expensive than their SDRAM counterparts. Different schools of thought attribute this to several factors.

1.  As component usage becomes more efficient, fewer chips are sold. The lower volume prompts higher prices.

2.  The inclusion of faster interfaces increases the die size, hence the extra cost.

3.  Royalties charged by Rambus.

Although RDRAM has increased bandwidth, it does not show a great improvement in latency. This is because latency depends not only on the structure of the memory itself, but also on many other factors, such as loading.

A further concern is the issue of flight time. Because the Rambus Channel is routed through the RIMMs, the channel is longer for some RIMM slots than for others. The channel is therefore levelized so that all RDRAMs have the same access latency from the memory controller's point of view. This is done during initialisation by adding delay to the closer devices. This adds to the total latency.
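The levelizing step can be pictured with made-up propagation times (the slot names & delays below are purely illustrative):

```python
# Levelizing: pad each device's flight time up to the slowest one, so the
# controller sees a single, uniform access latency for every RDRAM.
flight_ns = {"RIMM0": 1.0, "RIMM1": 2.0, "RIMM2": 3.0}  # assumed values

worst = max(flight_ns.values())
added_delay = {slot: worst - t for slot, t in flight_ns.items()}
print(added_delay)   # the closest slot receives the largest added delay
```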

4 Conclusion

The use of RDRAM has not reached the levels expected by manufacturers, due to the higher device costs. Most devices do not require the high bandwidth available in RDRAM, which further brings down sales.

However, it has to be acknowledged that the technology includes some attractive features for future memory systems. As the memory gap widens further, the need for better memory systems will be felt, & bandwidth requirements are bound to increase.

Therefore, RDRAM may well occupy a prominent position among RAM systems in the near future.
