CompactRIO DRAM Interface
- Updated 2025-12-16
Several CompactRIO targets contain onboard DRAM that is directly accessible from the LabVIEW FPGA VI. LabVIEW exposes this DRAM as an FPGA Memory Item.
To determine the available amount of onboard DRAM, refer to the Specifications document of the CompactRIO target.
Using DRAM Effectively with CompactRIO
The following design considerations can affect the throughput and storage capacity that you can achieve with the dynamic random access memory (DRAM) interface of the FPGA on CompactRIO devices:
- Access size and frequency
- Request pipelining
- Sequential access
Access Size and Frequency
The access size is the exact number of bits that are written or read in a single memory access. You can configure memory to use a variety of data types; to achieve the best performance and make full use of each access, use a data type whose width matches the access size of the memory controller.
The following table shows the specifications for CompactRIO targets that support FPGA-accessible DRAM, including the optimum access size.
| CompactRIO Target | Number of DRAM Banks | Size per Bank | Maximum Theoretical Bandwidth per Bank | Access Size |
|---|---|---|---|---|
| cRIO-9034 | 1 | 128 MB | 1.6 GB/s | 128 bits |
| cRIO-9039 | 1 | 128 MB | 1.6 GB/s | 128 bits |
Data Recommendations
If you use a data type that is smaller than the access size, the remaining bits contain unknown, invalid values, but they are still written and consume both space and bandwidth. For example, if the access size is 128 bits and you choose a 32-bit data type when configuring the DRAM, the remaining 96 bits hold unknown, invalid data. The following figure shows an optimized memory element and a memory element in which the data type is smaller than the access size.
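Because LabVIEW FPGA code is graphical, the packing arithmetic can only be sketched here in textual form. The following Python sketch (illustrative only, not LabVIEW code) shows how four 32-bit samples can fill one 128-bit access so that no bits are wasted; the constants come from the table above.

```python
# Illustrative sketch: packing four 32-bit samples into one 128-bit word
# so that every bit of a DRAM access carries valid data.

ACCESS_SIZE_BITS = 128   # DRAM access size from the table above
SAMPLE_BITS = 32         # width of one sample

SAMPLES_PER_ACCESS = ACCESS_SIZE_BITS // SAMPLE_BITS  # 4 samples per access

def pack_samples(samples):
    """Pack SAMPLES_PER_ACCESS 32-bit values into one 128-bit integer."""
    assert len(samples) == SAMPLES_PER_ACCESS
    word = 0
    for i, s in enumerate(samples):
        word |= (s & 0xFFFF_FFFF) << (i * SAMPLE_BITS)
    return word

def unpack_samples(word):
    """Split a 128-bit word back into four 32-bit values."""
    return [(word >> (i * SAMPLE_BITS)) & 0xFFFF_FFFF
            for i in range(SAMPLES_PER_ACCESS)]

packed = pack_samples([1, 2, 3, 4])
assert unpack_samples(packed) == [1, 2, 3, 4]
```

Writing one packed 128-bit word per access stores four samples in the space and bandwidth that an unpacked 32-bit configuration would spend on one.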
Push data into the memory item interface from within the DRAM clock domain, which runs at 100 MHz. To create this clock, right-click the FPGA target in a LabVIEW project, select New»FPGA Base Clock, and choose the DRAM Clock resource. Bandwidth is maximized when data is pushed into the memory item interface on every cycle of this clock.
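The maximum theoretical bandwidth in the table follows directly from the access size and the DRAM clock rate. A quick arithmetic check:

```python
# One full-width access per DRAM clock cycle gives the table's bandwidth figure.
access_size_bits = 128
clock_hz = 100_000_000                      # 100 MHz DRAM clock domain

bytes_per_access = access_size_bits // 8    # 16 bytes per access
bandwidth = bytes_per_access * clock_hz     # bytes per second

print(bandwidth / 1e9, "GB/s")  # 1.6 GB/s, matching the table
```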
Request Pipelining
The DRAM architecture is highly pipelined, resulting in relatively long latency between data requests and the execution of those requests. NI recommends that you prerequest samples, which helps maintain high throughput.
To prerequest samples, request the samples you want to read without waiting for the data valid strobe of the retrieve method. Even though each individual request is still subject to latency and some non-determinism, you now get much higher transfer rates because DRAM can access several pieces of data sequentially instead of treating each request separately.
Sequential Access
DRAM is optimized for high storage density and high bandwidth. DRAM accesses data sequentially and in large blocks. For example, reading the data at address 0x1 immediately after the data at address 0x0 is efficient, much like a processor reading a large block of memory into cache even when the program requests only a single byte.
To maximize performance, avoid switching between reading and writing, accessing non-contiguous addresses, or writing to memory in decrementing-address order. The most efficient access strategy is to perform only one type of access, either reading or writing, on a large number of sequential addresses. Although this is optimal, it is not practical for most applications. A more practical approach is to maximize the amount of sequential data being accessed and minimize changes in access mode.
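One reason sequential access wins is that DRAM serves accesses from an open row and pays a penalty each time a different row must be activated. The sketch below (illustrative only; the row size is an assumed value, and real controllers are more complex) counts row activations for a sequential address stream versus a scattered one:

```python
# Illustrative model: sequential addresses reuse the open DRAM row,
# while scattered addresses force frequent row activations.
import random

ROW_WORDS = 512  # assumed number of words per DRAM row (illustrative)

def row_activations(addresses):
    """Count row-buffer misses (row activations) for an access stream."""
    open_row = None
    activations = 0
    for addr in addresses:
        row = addr // ROW_WORDS
        if row != open_row:
            activations += 1
            open_row = row
    return activations

random.seed(0)  # reproducible scattered stream
N = 4096
sequential = list(range(N))
scattered = random.sample(range(N * 16), N)

print(row_activations(sequential))  # 8 activations (N / ROW_WORDS)
print(row_activations(scattered))   # far more; most accesses open a new row
```

In this model the sequential stream activates each row exactly once, while the scattered stream opens a new row for nearly every access, which is the behavior the recommendation above is trying to avoid.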