Because the DRAM architecture is pipelined, there is a long latency between when a data request is issued and when that request is executed.

Therefore, NI recommends designing with a “look ahead” approach: keep requesting the data you wish to read without waiting for the retrieve method's data valid strobe. Read and write requests are stored in a queue inside the memory item and then passed on to the memory, which has its own internal queue from which it executes commands in order. Although each individual request is still subject to latency and some non-determinism, the overall transfer rate is much higher because the DRAM can access several pieces of data back to back instead of handling each request in isolation.
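
The following C sketch is only an analogy for this timing behavior, not the LabVIEW memory item API: it models a memory with a hypothetical fixed request-to-data latency (LATENCY) and request queue depth (DEPTH), and compares how many cycles a blocking read loop needs against a look-ahead loop that keeps issuing requests before earlier data has come back.

    /*
     * Minimal sketch, not the LabVIEW FPGA memory item API: models a
     * pipelined memory with an assumed fixed request-to-data latency and
     * counts the cycles needed to read N_READS words two ways.
     * LATENCY, DEPTH, and N_READS are illustrative values, not DRAM specs.
     */
    #include <stdio.h>

    #define LATENCY  20   /* cycles from request to data valid (assumed) */
    #define DEPTH    32   /* request queue depth (assumed, deeper than LATENCY) */
    #define N_READS  64   /* number of words to read */

    /* Blocking pattern: request one word, stall until its data valid
     * strobe, then request the next. Every read pays the full latency. */
    static unsigned read_blocking(void)
    {
        unsigned cycle = 0;
        for (unsigned i = 0; i < N_READS; i++)
            cycle += 1 + LATENCY;              /* issue, then wait for data */
        return cycle;
    }

    /* Look-ahead pattern: keep issuing requests into the queue without
     * waiting for data. Each request's data becomes valid LATENCY cycles
     * after it was issued and is retrieved in order, so once the first
     * strobe arrives the remaining reads complete back to back. */
    static unsigned read_look_ahead(void)
    {
        unsigned issue_cycle[N_READS];         /* cycle each request was issued */
        unsigned issued = 0, retrieved = 0, cycle = 0;

        while (retrieved < N_READS) {
            cycle++;
            /* Issue another request whenever the queue has room. */
            if (issued < N_READS && issued - retrieved < DEPTH)
                issue_cycle[issued++] = cycle;
            /* Retrieve the oldest outstanding request once its data is valid. */
            if (retrieved < issued && cycle >= issue_cycle[retrieved] + LATENCY)
                retrieved++;
        }
        return cycle;
    }

    int main(void)
    {
        printf("blocking  : %u cycles for %d reads\n", read_blocking(), N_READS);
        printf("look ahead: %u cycles for %d reads\n", read_look_ahead(), N_READS);
        return 0;
    }

With these assumed numbers the blocking loop takes N_READS × (LATENCY + 1) cycles, while the look-ahead loop finishes in roughly N_READS + LATENCY cycles, which is the same kind of speedup the memory item's request queue makes possible on the FPGA.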