
Alleviating Additional Latency

Last Modified: September 9, 2019

Network streams utilize network bandwidth as efficiently as possible while still maintaining reasonable latency.

For example, when the write node writes data to the stream, the stream may hold onto the data for some time before transmitting it across the network. The stream holds the data so it can bundle multiple successive writes together and send them across the network as a single large TCP packet rather than as a series of smaller packets.
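
Network streams are built graphically on the diagram, so there is no text-based API to show here. As a loose analogy only, the following Python sketch illustrates the same buffering idea with a plain buffered socket writer standing in for the stream: several small writes accumulate in the buffer and leave as one larger transmission when the buffer is flushed. The socket pair and buffer size are illustrative choices, not part of network streams.

    import socket

    # A buffered writer standing in for the stream's internal buffer.
    sender, receiver = socket.socketpair()
    writer = sender.makefile("wb", buffering=4096)

    for i in range(10):
        writer.write(b"cmd%d;" % i)   # each small write only lands in the buffer; nothing is sent yet

    writer.flush()                    # the ten small writes leave as one larger transmission
    print(receiver.recv(4096))        # typically arrives together: b'cmd0;cmd1;...cmd9;'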

While this helps to minimize the amount of bandwidth wasted due to TCP packet overhead, it also increases the default latency for low-throughput data streams. If the stream is being used to communicate commands between the two applications, this additional latency may be undesirable. Follow the steps below to alleviate this additional latency.

  1. Add the Flush Stream node to the diagram immediately after the write node and wire the two nodes together.
  2. Right-click the timeout in ms input on the Flush Stream node to create a constant.
  3. Set the timeout in ms constant to 0 so the flush does not wait for the data to reach the reader. (A conceptual sketch of this write-then-flush pattern follows these steps.)
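
Because these nodes are wired graphically, the following Python sketch is only a rough analogy of the pattern the steps describe: a buffered socket writer stands in for the writer endpoint, and flushing immediately after each write plays the role of the Flush Stream node with its timeout in ms input set to 0. The socket objects and the send_command helper are illustrative, not the network streams API.

    import socket

    sender, receiver = socket.socketpair()
    writer = sender.makefile("wb", buffering=4096)

    def send_command(cmd: bytes) -> None:
        writer.write(cmd)   # analogous to the write node placing data in the stream
        writer.flush()      # analogous to Flush Stream with timeout in ms = 0: push the
                            # data out now and return without waiting for the reader
                            # endpoint to receive or read it

    send_command(b"start;")
    print(receiver.recv(64))   # the command is available to the reader right away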

The Flush Stream node forces the writer endpoint to send all buffered data to the reader endpoint immediately, without waiting for the reader endpoint to receive or read the data. This technique provides the lowest possible latency for sending data without blocking execution of the writer application. Because it also uses more memory, use the element allocation mode of the Create Network Stream Writer Endpoint and Create Network Stream Reader Endpoint nodes to allocate initial buffer memory based on the data type input.

Note

Additional memory is allocated dynamically at run time if an element requires more than the pre-allocated amount.
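
As a rough, non-LabVIEW illustration of the element allocation idea described above, the sketch below reserves a buffer up front from an element count and the size implied by the data type, so storing each element is only a copy into memory that already exists. The element count, double-precision element type, and put helper are illustrative assumptions, not part of the network streams API.

    import struct

    NUM_ELEMENTS = 4096                              # illustrative element count
    ELEMENT_FMT = "<d"                               # illustrative data type: one double per element
    element_size = struct.calcsize(ELEMENT_FMT)      # size implied by the data type

    buffer = bytearray(NUM_ELEMENTS * element_size)  # initial buffer memory reserved up front

    def put(index: int, value: float) -> None:
        # Storing an element is a copy into preallocated memory; extra memory
        # would only need to be allocated dynamically if an element required
        # more room than this nominal size, as the note above describes.
        offset = index * element_size
        buffer[offset:offset + element_size] = struct.pack(ELEMENT_FMT, value)

    put(0, 42.0)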

