This topic contains guidelines for minimizing RT target CPU usage. By targeting a CPU usage well below 100%, you can minimize jitter and ensure that the tasks in your application do not need to compete for CPU time.

Refer to the following topics for general LabVIEW performance tips:

  • VI Execution Speed
  • Memory Management for Large Data Sets

Run Loops Only as Fast as Necessary

Although it can be tempting to try to run each loop in your application as fast as possible, this practice can lead to undesired timing behavior, including increased jitter and even system deadlocks. For example, running a user interface data publishing loop faster than the human operator can process and respond to the data needlessly taxes the CPU of the RT target. In most cases, a rate of 2 Hz to 15 Hz is adequate for a loop that publishes user interface data over the network.
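
LabVIEW block diagrams are graphical, so the following Python sketch is only a rough text-language analogue of a paced publishing loop; the 5 Hz rate, helper names, and data values are hypothetical. The point it illustrates is that waiting out the remainder of each period yields the CPU to other tasks instead of publishing as fast as the processor allows.

    import time

    PUBLISH_PERIOD_S = 0.2  # 5 Hz, within the 2 Hz to 15 Hz range suggested above

    def read_latest_values():
        # Hypothetical stand-in for reading the most recent process data.
        return {"temperature_c": 25.0, "pressure_kpa": 101.3}

    def publish_to_ui(values):
        # Hypothetical stand-in for publishing data to the user interface.
        pass

    while True:
        publish_to_ui(read_latest_values())
        # Sleeping for the rest of the period frees the CPU for other loops.
        time.sleep(PUBLISH_PERIOD_S)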

Tip Create a time budget to help determine an appropriate rate for each loop in your application.
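
As a worked illustration of such a budget, the following Python sketch totals hypothetical per-iteration execution times against each loop's period; the loop names, rates, and timings are invented for the example and would in practice come from benchmarking on the RT target.

    # Hypothetical time budget: loop period and estimated worst-case
    # execution time per iteration, both in microseconds.
    budget = {
        "control loop (1 kHz)":      {"period_us": 1_000,   "exec_us": 400},
        "data logging loop (10 Hz)": {"period_us": 100_000, "exec_us": 15_000},
        "UI publishing loop (5 Hz)": {"period_us": 200_000, "exec_us": 8_000},
    }

    total_share = 0.0
    for name, entry in budget.items():
        share = entry["exec_us"] / entry["period_us"]  # fraction of one CPU
        total_share += share
        print(f"{name}: {share:.0%} of CPU")

    print(f"Estimated total: {total_share:.0%}")  # aim for well below 100%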

Avoid Excessive Parallelism

The graphical dataflow paradigm of the LabVIEW block diagram makes it easy to parallelize the execution of code in your VIs, which can increase performance on multi-core systems. However, greater parallelism also requires LabVIEW to create and manage more threads, and the overhead of these additional threads can itself degrade performance. In general, parallelism improves performance only when both of the following conditions are true:

  1. The RT target includes multiple CPU cores.
  2. The total computation time required to execute the code serially exceeds the time required to execute the longest parallel branch plus the time required for thread management and switching overhead.

To determine whether a VI can benefit from parallelism, you might need to benchmark both the serial form and the parallel form of the VI. Refer to Optimizing RT Applications for Multiple-CPU Systems for information about using parallelism on multi-core RT targets.
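
Once you have benchmark numbers, condition 2 reduces to simple arithmetic. The following Python sketch shows the comparison with hypothetical timings; the figures are not taken from any real benchmark.

    # Hypothetical timings in microseconds, e.g. from benchmarking the VI.
    serial_time_us = 900       # total time to execute the code serially
    longest_branch_us = 520    # time of the longest parallel branch
    thread_overhead_us = 60    # thread management and switching overhead

    if serial_time_us > longest_branch_us + thread_overhead_us:
        print("Parallelism is likely to pay off on a multi-core target.")
    else:
        print("Keep the serial form; the threading overhead outweighs the gain.")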

Understand the Performance Benchmarks for Network-Published Shared Variables

CPU utilization for network-published shared variables scales approximately linearly with both the data transfer frequency and the number of variables in an application. Refer to the performance benchmarks, available at ni.com, for information about the CPU performance of network-published shared variables. Use network-published shared variables primarily for low-frequency, latest-value data transfer. If you need to send a continuous stream of data from one computing device to another, use the Network Streams functions. In general, you can use a large number of network-published shared variables without over-utilizing CPU resources if you use a low data transfer frequency. However, if you need to optimize CPU utilization by reducing the number of network-published shared variables, consider packing the individual data items into an array or cluster and transferring them using a single network-published shared variable.
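
Shared variables are configured and written in LabVIEW rather than in text code, so the following Python sketch only illustrates the packing idea; the channel values and the write_shared_variable helper are hypothetical stand-ins for a single Shared Variable node write.

    import struct

    # Hypothetical readings that would otherwise be written to eight
    # separate network-published shared variables.
    channel_values = [3.1, 2.7, 0.4, 1.9, 5.5, 4.2, 0.0, 7.8]

    def write_shared_variable(name, payload):
        # Hypothetical stand-in for one network-published shared variable write.
        pass

    # Packing the readings into one flat array of doubles means a single
    # shared-variable update per cycle instead of eight.
    packed = struct.pack(f"{len(channel_values)}d", *channel_values)
    write_shared_variable("PackedChannels", packed)

    # The subscriber unpacks the array and indexes out the individual values.
    unpacked = struct.unpack(f"{len(channel_values)}d", packed)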

Offload Tasks When Possible

To minimize CPU usage on the RT target, consider offloading tasks to a desktop PC or, if one is available, to an FPGA target. For example, consider hosting network-published shared variables on the host PC rather than on the RT target itself.

Note If you need to access a single RT shared variable from multiple PCs, host the shared variable on the RT target.

Use the following guidelines to determine the most appropriate device for performing specific types of tasks:

Task                                                                  Appropriate Devices
Data acquisition                                                      RT or FPGA
Control loop                                                          RT or FPGA
Data analysis for logging or monitoring purposes (offline analysis)   Desktop PC
Data logging                                                          RT or Desktop PC
Hosting network-published shared variables                            Desktop PC or RT

Note You can use the Real-Time Trace Viewer to determine which VIs and threads in your application use the most CPU time.