Going With the (Data) Flow

Publish Date: Feb 03, 2015

Table of Contents

  1. Natural Data Dependency and Artificial Data Dependency
  2. Parallelism in LabVIEW
  3. Overuse of Flat Sequence Structures
  4. Making Things Worse With Stacked Sequence Structures
  5. The Better Way: State Machines

Engineering and scientific applications are primarily concerned with turning real-world signals into meaningful information for the purposes of measurement and control. As a result, data from hardware drives the behavior of these systems—making a language built around the data itself a natural expression of how these systems should behave.

Graphical data flow is the primary way to describe the behavior of an NI LabVIEW system. As implied by the name, these graphical diagrams literally depict the flow of information between functions, which in LabVIEW are called VIs. A VI executes when it receives all required inputs and afterward produces output data that is then passed to the next node in the dataflow path. The movement of data through the nodes determines the execution order of the VIs and functions on the block diagram.

Visual Basic, ANSI C++, Java, and many other traditional programming languages follow a control flow model of program execution. In control flow, the sequential order of program elements, not the data itself, determines the execution order of a program.

In LabVIEW, the flow of data rather than the sequential order of commands determines the execution order of block diagram elements. Consequently, LabVIEW developers can create block diagrams that have simultaneous operations. For example, two For Loops can run simultaneously and display the results on the front panel, as shown in the following block diagram.

Figure 1. Dataflow execution allows parallel operations in LabVIEW.

Creating multithreaded applications using control flow languages is also possible, but the one-dimensional nature of the code makes it difficult to understand and debug operations that are happening in two dimensions. Graphical code allows you to lay out parallel operations in parallel, and dataflow semantics help protect against hard-to-find bugs caused by race conditions.

1. Natural Data Dependency and Artificial Data Dependency

The control flow model of execution is instruction driven. Dataflow execution is data driven, or data dependent. A node that receives data from another node always executes after the other node completes execution. This is called natural data dependency.

Block diagram nodes not connected by wires can execute in any order. You can use flow-through parameters such as reference numbers or error clusters to control execution order when natural data dependency does not exist.
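As a rough text-based analog of the flow-through pattern described above, the hypothetical Python sketch below threads an "error" value through each step the way LabVIEW wires an error cluster through VIs; the function names and the error convention (`None` means "no error") are illustrative, not part of any LabVIEW API.

```python
# Hypothetical analog of LabVIEW's flow-through error wires: each step
# takes the previous step's error value as an input, so the call chain
# itself fixes execution order, even when a step ignores that input.

log = []

def configure(error):
    if error:
        return error              # propagate the upstream error, do no work
    log.append("configure")
    return None                   # None plays the role of an empty error cluster

def acquire(error):
    if error:
        return error
    log.append("acquire")
    return None

def close(error):
    log.append("close")           # cleanup runs either way
    return error                  # pass any earlier error through unchanged

# Threading the "error wire" through every call serializes the three steps:
err = close(acquire(configure(None)))
```

Because each function consumes the previous function's output, no step can start before its predecessor finishes, which is exactly the dependency that an error wire creates on a block diagram.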

When flow-through parameters are not available, you can use a Sequence structure to control execution order—like a VI, the contents of a Sequence structure do not execute until data has arrived at all of its input tunnels. This can create an artificial data dependency in which the receiving node does not actually use the data received. Instead, the receiving node uses the arrival of data to trigger its execution.

Figure 2. Artificial data dependency was created by using the Flat Sequence structure and error wires for benchmarking code.

For more information on dataflow programming, access the self-paced online training (ni.com/self-paced-training) for LabVIEW Core 1 on Dataflow. Self-paced online training is free with every LabVIEW purchase or for users currently on the Standard Service Program (ni.com/ssp).


2. Parallelism in LabVIEW

With LabVIEW, you can easily create multitasking and multithreaded systems using data flow.

Multitasking refers to an operating system's ability to switch quickly between tasks, giving the appearance that the tasks execute simultaneously. Older operating systems dedicated a single task to an entire application, such as Microsoft Excel or LabVIEW; each application ran for a small time slice before yielding to the next. Under cooperative multitasking, the operating system relies on running applications to yield control of the processor at regular intervals. More modern operating systems use preemptive multitasking, in which the operating system can take control of the processor at any instant, regardless of the state of the currently running application. On a single processor, only one application thread runs at any given moment, but thread swaps happen so quickly that applications appear to run simultaneously. Preemptive multitasking provides better responsiveness and higher data throughput, but it also means a time-critical application can be interrupted at any point. Where timing is critical, you can move from a general-purpose operating system to a deterministic real-time operating system, which gives you ultimate control of task scheduling and ensures that your application is not interrupted.

Multithreading extends the idea of multitasking into applications, so that specific operations within a single application can be subdivided into individual threads, each of which can theoretically run in parallel. Then, the operating system can divide processing time not only among different applications, but also among each thread within an application. For example, in a LabVIEW multithreaded program, the application might be divided into three threads—a user interface thread, a data acquisition thread, and an instrument control thread—each of which can operate independently. Thus, multithreaded applications can have multiple tasks running in parallel along with other applications.
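The three-thread split described above can be sketched in ordinary Python, as a hedged illustration only: the thread names, the queue-based handoff, and the simulated work are hypothetical stand-ins, not LabVIEW's actual thread model.

```python
import queue
import threading
import time

# Illustrative three-thread application: a data acquisition thread, a
# user interface thread, and an instrument control thread, each running
# independently while sharing data through a thread-safe queue.

samples = queue.Queue()
ui_log = []
instrument_log = []

def data_acquisition():
    for i in range(3):
        samples.put(i * 10)           # pretend each value came from hardware
        time.sleep(0.01)

def user_interface():
    for _ in range(3):
        ui_log.append(samples.get())  # "display" each sample as it arrives

def instrument_control():
    instrument_log.append("instrument configured")

threads = [threading.Thread(target=t)
           for t in (data_acquisition, user_interface, instrument_control)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The operating system is free to interleave these threads however it likes; the queue is what keeps the producer and consumer coordinated, much as a wire or queue reference would on a block diagram.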

Multicore programming targets computers with two or more processors (or a single processor with multiple cores that share a cache), each of which can simultaneously run a separate thread. A multithreaded application can therefore have separate threads executing concurrently on multiple processors. In the LabVIEW multithreaded example, the data acquisition thread could run on one processor while the user interface thread runs on another. The extra cores allow multithreaded applications to potentially run faster.

One of the major benefits of using LabVIEW is that it automatically multithreads an application into various tasks, which the operating system then load balances across the available cores. As more cores are added to your computing platform, LabVIEW applications can often run faster without any extra programming. By contrast, enabling multithreading in lower-level languages such as ANSI C requires significantly more development time.

Figure 3. The measurement or control application is divided into tasks, which can be automatically balanced across CPUs.


When using real-time operating systems, you can even assign core affinity (the processor on which a given portion of code runs) by using the Timed Loop or Timed Sequence structure. Both structures have an input terminal for specifying the processor.

For more information on timing in LabVIEW, access the self-paced online training (ni.com/self-paced-training) for LabVIEW Core 1 on Timing Functions.



3. Overuse of Flat Sequence Structures

You can use Flat Sequence structures to control the execution order of a block diagram. Flat Sequence structures have frames that operate in numerical order, much like a film reel that flows from frame to frame. At the beginning of each frame's execution, all the inputs wired to the frame are passed to items within the frame. At the end of each frame's execution, outputs can be passed to the next frame or outside the structure. However, you cannot stop execution partway through the sequence, nor can you revisit specific frames during execution.

Developers familiar with sequential programming often reach for the Flat Sequence structure to explicitly control execution when they are just learning LabVIEW. In LabVIEW, however, forcing execution order can reduce performance: tasks that could operate in parallel are forced to operate serially when the Flat Sequence structure is overused. Used in moderation, it is a useful tool. The Flat Sequence structure is also added unnecessarily to code that data flow already executes serially; this does not necessarily limit performance, but the extra structure is not needed and risks reducing readability.
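The cost of forcing serial execution can be illustrated with a small timing sketch in Python. The two tasks here are simulated with sleeps (standing in for waits on I/O or hardware) and the numbers are illustrative, but the shape of the result mirrors the LabVIEW case: independent tasks finish in roughly half the time when nothing forces them into sequence.

```python
import threading
import time

def task():
    time.sleep(0.2)  # stand-in for an independent, waiting-bound task

# Forced serial, like placing each task in its own sequence frame:
start = time.perf_counter()
task()
task()
serial_time = time.perf_counter() - start

# Left independent, the two tasks are free to overlap:
start = time.perf_counter()
workers = [threading.Thread(target=task) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
parallel_time = time.perf_counter() - start
```

On a typical run, `serial_time` is about twice `parallel_time`, which is the throughput that an overused sequence structure gives away.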

Figure 4. Flat Sequence structure is inappropriately used because data flow already has this code executing serially.


Most experienced LabVIEW developers find that they do not need to use the Flat Sequence structure once they have a firm understanding of data flow.

For more information on the Flat Sequence structure and its use, access the self-paced online training (ni.com/self-paced-training) for LabVIEW Core 1 on Using Sequential Programming.



4. Making Things Worse With Stacked Sequence Structures

Stacked Sequence structures operate identically to Flat Sequence structures in that there are frames that execute in a predefined order. The main difference is that the Stacked Sequence structure shows only a single frame at a time.

Showing one frame at a time might be a benefit if you need to save space on your block diagram, but it decreases readability. A Case structure looks similar, yet it executes exactly one case each time it is called, selected by its input terminal; a reader of a Stacked Sequence structure must instead scroll through every frame to understand the overall functionality.

Another difference between Stacked Sequence structures and Flat Sequence structures is that passing information between frames of a Stacked Sequence structure requires sequence locals. Sequence locals further reduce readability because each local appears on only one side of every frame. Most LabVIEW code reads left to right, so when an input enters a frame from the right side (a position that may have made sense for the previous frame's output), the code becomes harder to follow.

Figure 5. Sequence locals pass references and error clusters between frames.

The main downside of Stacked Sequence structures is the same as that of Flat Sequence structures: they are inflexible. You cannot revisit previous frames during execution, and execution can stop only after every frame completes. A better architecture provides that flexibility, which is why most developers evolve from Flat or Stacked Sequence structures to state machines.

For more information on stacked structures like Case structures, access the self-paced online training (ni.com/self-paced-training) for LabVIEW Core 1 on Case Structures.



5. The Better Way: State Machines

A state machine is a mathematical model of computation that describes execution as a set of states. Transitions between states are driven by conditions that the architect defines. Most commonly, the design can be drawn as a flowchart: based on a series of inputs, execution moves through the flowchart, passing from state to state.

Figure 6. This is an example flowchart for a furnace.

This is a flexible approach because you can revisit states many times throughout the execution, and execution can cease during any state. You can find state machine architectures in most programming languages. LabVIEW state machines consist of a While Loop to continue the execution, a Case structure whose cases define the different states, an enumerated type to name the states, and a shift register that carries the next state from one loop iteration to the next.
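A rough Python analog of that pattern is sketched below: the `while` loop plays the While Loop, the dictionary dispatch plays the Case structure, the `Enum` stands in for the enumerated constant, and the `state` variable carried across iterations mirrors the shift register. The furnace-style states and the temperature readings are illustrative only.

```python
from enum import Enum

class State(Enum):
    IDLE = "idle"
    HEATING = "heating"
    DONE = "done"

def idle(temp):
    # states can be revisited any number of times during execution
    return State.HEATING if temp < 70 else State.DONE

def heating(temp):
    return State.IDLE            # reassess the temperature after heating

handlers = {State.IDLE: idle, State.HEATING: heating}

readings = iter([60, 65, 72])    # pretend temperature sensor readings
state = State.IDLE               # initial "shift register" value
history = []
temp = next(readings)
while state is not State.DONE:   # execution can end from any state
    history.append(state)
    next_state = handlers[state](temp)
    if state is State.HEATING:
        temp = next(readings)    # a heating step yields a new reading
    state = next_state
```

Note how IDLE is entered three times and HEATING twice before the machine stops; a sequence structure could express neither the revisiting nor the early, condition-driven exit.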

Figure 7. This is a simple state machine template from LabVIEW.

The state machine is preferred over Flat or Stacked Sequence structures because you can easily add states as functionality or system requirements change, while retaining the ability to switch between states and to stop execution from any state.

Process testing is a common application of state machines. Each segment of the process is represented by a state. Depending on the result of each state’s test, a different state may be called. This can happen continually, performing in-depth analysis of the process being tested.

For more information on state machines and their uses, access the self-paced online training (ni.com/self-paced-training) for LabVIEW Core 1 on Understanding State Programming and LabVIEW Core 2 on Simple State Machine.

LabVIEW was built to make engineers and scientists more successful at tackling the world’s tough challenges. The benefit of having a large programming community of engineers and scientists is that they like to share their knowledge with others. If you have your own LabVIEW rookie mistake that you would like to share, add your voice by visiting bit.ly/lvrookiemistakes.

This article is part of a series on the Top 5 LabVIEW Rookie Mistakes. Subscribe to NI News to keep up with the series.

