1. The Traditional Process of Deploying .m Files to Embedded Hardware
Traditional design approaches often require you to abandon not only the prototyping platform when you reach the final deployment phase but also your developed code, which adds time and cost to the development cycle. Developers have used low-level programming languages, such as ANSI C and C++, for years, and these languages share a largely common syntax familiar to programmers. However, the lack of “built-in” signal processing libraries that are portable from platform to platform, or hardware target to hardware target, means these algorithms must either be integrated from a third party or developed from scratch, which is both error-prone and time-consuming. In addition, the underlying textual syntax of the language may not intuitively map well to the numeric algorithms being expressed. Consider the example in Figure 1.
[A,B,C,D] = cheby1(n,R,Wp,'ftype','s')
Figure 1. The graphical interface of LabVIEW is often more intuitive than the
underlying textual syntax of low-level languages.
In contrast to low-level languages, math software packages, such as Digiteo Scilab, The MathWorks, Inc. MATLAB® software, Maplesoft Maple, and Wolfram Research Mathematica, employ high-level abstract languages intended for mathematical exploration. However, deploying those algorithms can prove difficult because there is no direct translation from the implementation language to hardware. This translation is often a multistep process (see Figure 2) that involves not only the burden of intermediate languages and additional tools but also an additional process to verify that the translated code is equivalent to the initial implementation.
Figure 2. The process for deploying an .m file developed in a traditional mathematics
tool to a multicore real-time hardware target can involve several complex steps.
Because these tools employ highly abstract languages, they lack some key characteristics necessary for hardware deployment. Consider .m file scripts used by MATLAB, Scilab, and others. The .m file language is loosely typed, meaning the data type of a variable can change at run time without explicit casting. Although this can be valuable in a desktop environment where memory is abundant, dynamically changing a variable’s data type during an operation introduces jitter, which could violate the application’s timing constraints in a real-time scenario. The lack of explicit resource management functions and timing constructs further complicates the deployment of the .m file language to embedded hardware.
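As a minimal .m sketch of that loose typing (the variable name is hypothetical), the type of a variable can change from one statement to the next without any explicit cast:

```matlab
x = 7;           % x is a scalar double
x = 'seven';     % x is now a character array
x = [1 2; 3 4];  % x is now a 2-by-2 matrix
% On a desktop this flexibility is convenient; on a real-time target,
% each retype can force a fresh memory allocation and introduce jitter.
```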
In addition to the cost of developing in a new environment, disjointed toolchains pose other challenges. Different or incompatible math libraries require integrating new libraries or developing algorithms from scratch, which can be time-consuming and error-prone.
2. MathScript Is the Easiest Way to Deploy Your Custom .m Files to Embedded Hardware
Although the MathScript RT Module generally uses the same .m file syntax as other .m environments, MathScript implements the language according to the rules of a true programming language. The 2009 release of the LabVIEW MathScript RT Module culminated a three-year redesign of the MathScript Engine. MathScript was reengineered to optimize code for deterministic execution on embedded hardware.
The MathScript RT Module enforces strict typing on the .m file language within the LabVIEW graphical environment so that data types propagate efficiently through the underlying code. Unlike other .m file environments, MathScript assigns each piece of data an explicit type rather than treating everything as a generic numeric matrix. This ensures that LabVIEW can efficiently compile the text-based MathScript code and optimizes the .m file language for real-time OSs. Propagating data types through the code minimizes the number of times the compiled code must touch memory, which is a primary source of jitter in embedded applications.
The biggest difference between MathScript and other .m file environments is that MathScript code is compiled. Compilers offer considerable advantages for programming, and the G compiler in particular provides significant ones.
The LabVIEW MathScript compiler, which lives “under the hood” of the MathScript Node, compiles the .m file code into graphical code at edit time, identifying semantic and syntax errors in the .m file code and its underlying function calls. With the MathScript compiler, you can interact with the text-based code but still pass G code to the LabVIEW compiler, so the generated code benefits from the optimizations that the G compiler provides. A primary benefit of the LabVIEW compiler is the ability to express parallelism naturally: LabVIEW programmers do not need “special” or “artificial” markup in their code to force parallelism on the compiler, as is needed in text-based languages. In addition to this intuitive mapping of the language, the LabVIEW compiler provides several optimizations. Figure 3 displays two examples of these optimizations.
Figure 3. Loop fusion and constant folding are two examples of the
optimizations that the LabVIEW compiler delivers.
Loop fusion eliminates unnecessary indexing operations while constant folding eliminates unnecessary code execution that always produces the same result. It is important to understand that the compiler does not change the code on your diagram, just the compiled representation of that code.
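Although Figure 3 shows these optimizations on the G diagram, the patterns they target can be sketched in .m-style code (the variables here are hypothetical) as follows:

```matlab
% Constant folding: this expression always produces the same result,
% so the compiler can evaluate it once at compile time.
k = 2*pi/24;            % folded into a single constant

% Loop fusion: two loops over the same range...
for i = 1:48
    tmp(i) = k*i;       % first pass builds an intermediate array
end
for i = 1:48
    c(i) = sin(tmp(i)); % second pass indexes it again
end

% ...can be fused into one loop, eliminating the intermediate indexing:
% for i = 1:48
%     c(i) = sin(k*i);
% end
```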
The changes made to the MathScript compiler in the LabVIEW 2009 MathScript RT Module improve the performance of the generated code and give the LabVIEW compiler the ability to further optimize that code.
3. How to Deploy Your Code to Embedded Hardware
As depicted in Figure 4, deploying an .m file to embedded hardware with the MathScript RT Module is as simple as dragging and dropping.
Figure 4. You can deploy an .m file to embedded hardware in LabVIEW.
You can add your .m files inline with LabVIEW graphical code using the MathScript Node.
As with any LabVIEW VI, the LabVIEW project provides the interface for deploying your code to hardware. Simply save your VI and then drag it under the real-time target in your LabVIEW Project Explorer.
4. How to Validate Your .m File Code for Deterministic Execution
This section provides step-by-step instructions for validating your custom .m files for determinism. Consider the example in Figure 5:
Figure 5. This is an example application to test for jitter.
The example in Figure 5 is a simple script, but it is a good demonstration of coding practices that maximize efficiency and minimize jitter. The application takes an input amplitude and writes a sine wave, scaled by that amplitude, into the variable c.
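The script in Figure 5 is not reproduced here, but based on the problem line examined later in this section and the trace discussion, it is approximately the following sketch (the loop bound of 48, one full period of the sine wave, is an assumption):

```matlab
b = amplitude;               % input amplitude
for i = 1:48                 % loop bound assumed
    c(i) = b*sin(i*pi/24);   % write the scaled sine wave into c
end
```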
To test this code on a real-time target, you need to add some benchmarking VIs to the block diagram. This example uses the graphical VIs that ship with the NI Real-Time Execution Trace Toolkit, a LabVIEW add-on designed to interactively analyze and benchmark thread and VI execution, sending trace sessions back to the host computer.
Figure 6. You can add VIs from the Real-Time Execution Trace Toolkit to your application
to test your code for determinism and jitter.
The VIs highlighted in the Context Help are “user events” that are flagged in the trace. You can place these user events throughout your code to narrow the trace down to specific functions.
In this case, the user events 1 and 2 are used to signify the start and end of the MathScript code.
Running this code results in a trace that looks like Figure 7:
Figure 7. This trace results from running the code in Figure 6.
Each green flag in the trace marks a place in the code where the OS is “waiting on memory.” Requesting memory is the most common source of jitter in real-time applications, so the goal is to minimize the number of these waits in the application.
To easily identify the caliper user events 1 and 2, you can configure the color of those flags.
Figure 8. You can customize the color of user flags in the Real-Time Execution Trace Toolkit.
Now that the caliper flags are configured, you can better benchmark the MathScript code itself.
Just like the graphical VI that sets a user flag in the trace, MathScript has a function call that signifies user events.
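Per the caption of Figure 9, that function is rtloguserevent; a minimal call (the argument form shown here is an assumption) flags a numbered user event at that point in the .m code:

```matlab
rtloguserevent(1)   % log user event 1 in the execution trace
```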
You can place these function calls throughout the .m file. Because the .m file in the example is rather short, you can place them between each function call.
Figure 9. Use the rtloguserevent function call within the
MathScript Node to benchmark your .m file code.
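For the example script, the instrumentation might look like the following sketch; the event numbers 12 and 13 match the caliper flags discussed with Figure 10, but their exact placement is an assumption:

```matlab
b = amplitude;
for i = 1:48
    rtloguserevent(12)          % start caliper for the assignment line
    c(i) = b*sin(i*pi/24);
    rtloguserevent(13)          % end caliper
end
```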
Running this code on the embedded target profiles each specific line of the .m file.
Figure 10. This Real-Time Execution Trace results from running the code in Figure 9.
The trace is zoomed in on a specific set of flags to better pinpoint the source of the jitter.
There is a repeated pattern of memory waits that occur between user flags 12 and 13.
The line of the .m file that this corresponds to is the following:
c(i) = b*sin(i*pi/24);
Examining this line of code reveals a problem that arises when executing the text-based code on a real-time OS but is not necessarily a problem on a desktop OS.
The left side of the equal sign, c(i), grows the variable c by one element with each iteration of the For Loop. This resizing causes the compiled code to request memory from the operating system, which introduces jitter into the application. Requesting memory is the primary cause of the “waiting on memory” delay previously discussed.
An easy way to remedy this is to allocate all of the needed memory outside of the For Loop.
Adding the following line before the For Loop requests all of the memory needed to house the entire variable c:
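A typical preallocation uses zeros; the exact line is not reproduced here, and the array length of 48 below matches the loop bound assumed earlier:

```matlab
c = zeros(1, 48);   % one-time allocation of the full array (length assumed)
```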
Note that this still requests memory, but as a single one-time call rather than resizing the array on each iteration of the loop.
Doing so results in the application in Figure 11.
Figure 11. Instantiating variables to their maximum size minimizes the amount of calls to memory.
Running this application results in the trace in Figure 12.
Figure 12. This Real-Time Execution Trace results from running the code in Figure 11.
This trace reveals only a few scattered green flags, which is exactly the result the benchmarking effort was aiming for.
The MathScript RT Module ships with a set of guidelines designed to help you develop .m files for execution on real-time OSs. These guidelines are located in the LabVIEW Help; Figure 13 shows where to find them.
Figure 13. The LabVIEW MathScript RT Module is shipped with a set of guidelines designed to help you develop .m files for real-time applications.
Exploring this help document reveals the specific step that recommends allocating arrays outside of loops.
Figure 14. This specific section of the MathScript RT guidelines outlines the creation of arrays outside of a loop.
At the conclusion of the benchmarking efforts, you are left with the final application shown in Figure 15.
Figure 15. This is the final application after benchmarking efforts.
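In .m terms, the final script is approximately the original loop preceded by the preallocation (the loop bound and array length of 48 are assumed, as before):

```matlab
b = amplitude;
c = zeros(1, 48);            % allocate once, outside the loop
for i = 1:48
    c(i) = b*sin(i*pi/24);   % the assignment no longer resizes c
end
```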
Developing code for real-time applications places the responsibility on you to benchmark your code and identify its sources of jitter. Often, simple changes in coding style eliminate most of those sources and leave you with a streamlined application that runs deterministically on embedded hardware. The LabVIEW MathScript RT Module provides hooks into the Real-Time Execution Trace Toolkit so that you can easily profile your .m files and identify sources of jitter.
MATLAB® is a registered trademark of The MathWorks, Inc.