Debugging Multicore Applications with the Real-Time Execution Trace Toolkit

Publish Date: Feb 24, 2014

Overview

This document is part of the Multicore Programming Fundamentals Whitepaper Series.

Debugging is widely viewed as one of the most time-consuming phases of software development. This white paper discusses how to debug multicore applications.

Table of Contents

  1. Introduction
  2. Processor Affinity
  3. Shared Resources
  4. Why use the Real-Time Execution Trace Toolkit?
  5. Use of other debugging tools
  6. More Resources on Multicore Programming

1. Introduction

Multithreading has been natively supported in NI LabVIEW since the mid-1990s. With the introduction of multicore CPUs, developers can take full advantage of this technology with the help of LabVIEW. Parallel programming for multicore CPUs poses new challenges, such as synchronizing concurrent access to shared memory from multiple threads and managing processor affinity. LabVIEW handles most multithreading tasks automatically while giving users the flexibility to assign threads to the CPUs (or cores of the same CPU) of their choice.

If you are developing a real-time application, the best way to monitor detailed CPU usage and other events is to capture execution traces from real-time targets. With the help of the Real-Time Execution Trace Toolkit, you can view and analyze the execution traces of real-time tasks, including virtual instruments (VIs) and operating system (OS) threads. The Real-Time Execution Trace Toolkit is an add-on for LabVIEW Real-Time 7.1.1 and later and LabWindows/CVI Real-Time 8.5 and later. It consists of two parts: the Instrumentation VIs and the Trace Viewing Utility. You add the Instrumentation VIs around the code whose execution you want to trace, and you use the Trace Viewing Utility to view the captured execution trace.

Access to low-level execution information is critical for optimizing and debugging real-time applications because it lets you easily identify sources of jitter such as processor affinity issues, memory allocations, priority inversions, or race conditions.

Use of shared resources by multiple threads of different priority levels can lead to unexpected application behavior. Table 1 lists some shared resources and the potential problems they can cause.

 

Shared Resources:

  • LabVIEW Memory Manager
  • Non-reentrant shared subVIs
  • Global variables
  • File system

Potential Problems:

  • Priority inversion
  • Unbounded priority inversion
  • Ruined determinism

Table 1. Shared resources and related potential problems.
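Of the resources in Table 1, a global variable is the easiest to reproduce outside LabVIEW. The C sketch below (a hypothetical example, not NI code) shows the standard fix, serializing access with a lock; removing the lock lets the two loops race and lose increments, while keeping it introduces blocking, which is precisely what adds jitter when one of the threads is time-critical.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* A global shared by two threads -- one of the shared resources in Table 1. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* remove this pair to see lost updates */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run two incrementing threads to completion and return the final count. */
long run_counters(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;   /* 200000 only because the accesses are serialized */
}
```

In LabVIEW the analogous choices are functional global variables or single-element queues instead of raw globals.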

The next two sections discuss how you can assign and trace processor affinity and how you can debug potential problems with shared resources.


2. Processor Affinity

The following two screenshots show execution traces of a program that was run on an embedded multicore real-time target. Parallelism was implemented in NI LabVIEW with two Timed Loop (TL) structures, each assigned to a different CPU. All code within a given TL executes on the same CPU, and processor affinity is preserved until the program completes. Figures 1 and 2 clearly show the CPU affinity of each thread: threads associated with the given CPU are highlighted while the rest are grayed out. Also note the parallel execution of threads running on separate CPUs.

Figure 1. This execution trace shows threads associated with CPU 0. The rest of the threads are grayed out.

Figure 2. This execution trace shows threads associated with CPU 1.

See also:
LabVIEW and Hyperthreading,
Multitasking in LabVIEW


3. Shared Resources

Using shared resources by a time-critical thread (a thread that needs to execute within a deterministic amount of time) can introduce extra jitter to your real-time application. One such shared resource is the LabVIEW Memory Manager. It is responsible for dynamically allocating memory.

When a normal-priority program holds the Memory Manager, all other threads, including the time-critical thread, must wait for the shared resource to become available. This blocking of a high-priority thread by a lower-priority one is known as priority inversion, and it inevitably introduces jitter into the time-critical thread. To bound the inversion, the thread scheduler temporarily boosts the normal-priority code to time-critical priority so that it finishes sooner and releases the Memory Manager; this boosting is known as priority inheritance. To avoid such situations, National Instruments advises avoiding shared resources altogether. In the case of the Memory Manager, one solution is to preallocate memory for arrays.
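The boosting described above is the priority-inheritance protocol that POSIX exposes directly for mutexes. The sketch below shows how it is requested; this is a generic POSIX illustration of the mechanism, not how LabVIEW Real-Time is implemented internally.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <pthread.h>

/* Initialize a lock with the priority-inheritance protocol: while a
   high-priority thread waits, the lower-priority holder is temporarily
   boosted so it releases the lock sooner. Returns 0 on success. */
int init_pi_mutex(pthread_mutex_t *m) {
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc == 0)
        /* Holder inherits the priority of the highest-priority waiter. */
        rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```

Priority inheritance bounds the duration of an inversion but does not eliminate the jitter; that is why the recommendation remains to avoid sharing the resource in the first place.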

Priority inversion can sometimes be aggravated by mixing two priority scheduling schemes, VI-level priorities and Timed Loop priorities. For example, in Figure 3 a time-critical subVI (red icon) and a Timed Loop compete for the same memory resources. The race for shared resources between the time-critical subVI and the Timed Loop can lead to priority inversion and the resulting priority inheritance.

Figure 3. Example of using two different priority assignment schemes.

Figure 4 is the trace of another program in which a normal-priority subVI and a time-critical subVI share a common resource, the Memory Manager. The green flags show dynamic memory allocations, and the orange flag shows priority inheritance: execution of the time-critical thread was interrupted so that the normal-priority subVI could be boosted to time-critical priority and release the shared resource sooner. This ultimately affects the determinism of the time-critical subVI.

Figure 4. The green flags show dynamic memory allocation (accessing the Memory Manager). The orange flag shows priority inheritance.
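The remedy mentioned earlier, preallocating arrays, can be sketched in C (a hypothetical example, not NI code): move the allocation out of the loop so the time-critical path never touches the allocator and the trace shows no allocation flags inside the loop.

```c
#include <assert.h>
#include <string.h>

#define N 1024
#define ITERATIONS 100

/* Deterministic variant of a processing loop: the buffer exists before
   the loop starts, so no iteration ever calls the memory manager. */
int process(void) {
    static double buf[N];   /* preallocated outside the time-critical path */
    int result = 0;
    for (int it = 0; it < ITERATIONS; it++) {
        /* double *tmp = malloc(N * sizeof *tmp);
           ^ allocating here instead would contend for the allocator on
             every iteration and ruin determinism */
        memset(buf, 0, sizeof buf);
        buf[0] = it;        /* stand-in for real signal processing */
        result = (int)buf[0];
    }
    return result;          /* last iteration index: ITERATIONS - 1 */
}
```

The LabVIEW equivalent is to initialize arrays to their full size before the Timed Loop and operate on them in place, as described in the preallocation link below.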

See also:
Preallocating Arrays for Deterministic Loops
Avoiding Shared Resources and Priority Inversions for Deterministic Applications
LabVIEW Real-Time Memory Management
Avoiding Contiguous Memory Conflicts (RT Module)


4. Why use the Real-Time Execution Trace Toolkit?

  • Debug run-time problems in single processor and multiprocessor applications
  • Identify shared resources and memory allocation
  • Verify expected timing behavior
  • Monitor CPU utilization and multicore interaction
  • Learn about the LabVIEW execution model


5. Use of other debugging tools

Along with the Real-Time Execution Trace Toolkit, we strongly advise using other standard debugging tools to ensure expected application performance.

VI-level debugging tools

  • Probes, breakpoints, execution highlighting
  • RT Debug String VI
  • User Code

System-level debugging tools

  • NI MAX or Distributed System Manager (On-Screen CPU Utilization)
  • VI Profiler


6. More Resources on Multicore Programming


Multicore Programming Fundamentals Whitepaper Series
