NI Real-Time Hypervisor Architecture and Performance Details

Publish Date: Dec 10, 2014

Overview

With NI Real-Time Hypervisor software, you can run the LabVIEW Real-Time and Windows XP (or Red Hat Enterprise Linux) OSs simultaneously on a single multicore controller. This paper explores how the NI Real-Time Hypervisor works, explains I/O access in detail, and examines the performance impact of the NI Real-Time Hypervisor on real-time applications. In addition, it provides several recommendations to optimize the performance of applications on NI Real-Time Hypervisor systems.

Table of Contents

  1. NI Real-Time Hypervisor Overview
  2. Introduction to Virtualization Technology
  3. Architecture of NI Real-Time Hypervisor Software
  4. Interrupt Routing Details
  5. Interrupt Latency and Performance
  6. Inter-OS Communication and Benchmarks
  7. Additional Resources and Ordering Information

1. NI Real-Time Hypervisor Overview

The NI Real-Time Hypervisor is a software package that enables running both LabVIEW Real-Time and either Windows XP or Red Hat Enterprise Linux OSs simultaneously on a single multicore PXI or industrial controller (there are two distinct versions of the hypervisor software). After you install the NI Real-Time Hypervisor on a controller, you can work within the general-purpose OS environment and run Windows or Linux applications while deploying LabVIEW Real-Time applications to the same controller. Note that the determinism of real-time applications is maintained on NI Real-Time Hypervisor systems.

To learn more about the advantages of the NI Real-Time Hypervisor, view the Benefits of Real-Time Hypervisor Systems white paper. In addition, see the NI Real-Time Hypervisor for Windows Walkthrough or NI Real-Time Hypervisor for Linux Walkthrough tutorials to gain insight on how to use a system with the NI Real-Time Hypervisor installed.

The NI Real-Time Hypervisor uses virtualization technology to help you run two OSs in parallel on a single computer. Read the following section to learn more about virtualization and the main types of virtualization software.

2. Introduction to Virtualization Technology

Virtualization refers to the abstraction of computer resources. In practice, virtualization technology enables running multiple OSs in parallel on the same computing hardware. While this technology has been used for years in the IT domain, engineers are increasingly taking advantage of virtualization to reduce the hardware requirements of their applications as well.

Virtualization of a given system is performed by a piece of software called a virtual machine monitor (VMM) or hypervisor. These terms are often used interchangeably. Individual OS instances running on a hypervisor are referred to as virtual machines (VMs). Essentially, hypervisor software is responsible for managing access to I/O devices (including those shared between OSs), facilitating inter-OS communication, and, in some cases, scheduling virtual machines (when running on shared CPUs).

There are two main categories of virtualization software: hosted and bare-metal. Hosted VMMs run on top of a "host" OS and rely on it for scheduling and I/O access. In contrast, bare-metal hypervisors interact directly with computer hardware and do not rely on a host OS. Bare-metal software packages are well-suited for engineering applications because they can be specially designed to support running real-time OSs in virtual machines. In addition, bare-metal virtualization software allows individual VMs to access I/O devices using their native drivers.

Figure 1. While hosted virtualization software runs on top of a host OS, bare-metal software runs directly on the underlying computer hardware.

For in-depth information on virtualization technology as well as underlying techniques that software packages use to accomplish virtualization, view the Virtualization Technology Under the Hood white paper. The remainder of this document addresses details specific to the NI Real-Time Hypervisor.

3. Architecture of NI Real-Time Hypervisor Software

The NI Real-Time Hypervisor is a bare-metal virtualization software package that runs LabVIEW Real-Time and Windows XP or Red Hat Enterprise Linux in parallel as virtual machines. Note that these are actually two separate software packages: the NI Real-Time Hypervisor for Windows, and the NI Real-Time Hypervisor for Linux Early Access Program. When installed on a supported controller, NI Real-Time Hypervisor software partitions the CPU cores in the system according to a user configuration. The NI Real-Time Hypervisor is based on low-level VirtualLogix VLX virtualization software.

Figure 2. NI Real-Time Hypervisor for Windows software runs Windows XP on one or more processor cores and LabVIEW Real-Time on the remaining cores (a version is also available for Red Hat Enterprise Linux).

When either LabVIEW Real-Time or the host OS attempts to access a shared resource or communicate with the other OS, the hypervisor is automatically invoked by special virtualization features built into the processor. Specifically, NI Real-Time Hypervisor systems use multicore Intel processors with integrated Intel VT technology.

For performance reasons, the NI Real-Time Hypervisor partitions I/O modules and RAM between OSs (in addition to CPU cores). You can enter desired OS assignments for each of the I/O modules in your chassis and indicate the desired division of RAM using a built-in utility called the NI Real-Time Hypervisor Manager. In some cases, the NI Real-Time Hypervisor Manager may request that you physically move your I/O modules to different chassis slots to avoid interrupt conflicts. To learn more about interrupt routing and why this is necessary, read the following section.

Figure 3. With the NI Real-Time Hypervisor Manager utility, you can assign individual I/O devices, RAM, and CPU cores to either LabVIEW Real-Time or the host OS.

Because each hypervisor call incurs some performance overhead (processor state must be saved and restored), the Real-Time Hypervisor is invoked only when necessary. For example, if a LabVIEW Real-Time application sends data to an I/O module assigned to it, the hypervisor does not need to be called.

4. Interrupt Routing Details

Both PXI and industrial controller chassis have four PCI interrupt request (PIRQ) lines in the backplane that are defined by the PCI specification. These lines are used to route interrupt signals from individual I/O modules in the chassis to the controller. While all four IRQ lines are available for access in every chassis slot, they are connected differently to the pins in each slot. See Figure 4 for an illustration of this routing.

Figure 4. Most PCI-based devices make use of only one physical interrupt line. Different I/O slots are connected to the four backplane interrupt lines (PIRQ A-D) in different ways.

Essentially, wiring each chassis slot slightly differently spreads multiple I/O modules across the physical interrupt lines for the best performance. Most I/O modules use just one interrupt line, so if the four backplane interrupt lines were wired identically to every slot, many devices would be forced to share a single line. Note that some I/O modules with bridges (such as MXI interfaces) may use more than one interrupt line, and others (such as PXI Ethernet modules) may use nonstandard interrupt lines.
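
To make this concrete, the short C sketch below models one common "barber pole" rotation, in which the backplane shifts each slot's pin-to-line mapping by one position per slot. This is purely illustrative: the 0-based slot numbering and the exact rotation are assumptions, and the actual routing in a given chassis backplane is vendor-specific.

#include <stdio.h>

/* Illustrative "barber pole" rotation: each slot shifts the mapping from
   module interrupt pins (INTA#-INTD#) to backplane lines (PIRQ A-D) by
   one position. Real chassis routing varies; consult your chassis docs. */
static int pirq_line(int slot, int pin)   /* pin: 0 = INTA# ... 3 = INTD# */
{
    return (slot + pin) % 4;              /* 0 = PIRQ A ... 3 = PIRQ D    */
}

int main(void)
{
    /* Modules that use only INTA# still land on different PIRQ lines. */
    for (int slot = 0; slot < 4; slot++)
        printf("Slot %d, INTA# -> PIRQ %c\n", slot, 'A' + pirq_line(slot, 0));
    return 0;
}

Under a pattern like this, four single-interrupt modules in adjacent slots each receive their own PIRQ line, which is exactly the spreading effect described above.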

In Real-Time Hypervisor systems, each of the four physical interrupt lines must be assigned to either the general-purpose host OS or LabVIEW Real-Time. Therefore, all I/O devices that use a given physical interrupt line are assigned to the same OS. You can choose which I/O devices are assigned to LabVIEW Real-Time and which to the general-purpose OS using the built-in Real-Time Hypervisor Manager utility. In some cases, the Real-Time Hypervisor Manager prompts you to physically move your I/O modules to different slots so that your desired I/O-to-OS assignments work with the interrupt line routing in the backplane (the utility automatically detects when this is necessary).

Note that some I/O-to-OS configurations are not supported; the number of modules that can be assigned to each OS is constrained by the four physical interrupt lines.

If you choose an invalid configuration (one that is not possible with any assignment of the four interrupt lines to LabVIEW Real-Time or the host OS), you can use the Real-Time Hypervisor Manager to reassign one or more modules to a different OS. In addition, if you need certain I/O modules to occupy certain chassis slots, you can use the Advanced tab in the Real-Time Hypervisor Manager to manually dictate module placement and troubleshoot any conflicts that occur. If you need to ensure that your I/O modules work in certain chassis slots before ordering, contact NI support.

The NI Real-Time Hypervisor for Windows is sold only online as part of complete PXI and industrial controller systems. When you configure a system with the Real-Time Hypervisor for Windows installed via the PXI Advisor, you can assign each I/O module that you order to LabVIEW Real-Time or Windows XP to validate your desired configuration before shipping. This is also helpful when planning Real-Time Hypervisor for Linux systems, although Real-Time Hypervisor for Linux software must be manually installed (no factory preinstallation is currently available).

Figure 5. With the online PXI Advisor, you can validate your desired I/O-to-OS assignments before ordering a Real-Time Hypervisor system.

Note: There are several additional complexities associated with interrupt routing in Real-Time Hypervisor systems that are not covered in this tutorial. For an in-depth discussion, see the paper In-Depth: Understanding NI Real-Time Hypervisor I/O to OS Assignments.

5. Interrupt Latency and Performance

The major factor affecting the performance of Real-Time Hypervisor systems is the frequency of interrupts. In short, each time the controller receives an interrupt, the hypervisor must be called to route it to the correct OS. This adds some latency to interrupts on Real-Time Hypervisor systems and means that the maximum deterministic loop rate of LabVIEW Real-Time applications may be lower on hypervisor systems than on traditional real-time-only systems.

Figure 6. In Real-Time Hypervisor systems, the hypervisor must route incoming interrupts to the appropriate OS.

Table 2 shows some worst-case benchmarks for a simple single-point data acquisition operation in LabVIEW Real-Time (using interrupts). In this benchmark application, a single data point is acquired from a data acquisition device, a PID algorithm is run on the data, and the resulting signal is generated. The application is run at the highest loop rate possible while maintaining deterministic performance. Results are shown for both a hypervisor system (PXI-8110) and a non-hypervisor system (PXI-8110 using only three cores).

Table 2. These benchmarks show the maximum deterministic loop rate of a simple single-point I/O application in LabVIEW Real-Time. Because this application uses a high frequency of interrupts, performance is affected on hypervisor systems.

As expected, in this case the maximum deterministic loop rate on a hypervisor system is lower than on a dedicated real-time target because the single-point benchmark application uses a very high frequency of interrupts. Additional interrupts were also generated on Windows XP while running this benchmark to simulate a worst-case scenario.
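
For reference, the structure of the benchmarked loop looks roughly like the following C sketch against the NI-DAQmx C API. The actual benchmark was a LabVIEW Real-Time VI, so this is only an equivalent outline; the channel names, PID gains, and timeout values are placeholders, and error handling is omitted.

#include <NIDAQmx.h>

/* Sketch only: acquire one point, run a PID step, generate the result.
   "PXI1Slot2/ai0" and "PXI1Slot2/ao0" are placeholder channel names. */
int main(void)
{
    TaskHandle ai = 0, ao = 0;
    float64 input = 0.0, output, error, integral = 0.0;
    const float64 setpoint = 1.0, kp = 0.5, ki = 0.01;  /* placeholder gains */

    DAQmxCreateTask("", &ai);
    DAQmxCreateAIVoltageChan(ai, "PXI1Slot2/ai0", "", DAQmx_Val_RSE,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCreateTask("", &ao);
    DAQmxCreateAOVoltageChan(ao, "PXI1Slot2/ao0", "",
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxStartTask(ai);
    DAQmxStartTask(ao);

    for (;;) {                      /* in LabVIEW RT, a timed loop body   */
        DAQmxReadAnalogScalarF64(ai, 10.0, &input, NULL);     /* acquire  */
        error = setpoint - input;                             /* PID step */
        integral += error;
        output = kp * error + ki * integral;
        DAQmxWriteAnalogScalarF64(ao, 1, 10.0, output, NULL); /* generate */
    }
}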

While the previous benchmark illustrates that the Real-Time Hypervisor can have a large performance impact on high-loop-rate applications that use interrupts, most typical real-time applications (below 5 kHz) running on the Real-Time Hypervisor are able to run deterministically at the desired loop rate. In other words, when running at loop rates below 5 kHz, the additional latency added to interrupts should not be significant enough to affect determinism. Higher deterministic loop rates may be possible depending on the application.

Using polling instead of interrupts can also help maximize performance on Real-Time Hypervisor systems. For example, the benchmark shown in Table 3 is for a large data acquisition application (more than 40 channels) running in LabVIEW Real-Time. Because this application uses polling, the performance impact of running on a hypervisor system versus a traditional real-time-only system is small.

Table 3. These benchmarks show a typical large data acquisition application (40+ channels) using polling. The maximum deterministic loop rate is only slightly affected on Real-Time Hypervisor systems.
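
If you use NI-DAQmx, one way to request polled rather than interrupt-driven single-point reads is the read wait mode property. The line below would be added to the earlier sketch before the tasks are started; verify that this property is supported on your device.

/* Ask DAQmx to poll for sample availability instead of sleeping on an
   interrupt; "ai" is the analog input task from the earlier sketch. */
DAQmxSetReadWaitMode(ai, DAQmx_Val_Poll);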

6. Inter-OS Communication and Benchmarks

You can use two physical Ethernet connections (one assigned to LabVIEW Real-Time and one assigned to Windows XP or Red Hat Enterprise Linux) to communicate between OSs on a Real-Time Hypervisor system. In addition, the Real-Time Hypervisor provides a built-in virtual Ethernet connection that is implemented in software and emulated in each OS. The virtual Ethernet connection appears to the general-purpose host OS just like a typical physical adapter (it is assigned an IP address as usual). Likewise, the LabVIEW Real-Time side of the virtual Ethernet adapter can be configured from the NI Measurement & Automation Explorer (MAX) configuration utility.

Note that the virtual Ethernet connection only allows communication between LabVIEW Real-Time and the host OS on hypervisor systems; it does not provide connectivity to outside networks. It is therefore recommended that you install an additional Ethernet module on PXI systems to communicate with outside networks from both LabVIEW Real-Time and Windows XP. An alternative is to bridge the virtual Ethernet connection with a physical one in the host OS; however, this approach is not compatible with some corporate IT networks.
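
Because the virtual adapter behaves like ordinary Ethernet, standard TCP/IP code works unchanged across it. The sketch below is a minimal host-side TCP client using POSIX sockets (as on a Linux host); the RT-side IP address and port are placeholders, so substitute the address that MAX reports for your system.

#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Minimal sketch: send one message from the host OS to a TCP listener
   running in LabVIEW Real-Time over the virtual Ethernet link. */
int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in rt = {0};
    rt.sin_family = AF_INET;
    rt.sin_port = htons(5000);                     /* placeholder port       */
    inet_pton(AF_INET, "10.0.0.2", &rt.sin_addr);  /* placeholder RT-side IP */

    if (connect(sock, (struct sockaddr *)&rt, sizeof rt) == 0) {
        const char msg[] = "hello from the host OS";
        write(sock, msg, sizeof msg);
    }
    close(sock);
    return 0;
}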

Furthermore, a virtual console connection (COM4, or ttyS3 under Linux) is provided on Real-Time Hypervisor systems. You can use the virtual console to view debug output from the LabVIEW Real-Time side of a hypervisor controller. This is useful for obtaining the IP address of LabVIEW Real-Time at boot-up or for viewing debug strings sent from LabVIEW Real-Time applications.
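
Any terminal program can read the virtual console. As a minimal example on a Linux host, the following C sketch dumps whatever the LabVIEW Real-Time side prints; /dev/ttyS3 is the conventional device path for the COM4-equivalent port and is an assumption here.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: echo LabVIEW Real-Time debug output from the virtual console. */
int main(void)
{
    char buf[256];
    ssize_t n;
    int fd = open("/dev/ttyS3", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);   /* RT-side debug strings */
    close(fd);
    return 0;
}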

Because both physical and virtual Ethernet connections use interrupts, they introduce performance overhead on Real-Time Hypervisor systems (see the Interrupt Latency and Performance section). In addition, sending data via virtual Ethernet is typically slightly slower than physical Ethernet when running on the hypervisor due to the need to compute checksums in software.

Below are some benchmarks comparing the throughput of Ethernet communication on both hypervisor and non-hypervisor systems.

Figure 7. Because Ethernet communication uses interrupts, throughput is affected on Real-Time Hypervisor systems. The virtual Ethernet connection on hypervisor systems must compute checksums in software, and is typically slightly slower than physical Ethernet connections.

In summary, both physical and virtual Ethernet throughput is decreased on Real-Time Hypervisor systems. The virtual Ethernet connection is designed to provide an easy communication mechanism between LabVIEW Real-Time and a host OS on the same controller using standard methods such as TCP/IP and shared variables. One way to improve communication throughput on any multicore LabVIEW Real-Time controller is to restrict background processes (communication and so on) to a single core while using the other cores to execute timed loops. In testing, this has improved throughput by as much as three times.
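
In LabVIEW Real-Time itself, you restrict work to cores by assigning timed loops to specific processors in the Timed Loop configuration. Purely as a general illustration of the same idea in C (this is Linux-specific and not LabVIEW RT code), a communication thread can be pinned to one core with a POSIX affinity call:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* General illustration only: pin the calling thread -- for example, a
   communication thread -- to core 0, leaving the remaining cores free
   for time-critical loops. */
static void pin_current_thread_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);
}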

Finally, in version 2.0 and higher of NI Real-Time Hypervisor software, you can allocate up to 95 MB of shared RAM for high-throughput data transfer between LabVIEW Real-Time and the host OS. The maximum throughput of this data transfer has been benchmarked at 600 MB/s, and APIs for accessing the shared memory are available for both LabVIEW and C.
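
The NI shared-memory APIs themselves are not documented in this paper, so the sketch below uses POSIX shared memory purely as an analogy for the usage pattern (map a named region, write into it, and let the peer read the same bytes); none of these calls are part of the NI API.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Analogy only: the map-write-read pattern that shared-RAM transfer
   follows, shown with standard POSIX shared memory calls. */
int main(void)
{
    const size_t size = 4096;
    int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    ftruncate(fd, (off_t)size);
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    strcpy(region, "block visible to the peer process"); /* zero-copy hand-off */
    munmap(region, size);
    close(fd);
    return 0;
}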

7. Additional Resources and Ordering Information

>> Configure a Real-Time Hypervisor for Windows system using the online PXI Advisor

>> Contact an NI representative to order NI Real-Time Hypervisor for Linux software, or to request additional information

>> View a walkthrough of NI Real-Time Hypervisor for Windows software

>> Explore a walkthrough of NI Real-Time Hypervisor for Linux software

>> Investigate hardware and software supported in NI Real-Time Hypervisor systems
