This tutorial assumes that you are familiar with the NI Real-Time Hypervisor. For a review of this information, visit the Benefits of NI Real-Time Hypervisor Systems paper.
1. Factory Preinstallation
NI Real-Time Hypervisor for Windows systems are typically ordered through the PXI Advisor or by talking with an NI Sales representative. These systems are preinstalled with all necessary OS and hypervisor software so that you can get up and running as quickly as possible.
2. Step 1: Configuring Resource Partitioning Between OSs
When using a PXI or industrial controller system with the NI Real-Time Hypervisor installed, it is necessary to partition I/O devices, CPU cores*, and RAM between Windows XP and LabVIEW Real-Time. These resources are partitioned rather than shared for performance reasons.
The first step in this partitioning is to boot into Windows only via the boot menu at startup (the menu lets you choose between booting Windows XP alone or booting the hypervisor, which runs Windows XP and LabVIEW Real-Time in parallel). When booted into Windows XP only, the Windows OS can detect all underlying hardware resources in the system.
Next, to actually perform the partitioning you can use the Real-Time Hypervisor Manager utility that is preinstalled on NI Real-Time Hypervisor systems. After opening the utility from Start >> Program Files >> National Instruments, you will see a dialog that looks like this:
Figure 1. You can partition I/O devices, CPU cores, and RAM between Windows XP and LabVIEW Real-Time using the NI Real-Time Hypervisor Manager utility.
The total usable RAM can be divided between OSs by double-clicking the Memory item in the list. In addition, up to 95 MB of RAM can be allocated for inter-OS communication; this is referred to as Shared Memory*. Likewise, each CPU core can be assigned to Windows XP or LabVIEW Real-Time using the drop-down controls on the right side of the utility. For example, you can assign 2 cores to LabVIEW Real-Time and 2 to Windows XP, or 3 to LabVIEW Real-Time and just one to Windows XP if desired. Cache information is also shown to aid in CPU core partitioning decisions.
Finally, I/O devices including network interfaces, modular instruments, and more are also shown in the utility and can be assigned to LabVIEW Real-Time or Windows XP using the drop-down menu on the right. Note that GPIB interfaces onboard a given controller may be assignable to Windows XP only.
In most cases, the settings on the Basic tab can be used to configure partitioning of resources on a Real-Time Hypervisor system quickly and easily. The Advanced tab is available, however, for partitioning lower-level resources, viewing information about interrupt line usage, and performing troubleshooting in the case of interrupt conflicts.
After I/O devices, RAM, and CPU cores are partitioned as desired, these settings can be saved by pressing the Apply button in the upper-right corner of the utility. The Apply button does not immediately partition the resources -- instead it saves the configuration information for use upon reboot of the system. In addition, after clicking the Apply button, a window will open in your web browser with instructions for booting into the Real-Time Hypervisor.
Figure 2. After saving a Real-Time Hypervisor Manager configuration using the Apply button, an instruction window will open with next steps for booting into the Real-Time Hypervisor.
It is important to note that due to interrupt line limitations, certain I/O modules may need to be physically moved in a PXI or PXIe chassis after shutting down the system. The first and most important piece of information that the instruction window contains is directions for moving the I/O modules (if needed). A table at the bottom of the window lists the correct slot for each I/O module. Because of these interrupt line limitations, it is also possible that your desired I/O to OS assignments will not be possible; in this case the Apply button will be disabled in the Real-Time Hypervisor Manager utility and you will be prompted to change one or more OS assignments.
To ensure that your desired I/O to OS assignments will work prior to ordering, the National Instruments PXI Advisor allows you to specify the OS assignment for each I/O module when ordering a hypervisor system (you will be notified if your assignments are invalid). When you order a complete Real-Time Hypervisor system from NI, all I/O modules will be shipped assigned to your desired OS and in the appropriate chassis slot. If you need to place certain modules in certain slots (e.g. Timing and Synchronization modules, VSAs, VSGs, etc.), please contact National Instruments prior to ordering to verify that your desired slot placement and I/O assignments will be possible.
The remainder of the information given in the instruction window explains how to boot into the Real-Time Hypervisor, communicate between OSs, and configure the LabVIEW Real-Time target; these items will be discussed throughout the remainder of this tutorial. The Real-Time Hypervisor Manager utility can be re-run at any time to adjust resource partitioning.
* CPU cores are only user-assignable to OSs when using Real-Time Hypervisor 2.0 and higher versions. In addition, Shared Memory is only available when using Real-Time Hypervisor 2.0 and above.
3. Step 2: Booting into the Real-Time Hypervisor
After shutting down the Real-Time Hypervisor system and moving any I/O modules as instructed, the next step is to boot into the Real-Time Hypervisor (rather than Windows-only as in the previous step). NI Real-Time Hypervisor systems are triple-boot systems: it is possible to boot into Windows-only (via boot menu), LabVIEW Real-Time-only (via BIOS settings), or Windows XP and LabVIEW Real-Time simultaneously (via boot menu) on a Real-Time Hypervisor controller.
To boot into the Real-Time Hypervisor and use the settings from Step 1 above, the "NI Real-Time Hypervisor" item should be selected from the boot menu.
Figure 3. You can boot into the NI Real-Time Hypervisor by selecting the corresponding item from the boot menu on hypervisor systems. Settings last applied by the Real-Time Hypervisor Manager will be used when booting.
4. Step 3: Accessing Real-Time Target Settings in MAX
When booting is complete, only the Windows XP OS will appear on screen. However, LabVIEW Real-Time will also be running in parallel on the same controller (there is no way to detect this from the screen alone). To verify that the LabVIEW Real-Time OS is booted and accessible, you can use NI Measurement & Automation Explorer (MAX) as with traditional remote LabVIEW Real-Time systems.
The real-time side of the Real-Time Hypervisor system should appear under the Remote Systems category, provided that the Windows XP and LabVIEW Real-Time sides of the hypervisor controller are connected on the same network subnet. By default, NI Real-Time Hypervisor systems are set up to use a Virtual Ethernet connection, which eliminates the need for physical Ethernet cabling between two network interfaces on the same controller. You can read more about the Virtual Ethernet connection in the next section.
Figure 4. When booted into the Real-Time Hypervisor, you can configure the LabVIEW Real-Time side of the system just like a typical remote LabVIEW Real-Time system from Measurement & Automation Explorer (MAX)
In MAX, you can view the status of the LabVIEW Real-Time OS, change networking settings, install software on the LabVIEW Real-Time target, and more -- just as with a remote LabVIEW Real-Time system. You can also reboot the LabVIEW Real-Time side of the controller at any time without rebooting Windows XP; the opposite is not possible, so restarting Windows XP requires rebooting the entire system. If Windows XP encounters an error, LabVIEW Real-Time will continue to run applications deterministically but will lose disk access; restoring disk access likewise requires a full system reboot.
Keep in mind that since resources such as I/O modules, RAM, and CPU cores are partitioned between OSs in Real-Time Hypervisor systems, each OS will only see the devices that it "owns" when the system is running in hypervisor mode. For example, if you assign 2 cores to Windows XP on a quad-core controller, then after booting into the Real-Time Hypervisor only 2 cores will appear in the Task Manager Performance window.
5. Step 4: Examining Inter-OS Communication Options
For the most part, once a Real-Time Hypervisor system has been booted into the hypervisor, it acts just like a traditional LabVIEW Real-Time system and remote Windows host. However, there are a few additional features available on hypervisor systems to assist with communicating between OSs on the same controller (without using a physical cable).
Virtual Console Connection
You can view console output from the LabVIEW Real-Time side of a hypervisor system in Windows using a built-in Virtual Console connection. This connection is an emulated serial port that is implemented in software but appears to Windows XP as if it were a physical COM port (COM4). To connect to the Virtual Console, you can use the Windows HyperTerminal application by clicking Start >> Run, typing "hypertrm", and pressing Enter.
After connecting to the COM4 port with a baud rate of 115200 (specified in the Real-Time Hypervisor Manager instruction page), you can view debug output from LabVIEW Real-Time applications and boot information during LabVIEW Real-Time restarts.
Figure 5. The Virtual Console (COM4) connection enables viewing LabVIEW Real-Time boot information and application debug output from Windows XP.
Note that it can be difficult to use the Virtual Console connection to view LabVIEW Real-Time output during boot of the entire system, because the serial buffers are small and most of the boot output will be complete by the time you can open the HyperTerminal connection. To help with this task, there is an option in the Real-Time Hypervisor Manager menu that can force LabVIEW Real-Time to wait to boot until the HyperTerminal window is opened and a key is pressed.
Virtual Ethernet Connection
To provide inter-OS data communication without wires, a Virtual Ethernet connection is also provided on Real-Time Hypervisor systems. The Virtual Ethernet connection is an emulated connection that is implemented in software and appears to both LabVIEW Real-Time and Windows XP as a typical physical NIC. You can use this connection to connect to the LabVIEW Real-Time side of the system from Windows XP, or to transmit data between LabVIEW and LabVIEW Real-Time applications using standard methods such as Shared Variables or TCP VIs.
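Because the Virtual Ethernet connection behaves like an ordinary network link, the same TCP patterns used with LabVIEW's TCP VIs apply. The following sketch illustrates the pattern in Python over loopback so it is runnable anywhere; on an actual hypervisor system, the host address would instead be the Virtual Ethernet IP of the LabVIEW Real-Time side (the port and message below are illustrative only):

```python
import socket
import threading

# Loopback stands in for the Virtual Ethernet link so this sketch runs anywhere.
HOST = "127.0.0.1"

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))          # port 0: let the OS pick a free port
srv.listen(1)
PORT = srv.getsockname()[1]

def rt_side():
    """Stands in for a TCP listener running on the LabVIEW Real-Time side."""
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"ack:" + data)

listener = threading.Thread(target=rt_side)
listener.start()

# Windows XP side: connect and exchange one message, as the TCP VIs would.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"measurement=1.23")
    reply = cli.recv(1024)

listener.join()
srv.close()
print(reply.decode())  # → ack:measurement=1.23
```

The same request/acknowledge exchange works unchanged whether the two endpoints are on separate machines or on the two OS sides of a single hypervisor controller.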
In Windows XP, the Network Connections window in the Control Panel will show two or more items when the Virtual Ethernet connection is enabled. In other words, you will see each physical NIC in the system plus one more emulated interface that is not physically present. The Virtual Ethernet NIC acts just like a physical NIC in Windows XP, and you can configure its IP address, subnet settings, etc. as with a typical physical adapter (note that the connection may not necessarily be named "Virtual Ethernet" by default).
Figure 6. You can use the Virtual Ethernet connection to configure the LabVIEW Real-Time side of a hypervisor system from Windows (on the same controller), or pass data between OSs using standard methods such as Shared Variables or TCP VIs.
The Virtual Ethernet connection will also appear as a NIC under LabVIEW Real-Time, and can be configured from Measurement & Automation Explorer. Note that the Virtual Ethernet adapter is set as the primary adapter by default on LabVIEW Real-Time when enabled. For communication to work between Windows XP and LabVIEW Real-Time, the Windows XP Virtual Ethernet NIC and the LabVIEW Real-Time Virtual Ethernet NIC must be set to the same subnet. You can also designate a physical NIC as primary instead, if you prefer to configure the LabVIEW Real-Time side of a hypervisor system or deploy applications from another machine.
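The same-subnet requirement above can be checked before applying settings. As a minimal sketch, Python's standard ipaddress module makes the check explicit (the addresses and mask below are hypothetical examples, not defaults of any hypervisor system):

```python
import ipaddress

# Hypothetical addresses for the two Virtual Ethernet endpoints.
windows_nic = ipaddress.ip_interface("192.168.10.1/24")
rt_nic = ipaddress.ip_interface("192.168.10.2/24")

# Two interfaces can communicate directly only if they share a network.
same_subnet = windows_nic.network == rt_nic.network
print(same_subnet)  # → True

# Moving the RT side to a different /24 breaks the requirement.
rt_misconfigured = ipaddress.ip_interface("192.168.20.2/24")
mismatch = windows_nic.network == rt_misconfigured.network
print(mismatch)  # → False
```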
Shared Memory (Real-Time Hypervisor 2.0 and above)
For applications that require a high throughput of data between Windows XP and LabVIEW Real-Time, a block of inter-OS Shared Memory can also be reserved using the NI Real-Time Hypervisor Manager utility (up to 95 MB). This memory can be accessed via LabVIEW VIs in LabVIEW for Windows or LabVIEW Real-Time applications, and via C applications as well. A theoretical maximum of 600 MB/s data transfer can be achieved when using Shared Memory, with actual rates depending on data sizes, cache architecture and core assignments, and application structure.
Figure 7. You can transfer data at rates of up to a theoretical 600 MB/s using inter-OS Shared Memory in Real-Time Hypervisor systems.
The Shared Memory API is made up of low-level read and write functions, as well as some synchronization functions (e.g. triggers and mutexes) to help you implement a data transfer scheme. Example code including Shared Memory access is also included in the LabVIEW Example Finder.
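The NI Shared Memory VIs and C functions themselves are not reproduced here (see the LabVIEW Example Finder for the real API). As a conceptual analogue only, the low-level read/write pattern -- one side packing data into a named block, the other side attaching to it by name and unpacking -- can be sketched with Python's standard multiprocessing.shared_memory module; the block size and payload below are illustrative:

```python
from multiprocessing import shared_memory
import struct

# Reserve a small block. On a hypervisor system the block (up to 95 MB) is
# reserved by the Real-Time Hypervisor Manager, not by the application.
shm = shared_memory.SharedMemory(create=True, size=64)

# "Writer" side: pack a double into the block at offset 0.
struct.pack_into("<d", shm.buf, 0, 3.14159)

# "Reader" side: a second handle attaching by name sees the same bytes,
# just as the other OS sees the same physical memory region.
reader = shared_memory.SharedMemory(name=shm.name)
(value,) = struct.unpack_from("<d", reader.buf, 0)
print(value)  # → 3.14159

reader.close()
shm.close()
shm.unlink()
```

Note that the real inter-OS API additionally provides triggers and mutexes because, unlike this single-process sketch, the two OSs must coordinate concurrent access to the block.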
6. Step 5: Deploying LabVIEW Real-Time Applications
Deploying and running LabVIEW Real-Time applications on a Real-Time Hypervisor system is identical to working with a stand-alone LabVIEW Real-Time target and remote Windows host (except that both OSs run on the same controller). You can choose to develop LabVIEW Real-Time applications on the Windows XP side of a hypervisor controller, and then deploy those applications to the real-time side using Virtual Ethernet. Alternatively, you can develop LabVIEW Real-Time applications remotely and deploy them to the real-time side of a hypervisor system via physical Ethernet (with Virtual Ethernet disabled).
Figure 8. When working with a Real-Time Hypervisor system, you can deploy applications to the LabVIEW Real-Time side of the system locally using Virtual Ethernet, or remotely via a physical Ethernet connection.
After a LabVIEW Real-Time application is deployed to the real-time side of a Real-Time Hypervisor system, it can communicate with other LabVIEW for Windows applications (such as a user interface) running on the same system using Virtual Ethernet or Shared Memory. In addition, LabVIEW Real-Time applications on hypervisor systems can communicate with remote systems via any physical NICs assigned to LabVIEW Real-Time.
7. Backing Up, Restoring, and Replicating Real-Time Hypervisor Systems
Entire NI Real-Time Hypervisor systems can be backed up using an imaging utility that supports backing up boot sector and partition information as well as data. After purchasing an NI Real-Time Hypervisor system, you can use the instructions in this document to back up, restore, or replicate a system with the NI Real-Time Hypervisor installed: Best Practice for Backing Up, Restoring, and Replicating NI Real-Time Hypervisor Systems.