1. The Need for Distributed Systems
You can often break a large system down into components and implement each component's hardware and software separately. To provide more computational power and I/O capacity, you can implement each component of the system on its own set of hardware.
For example, an airplane’s flaps, slats, rudder, engines, ailerons, and so on all need to be simulated and/or tested. You can separate this system into multiple pieces of hardware, as shown in Figure 1, to take advantage of a modular approach.
Figure 1. You can use multiple PXI systems to simulate components of an airplane.
2. System-Level Integration Features
Using NI VeriStand, one or more operator (host) computers can communicate with one or more real-time execution targets with minimal configuration. NI VeriStand handles all of the communication between operator computers (hosts) and real-time execution targets. Figure 2 shows a simple topology involving one host and one target.
Figure 2. Simple Topology
The component of the host that communicates with the target is the NI VeriStand Gateway. The gateway is managed automatically, but it is a key concept for understanding larger topologies.
You can easily add targets to a topology inside the NI VeriStand System Explorer.
Figure 3. Add a target inside the System Explorer.
A single system definition file can contain any number of targets and can even mix different target types.
Figure 4. A single system definition file can contain multiple targets.
Each target can have its own specific hardware and software configuration, and all targets can be deployed to and interacted with from a single gateway.
Figure 5. Multiple targets can be deployed to and interacted with from a single host.
Additional host computers can communicate with the same target topology by communicating with another host’s gateway.
Figure 6. Multiple Hosts and Multiple Targets Topology
To accomplish this, the additional hosts simply need to change their NI VeriStand Gateway address to that of the remote host. The rest of the application remains the same.
3. Sharing Data Between Distributed Systems
To make a distributed system behave like a single system, the component systems must share data. This is what gives all of the different pieces the ability to work together, and it is commonly accomplished using reflective memory interfaces.
Reflective memory networks are real-time local area networks (LANs) in which each computer always has an up-to-date local copy of the shared memory set. These specialty networks are specifically designed to provide highly deterministic data communications. They deliver the tightly timed performance necessary for a variety of distributed simulation and industrial control applications. Reflective memory networks have benefited from advances in general-purpose data networks, but they remain an entirely independent technology, driven by different requirements and catering to applications for which determinism, implementation simplicity, and a lack of software overhead are key factors.1
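Reflective memory itself is a hardware technology, but its core idea can be sketched in software: every node keeps a full local copy of the shared memory set, every write is reflected to all other nodes, and reads never cross the network. The class and channel names below are purely illustrative, not part of any NI API.

```python
# Software analogy of a reflective memory network (real reflective memory is a
# dedicated hardware ring; all names here are invented for illustration).

class ReflectiveNetwork:
    """Propagates every write to the local copy held by each node."""
    def __init__(self):
        self.nodes = []

    def join(self):
        node = ReflectiveNode(self)
        self.nodes.append(node)
        return node

    def reflect(self, address, value):
        # In hardware this broadcast happens deterministically on the ring.
        for node in self.nodes:
            node.memory[address] = value

class ReflectiveNode:
    def __init__(self, network):
        self.network = network
        self.memory = {}  # up-to-date local copy of the shared memory set

    def write(self, address, value):
        self.network.reflect(address, value)

    def read(self, address):
        return self.memory[address]  # local read, no network round trip

net = ReflectiveNetwork()
flap_sim, engine_sim = net.join(), net.join()
flap_sim.write("flap_angle_deg", 12.5)
print(engine_sim.read("flap_angle_deg"))  # -> 12.5
```

The key property this models is that a reader never waits on the network: by the time any node reads an address, the last write has already been mirrored into its local memory.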
Reflective memory gives NI VeriStand the ability to share data between multiple targets while meeting the performance and determinism requirement of the entire system. Using reflective memory, you can split up a simulation model to execute on different target systems simultaneously. The input and output values are shared between the individual systems over reflective memory. GE Intelligent Platforms reflective memory boards are natively supported in NI VeriStand 2010 and later. Many components of NI VeriStand can use reflective memory to help you seamlessly create a multitarget system.
Additionally, NI VeriStand automatically distributes data between targets for various uses. For example, you can configure a stimulus (test) profile to run on Target A that references data on Target B. NI VeriStand automatically creates and enables a link between the targets to get the data. This is done automatically with no explicit configuration by the user.
Figure 7 shows an example system with reflective memory cards.
Figure 7. Multiple Chassis With Reflective Memory
4. Synchronizing a Distributed System
It is important to think about timing and synchronization requirements when designing a system. If the distributed hardware is not synchronized, the sampling of inputs and outputs does not happen simultaneously. Also, over time, drift can cause one component of the system to collect more samples than another even if they are configured for the same rate. If simulation is your goal, this can cause problems. For example, one flap simulation could be in a different time state than the other. Also, data logging and analysis can be corrupted as a result of the data not being from the same moment in time.
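The effect of drift is easy to quantify. The short calculation below (plain arithmetic, not NI VeriStand code) shows how many extra samples a clock that is off by a given parts-per-million error accumulates relative to its peer:

```python
# Illustrative arithmetic: sample-count divergence caused by clock drift
# between two targets configured for the same nominal rate.

def extra_samples(rate_hz, drift_ppm, seconds):
    """Samples accumulated beyond nominal due to a drift of drift_ppm."""
    return rate_hz * seconds * drift_ppm / 1e6

# Two targets both configured for 1 kHz; one crystal runs 50 ppm fast.
# After one hour it has taken 180 more samples than the other target.
print(extra_samples(rate_hz=1000, drift_ppm=50, seconds=3600))  # -> 180.0
```

Even a modest 50 ppm oscillator error therefore puts the two simulations 180 time steps apart after an hour, which is why a shared hardware clock matters.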
An overview of Synchronization Basics covers many of the details such as clock drift and clock skew.
Synchronizing a distributed system involves hardware synchronization and software synchronization. Optionally, you can synchronize the entire system to an external time reference such as GPS or IRIG.
Hardware synchronization means each piece of hardware in the system shares a hardware reference clock for timing and a start trigger for beginning I/O tasks. Each piece of hardware in the system derives its own clocks from the same hardware reference clock, and each piece of hardware starts at the same time.
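One common way a device derives its own clock from a shared reference is integer division. The sketch below (conceptual, not driver code) shows why two boards locked to the same 10 MHz reference produce phase-locked sample clocks:

```python
# Conceptual sketch: boards derive sample clocks by integer division of a
# shared reference, so the derived clocks cannot drift relative to each other.

REF_CLK_HZ = 10_000_000  # shared 10 MHz reference clock

def divider_for(rate_hz, ref_hz=REF_CLK_HZ):
    """Integer divisor that produces rate_hz from the reference clock."""
    if ref_hz % rate_hz:
        raise ValueError(f"{rate_hz} Hz is not an integer division of {ref_hz} Hz")
    return ref_hz // rate_hz

# Two boards in different chassis, both locked to the same reference:
print(divider_for(1000))    # 1 kHz sample clock -> divide by 10000
print(divider_for(50_000))  # 50 kHz sample clock -> divide by 200
```

Because both derived clocks count edges of the same physical reference, they stay aligned indefinitely; the shared start trigger then ensures both begin counting from the same edge.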
Examples of common hardware timing and synchronization tasks include simultaneously sampling on several data acquisition boards, updating the PWM duty cycle on the digital output of a field-programmable gate array (FPGA) board while updating data acquisition analog outputs, handshaking between a digital multimeter (DMM) and switch, phase-lock looping a waveform generator with a digitizer, or synchronizing an RF downconverter with an intermediate frequency (IF) digitizer.
You can create an NI VeriStand distributed system with an NI PXI chassis. PCI eXtensions for Instrumentation (PXI) is a rugged PC-based platform that offers a high-performance, low-cost deployment solution for measurement and automation systems. PXI combines the Peripheral Component Interconnect (PCI) electrical bus with the rugged, modular Eurocard mechanical packaging of CompactPCI and adds specialized synchronization buses and key software features.
The chassis contains the high-performance PXI backplane, which includes the PCI bus and timing and triggering buses. PXI modular instrumentation adds a dedicated 10 MHz system reference clock, PXI trigger bus, star trigger bus, and slot-to-slot local bus to address the need for advanced timing, synchronization, and sideband communication while not losing any PCI advantages.
The easiest way to share a reference clock between PXI chassis is with the CLK10 BNC connections on the rear of the chassis. Almost all modern PXI chassis have these BNC terminals. Each chassis has a CLK10 out connection and a CLK10 in connection. By connecting the CLK10 out of one chassis to the CLK10 in of another chassis, you can ensure you are using the same reference clock.
To share a start trigger, a National Instruments DAQ device is recommended. One chassis can export a trigger for one or many other chassis to use as a start trigger.
You can see an example hardware synchronization configuration in Figure 8. In this configuration, an NI PXI-1042 master chassis exports its CLK10 as a time reference to N other PXI chassis with a BNC cable. All chassis import an external start trigger. You can learn more about multichassis synchronization by reading Advanced Timing and Synchronization System Design.
Figure 8. Hardware Synchronization of Multiple Chassis
NI VeriStand handles all of the synchronization of hardware within one chassis automatically, and you can choose from several options for exporting and importing sample clocks and triggers to other targets.
After adding DAQ devices to a system configuration in the NI VeriStand System Explorer, you can see in Figure 9 that one of the DAQ device names is bold. NI VeriStand has automatically chosen this device to be the master DAQ device for that chassis. The master DAQ device accepts an external trigger to enable multitarget synchronization. NI VeriStand synchronizes nonmaster DAQ devices to the master DAQ device within this single chassis, and they are not involved in multichassis synchronization.
Figure 9. The device in bold has been selected as the master DAQ device.
You can customize the master DAQ device selection as well as triggering on the chassis page. Select the chassis in the configuration tree to open the page shown in Figure 10. The sections important for hardware synchronization of multiple chassis are highlighted.
Figure 10. Chassis Importing a Trigger on PFI 6
In Figure 10, the chassis is configured to import a trigger to Dev1 on PFI 6. Consult the hardware manual for the Dev1 device to find the terminal for PFI 6.
After creating these configurations and wiring the BNC and trigger lines, you can deploy them to the real-time execution targets running NI VeriStand to provide hardware synchronization.
If the chassis you are using does not have CLK10 BNC connections or if you require even greater synchronization performance, you can use an NI 665x timing and synchronization module to perform this same function. If you decide to use an NI 665x module, make sure each system configuration has the “10MHz PLL” timing and sync device added and configured to either export or import the 10 MHz clock.
Software synchronization means several pieces of code in the system (in this case, the NI VeriStand Real-Time Engine) share a common clock for execution and a start trigger to begin execution at the same time.
The NI VeriStand Real-Time Engine is designed to use hardware-timed single-point I/O (HWTSPIO) when the appropriate hardware devices are available. HWTSPIO is a feature of DAQ hardware and software that allows locking software execution to physical hardware clocks. The locking of software to hardware is available only for analog input, so a PXI system configuration must have at least one analog input channel even if it is not used.
Therefore, if the hardware is synchronized as described above and an analog input channel is present in each configuration, each target's NI VeriStand Real-Time Engine is automatically synchronized.
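The lockstep that results can be illustrated with a toy simulation (not the real engine): engines that iterate on edges of a shared hardware clock produce identical iteration timestamps, while an engine on its own free-running oscillator drifts away.

```python
# Toy comparison: hardware-timed iteration on a shared clock vs. an
# unsynchronized free-running clock. All numbers are illustrative.

def iteration_times(rate_hz, count, drift_ppm=0.0):
    """Timestamps of hardware-timed iterations for a (possibly drifting) clock."""
    period = (1.0 / rate_hz) * (1.0 - drift_ppm / 1e6)
    return [i * period for i in range(count)]

synced_a = iteration_times(1000, 5)                 # locked to shared clock
synced_b = iteration_times(1000, 5)                 # same clock: identical
free_run = iteration_times(1000, 5, drift_ppm=50)   # own oscillator: drifts

print(synced_a == synced_b)  # -> True
print(synced_a == free_run)  # -> False
```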
Synchronizing to a Time Reference
In some cases, system components must be synchronized not only to each other but also to an external time reference. Because the above approach to synchronizing the components of the system involves a master target that shares its clock and trigger signal with the rest of the distributed system, you can achieve time reference synchronization simply by synchronizing the master target to the external time reference.
The Clock 10 Discipline Add-On for NI VeriStand gives the NI VeriStand Engine the ability to synchronize to an external time reference. The external time reference can be any of the supported references for an NI PXI-6682 module. The add-on uses a combination of the PXI-6682 and another timing and synchronization device to discipline the PXI chassis to the external time reference. You can find more details on the page for the add-on.
Figure 11 shows example distributed system components that are synchronized to each other and to an external time reference.
Figure 11. Hardware Synchronization of Multiple Chassis With External Time Reference
With NI VeriStand, you can configure real-time I/O, stimulus profiles, data logging, alarming, and other tasks; implement control algorithms or system simulations by importing models from a variety of software environments; build test system interfaces quickly with a run-time editable user interface complete with ready-to-use tools; and add custom functionality using NI LabVIEW, NI TestStand, ANSI C/C++, .NET, Python, and other software environments.
By taking advantage of NI VeriStand, you can easily create a distributed HIL, test cell, real-time test, or monitoring system using the out-of-the-box multitarget features.
6. Additional Information