Introduction to PXImc - Technology for High Performance Test, Measurement & Control Applications

Publish Date: Aug 30, 2013

Overview

The PXI MultiComputing (PXImc) specification, announced by the PXI Systems Alliance, allows two or more intelligent systems to exchange data via PCI Express. The specification defines the hardware and software requirements to connect two or more intelligent systems using PCI Express-based interfaces through a non-transparent bridge (NTB).

Because of the excellent data transfer characteristics of PCI Express, PXImc enables PXI systems to transfer data at multi-gigabytes per second of practical data throughput with only a few microseconds of latency. This capability can help lower test times for automated production test systems that are computationally bound. It can also be used in applications such as real-time tests (or hardware-in-the-loop tests) and structural tests that need a large number of distributed PXI systems to share data with low latency.

PXImc uses low-cost, off-the-shelf hardware technology and provides an exceptional combination of performance and value, especially when compared to alternative solutions that use custom interfaces.

This whitepaper explores the technical details of PXImc and examines the capabilities of NI products and the specific use cases they address.

Table of Contents

  1. How PXImc Works
  2. NI PXIe-8383mc PXImc Adapter Module
  3. NI-PXImc Software Driver
  4. Possible Topologies Using the NI PXIe-8383mc
  5. NI PXIe-8383mc Link Performance
  6. Additional Resources

1. How PXImc Works

PXImc provides a high-bandwidth, low-latency communications model by using PCI Express as the physical communication layer. Two independent systems with their own PCI Express root complexes cannot be directly connected via PCI Express because of various contentions between the two PCI domains such as bus ownership and endpoint resource allocations.

 

Using an NTB helps address these contentions by logically separating the two PCI domains while providing a mechanism for translating certain PCI transactions in one PCI domain into corresponding transactions in another PCI domain.

 

Figure 1 illustrates this concept. Both systems A and B have complete control of resource allocation within their own domains, and the presence of the NTB does not affect either system’s resource allocation algorithm. 

 


Figure 1. Two Systems Connected via PCI Express Using a Non-Transparent Bridge (NTB)

 

The NTB responds to resource requests from the PCI root complex just like any other PCI endpoint in the system, requesting some amount of system resources such as memory-mapped I/O (MMIO) space and interrupts. As resource allocation occurs on both systems A and B, the NTB acquires resources in both PCI domains. Once the OS loads the PXImc driver on both systems, the NTB provides a memory-mapped mechanism for transferring data between the two systems. Any data written to the NTB's MMIO space in System A is transferred into the PXImc driver's allocated memory in System B, and vice versa.
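To make the memory-window idea concrete, the minimal Linux-style sketch below writes through a mapped NTB BAR. The sysfs path, BAR index, and window size are illustrative assumptions, not details of the NI-PXImc implementation.

    /*
     * Conceptual sketch only: how a write through the NTB's MMIO window in
     * System A ends up in System B's memory. Path, BAR index, and window
     * size are assumptions for illustration.
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical BAR of the NTB as enumerated in System A's PCI domain. */
        int fd = open("/sys/bus/pci/devices/0000:05:00.0/resource2", O_RDWR);
        if (fd < 0)
            return 1;

        /* Map the MMIO window the NTB exposes in System A (size assumed). */
        size_t win_size = 1 << 20;
        uint8_t *win = mmap(NULL, win_size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (win == MAP_FAILED)
            return 1;

        /* Each store to the window becomes a PCI Express write that the NTB
         * translates into System B's PCI domain, landing in the buffer the
         * PXImc driver allocated there. */
        const char msg[] = "hello from System A";
        memcpy(win, msg, sizeof msg);

        munmap(win, win_size);
        close(fd);
        return 0;
    }

In practice, the NI-PXImc driver performs this mapping and the corresponding DMA transfers for you; the sketch only shows where the NTB's address translation fits in.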

 


Figure 2. Communication Mechanism Between Two PCI Domains by Using NTBs

 

The PXImc specification defines precise hardware and software component requirements, providing a standardized protocol for communication between two systems over PCI Express. On the hardware side, the specification resolves the issues that prevent two independent systems from communicating directly over PCI Express. On the software side, it defines a communication scheme that allows each system to discover and configure its own resources for communicating with the other system.

 


2. NI PXIe-8383mc PXImc Adapter Module

The NI PXIe-8383mc is the industry's first PXImc module. It uses a x8 PCI Express 2.0 link to provide up to 2.7 GB/s of data throughput and 5 µs of one-way latency. You can use this module to connect two PXI Express chassis, each with its own system controller, or to connect a PXI Express chassis with a system controller to an external computer. The NI PXIe-8383mc connects over a cabled PCI Express link to one of two MXI-Express adapters: the NI PXIe-8384 module or the NI PCIe-8381 board. Figures 3 and 4 illustrate the configurations that the NI PXIe-8383mc enables.

 

Figure 3. The NI PXIe-8383mc PXImc adapter module interfaces to the NI PXIe-8384 MXI-Express module to enable configurations with multiple PXI chassis.

 

 

Figure 4. The NI PXIe-8383mc interfaces to the NI PCIe-8381 MXI-Express adapter for attaching additional PCs to PXI systems.

 


3. NI-PXImc Software Driver

You need the NI-PXImc driver to use the NI PXIe-8383mc module. The NI-PXImc driver abstracts the low-level data transfer protocol and presents a simple, highly efficient API for building a solution based on PXImc. Figure 5 shows the basic function calls you need to write and read data over a PXImc link. In the simplest form, a writer session on one system transfers data to a reader session on the other system.
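The actual entry points are defined by the NI-PXImc C API shown in Figure 5; the sketch below only illustrates the one-writer, one-reader pattern described above using hypothetical function, type, and resource names (PximcOpenWriterSession, PximcWrite, and "PXImc1" are placeholders, not the real API).

    /* Writer-side sketch of the one-writer/one-reader pattern. The prototypes
     * below are placeholders for the NI-PXImc C API, so this only links
     * against a real or stub implementation of them. */
    #include <stdint.h>
    #include <stdio.h>

    typedef void *PximcSession;   /* opaque session handle (assumed) */

    extern int PximcOpenWriterSession(const char *resource, PximcSession *out);
    extern int PximcWrite(PximcSession s, const void *buf, size_t len,
                          int32_t timeout_ms);
    extern int PximcCloseSession(PximcSession s);

    int main(void)
    {
        PximcSession session;
        double samples[1024] = {0};

        /* "PXImc1" is a placeholder resource name for the NI PXIe-8383mc link. */
        if (PximcOpenWriterSession("PXImc1", &session) != 0) {
            fprintf(stderr, "failed to open writer session\n");
            return 1;
        }

        /* Block for up to 1 s while the data moves across the NTB to the
         * reader session on the peer system. */
        if (PximcWrite(session, samples, sizeof samples, 1000) != 0)
            fprintf(stderr, "write failed or timed out\n");

        PximcCloseSession(session);
        return 0;
    }

The reader side mirrors this pattern with a reader session and a blocking read call.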

 

Figure 3. Functional Calls for the NI-PXImc Driver

 

The NI-PXImc driver supports multiple concurrent data transfer sessions per NI PXIe-8383mc module. As an example, you can use one session to exchange command and control information between the two systems and another session to transfer actual data. The NI-PXImc driver also supports multiple NI PXIe-8383mc modules per system.
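Continuing the hypothetical naming from the sketch above, splitting traffic into a command channel and a data channel could look like this (the session names are placeholders):

    /* Two concurrent writer sessions over one NI PXIe-8383mc link
     * (hypothetical API and session names, continued from the sketch above). */
    PximcSession cmd, data;

    /* One session carries small command and control messages... */
    PximcOpenWriterSession("PXImc1/command", &cmd);

    /* ...while a second session streams the bulk measurement data. */
    PximcOpenWriterSession("PXImc1/data", &data);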

 


4. Possible Topologies Using the NI PXIe-8383mc

The NI PXIe-8383mc, NI PXIe-8384, and NI PCIe-8381 together offer a set of flexible interfaces for creating numerous system topologies. Figures 6 and 7 show basic tree topologies. Other topologies, such as line and ring, are possible, but National Instruments recommends the tree topology because it provides the best balance of performance, scalability, and usability.

 

Figure 6. Tree Topology With Multiple PXI Systems

 

Figure 7. Tree Topology With a PXI System and Multiple Attached PCs

 

The NI-PXImc driver currently supports only point-to-point communication. In other words, only systems that are directly connected to each other can communicate. In the tree topology illustrated above, System B can communicate directly with System A but not with System C.

 

The PXI chassis that contains the PXIe-8383mc module must be powered on before the system connected to the front panel connector is powered on. This is required so that the system attached via the front panel of the PXIe-8383mc can recognize the presence of the NTB and allocate it the required resources. Because of this requirement, all systems in a particular configuration need to be powered on in a particular sequence.

 


5. NI PXIe-8383mc Link Performance

The NI PXIe-8383mc uses a x8 PCI Express 2.0 link to provide up to 2.7 GB/s of data throughput and 5 µs of one-way latency. Figure 8 shows how the PCI Express switch with the NTB is wired to the backplane connector and to the front panel connector. 

 

Figure 8. Side View of the NI PXIe-8383mc Highlighting the PCI Express Switch With the NTB and Its Connectivity to the Backplane and the Front Panel Connectors

 

The bandwidth available to the NI PXIe-8383mc from the PXI chassis and controller determines the actual throughput. PXI chassis and controllers offer different amounts of PCI Express bandwidth, ranging from x8 PCI Express 2.0 down to x1 PCI Express 1.0. An NI PXIe-1085 chassis combined with an NI PXIe-8135 embedded controller provides the maximum bandwidth to the NI PXIe-8383mc and is the recommended chassis-controller combination. Table 1 lists the bandwidth expected over the NI PXIe-8383mc link based on the maximum slot bandwidth. Note that the PCI Express link type does not affect the latency performance of the PXImc link.

 

National Instruments currently supports the use of the NI PXIe-8383mc in all NI PXI Express chassis that use either the NI PXIe-8135 embedded controller or the NI PXIe-PCIe8381 remote controller as the system controller.

 

Maximum Slot Bandwidth                                                     | Expected Bandwidth Over NI PXIe-8383mc
x8 PCI Express 2.0 (NI PXIe-1085 chassis)                                  | 2.7 GB/s
x4 PCI Express 2.0 (NI PXIe-1082 chassis)                                  | 1.35 GB/s
x4 PCI Express 1.0 (NI PXIe-1075, PXIe-1065, PXIe-1062, PXIe-1071 chassis) | 675 MB/s
x1 PCI Express 1.0 (NI PXIe-1078 chassis)                                  | 168 MB/s

Table 1. Expected Bandwidth Over the NI PXIe-8383mc Link Based on Maximum Slot Bandwidth
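As a rough cross-check of Table 1, the expected figures are close to the raw PCI Express link rate multiplied by an efficiency factor of about 67.5 percent. The sketch below reproduces that estimate; the per-lane rates are the standard PCI Express figures (250 MB/s per lane for Gen 1, 500 MB/s for Gen 2 after 8b/10b encoding), while the efficiency factor is inferred from the table rather than specified by NI.

    /* Rough reconstruction of Table 1: expected PXImc throughput is roughly
     * 67.5 percent of the raw PCI Express link rate. The 0.675 factor is an
     * assumption inferred from the published numbers. */
    #include <stdio.h>

    int main(void)
    {
        struct { const char *link; int lanes; double per_lane_mbs; } slots[] = {
            { "x8 PCI Express 2.0", 8, 500.0 },
            { "x4 PCI Express 2.0", 4, 500.0 },
            { "x4 PCI Express 1.0", 4, 250.0 },
            { "x1 PCI Express 1.0", 1, 250.0 },
        };
        const double efficiency = 0.675;  /* assumed protocol and DMA overhead */

        for (int i = 0; i < 4; i++) {
            double raw = slots[i].lanes * slots[i].per_lane_mbs;
            printf("%-20s raw %4.0f MB/s -> expected ~%4.0f MB/s\n",
                   slots[i].link, raw, raw * efficiency);
        }
        return 0;
    }

The computed values match the table to within rounding; the x1 Gen 1 estimate comes out at about 169 MB/s versus the 168 MB/s listed.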

 


6. Additional Resources

To learn more about the NI PXIe-8383mc PXImc Adapter Module or the PXImc specification, refer to the following resources.

 
