The Automated Test Outlook is National Instruments' comprehensive view of key technologies and methodologies impacting the test and measurement industry. Heterogeneous computing is one of the trends identified in the 2011 outlook.
3. Heterogeneous Computing
Automated test systems have always comprised multiple types of instruments, each best suited to different measurement tasks. An oscilloscope, for example, can make a single DC voltage-level measurement, but a DMM provides better accuracy and resolution. It is this mix of instrumentation that enables tests to be conducted in the most efficient and cost-effective manner possible. The same trend is now affecting how engineers implement computation in test systems. Applications such as RF spectrum monitoring require inline, custom signal processing and analysis that is not possible on a standard PC CPU alone. Furthermore, test systems are generating an unprecedented amount of data that can no longer be analyzed by a single processing unit. To address these needs, engineers are turning to heterogeneous computing architectures to distribute processing and analysis.
A heterogeneous computing architecture distributes data, processing, and program execution among different computing nodes, each best suited to specific computational tasks. For example, an RF test system built on this approach might use a CPU to control program execution, an FPGA to perform inline demodulation, and a GPU to perform pattern matching before the results are stored on a remote server. Test engineers need to determine how best to use these computing nodes and architect their systems to optimize processing and data transfer.
Figure: A next-generation test system using a heterogeneous computing architecture to distribute processing
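To make this division of labor concrete, the following minimal C++ sketch mimics such a pipeline: acquisition and inline processing run on one path while analysis and result upload for the previous block proceed asynchronously. The stage functions (acquire_block, fpga_demodulate, gpu_pattern_match, upload_result) are hypothetical placeholders; in a real system each would call the driver or library API of the corresponding hardware.

```cpp
// Minimal sketch of a heterogeneous test pipeline (hypothetical stage functions).
// Each stage stands in for work that would run on a different computing node.
#include <cstdio>
#include <future>
#include <vector>

using Block = std::vector<float>;

// Hypothetical placeholders: a real system would call instrument, FPGA, or GPU APIs here.
Block acquire_block(int n)              { return Block(4096, static_cast<float>(n)); }   // digitizer
Block fpga_demodulate(Block iq)         { for (auto& s : iq) s *= 0.5f; return iq; }      // inline DSP
bool  gpu_pattern_match(const Block& b) { return !b.empty() && b.front() > 1.0f; }        // GPU analysis
void  upload_result(int n, bool hit)    { std::printf("block %d: %s\n", n, hit ? "match" : "no match"); }

int main() {
    std::future<void> pending;                               // analysis of the previous block
    for (int n = 0; n < 8; ++n) {
        Block demod = fpga_demodulate(acquire_block(n));     // acquisition + inline processing
        if (pending.valid()) pending.get();                  // finish the previous block's analysis
        // Analyze this block asynchronously while the next block is acquired,
        // mimicking how a GPU or server works in parallel with the CPU/FPGA path.
        pending = std::async(std::launch::async, [n, b = std::move(demod)] {
            upload_result(n, gpu_pattern_match(b));
        });
    }
    if (pending.valid()) pending.get();
}
```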
The following are the most common computing nodes used in test systems:
The central processing unit (CPU) is a general-purpose processor with a robust instruction set and cache as well as direct access to memory. Largely sequential in its execution, the CPU is especially suited to program execution and can be adapted to almost any processing task. Advancements in the last decade have led to multiple computing cores on a single chip, with most processors now offering two to four cores and many more planned for the future. These multicore systems enable operations to occur in parallel, but the programmer must implement a multithreaded application with an eye toward parallelization to take full advantage of them (a short multithreading sketch follows these node descriptions).
The graphics processing unit (GPU) is a specialized processor originally developed for rendering 2D and 3D computer graphics. The GPU has seen tremendous advances driven by the demand for more realistic graphics in computer video games. It achieves its performance through a highly parallel architecture of hundreds to thousands of cores specifically suited to vector and shader transforms. Engineers are now adapting these specialized cores to general-purpose processing, and performance gains have already been demonstrated in areas such as image processing and spectral monitoring.
Field-programmable gate arrays (FPGAs), unlike CPUs and GPUs, do not have defined instruction sets or processing capabilities. Instead, they are reprogrammable silicon: a fabric of logic gates that users can configure into custom processors matched to their exact needs. They also execute with hardware timing, providing a level of determinism and reliability that makes them especially suited to inline signal processing and system control. This increased performance, however, comes with the trade-off of greater programming complexity and the inability to change processing functionality in the middle of program execution.
Cloud computing is not a specific type of processor but a collection of computing resources accessible via the Internet. Its power lies in freeing users from having to purchase, maintain, and upgrade their own computing resources; instead, they rent just the processing time and storage space their applications require. Cloud computing use has grown rapidly, with HP predicting that 76 percent of businesses will pursue some form of it within the next two years. However, while it provides access to some of the most powerful computers in the world, cloud computing has the drawback of very high latency: data must be transferred over the Internet, making it difficult or impossible to use in test systems that require deterministic processing. Cloud computing remains well suited, however, to offline analysis and data storage.
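As noted in the CPU description above, taking advantage of multiple cores means structuring the analysis for parallel execution. The sketch below splits a simple RMS computation over a long waveform across the available cores with std::async; the waveform and partitioning are illustrative rather than tied to any particular instrument API.

```cpp
// Minimal sketch: splitting an analysis (RMS of a long waveform) across CPU cores.
#include <cmath>
#include <cstdio>
#include <future>
#include <thread>
#include <vector>

int main() {
    std::vector<double> waveform(10'000'000, 0.5);           // stand-in for acquired samples

    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = waveform.size() / workers;

    std::vector<std::future<double>> partials;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end   = (w + 1 == workers) ? waveform.size() : begin + chunk;
        // Each task computes a partial sum of squares, typically on its own core.
        partials.push_back(std::async(std::launch::async, [&waveform, begin, end] {
            double acc = 0.0;
            for (std::size_t i = begin; i < end; ++i) acc += waveform[i] * waveform[i];
            return acc;
        }));
    }

    double sumSquares = 0.0;
    for (auto& p : partials) sumSquares += p.get();           // join and combine partial results
    std::printf("RMS = %f\n", std::sqrt(sumSquares / waveform.size()));
}
```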
Heterogeneous computing provides new and powerful computing architectures, but it also introduces additional complexity in test system development, the most prevalent being the need to learn a different programming paradigm for each type of computing node. To fully use a GPU, for instance, programmers must modify their algorithms to massively parallelize their data and translate the algorithm math into graphics-rendering functions. FPGA use often requires knowledge of hardware description languages such as VHDL to configure specific processing capabilities.
"Next-generation test systems will increasingly use FPGAs, along with other processing elements, to efficiently distribute processing and analysis. System design software will be crucial in the abstraction and management of these systems."
- Vin Ratford, Senior Vice President, Xilinx, Inc.
Engineers across the industry are working to abstract away the complexities of specific computing nodes. For GPUs, industry groups have developed the Open Computing Language (OpenCL), a programming interface designed to support not only GPUs from multiple vendors but also other parallel processors such as multicore CPUs. Work is also under way to simplify the configuration of FPGAs. "High-level synthesis" is an emerging process, adopted by some vendors, that uses high-level, algorithm-based languages for FPGA programming. Tools such as the NI LabVIEW FPGA Module abstract these complexities even further by converting graphical block diagrams directly into digital logic circuitry.
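As a rough illustration of the OpenCL programming model, the sketch below expresses a trivial gain operation as a kernel executed once per sample on whatever OpenCL device is present (GPU or multicore CPU). It uses the OpenCL 1.1 C API; error handling is omitted for brevity, and the kernel and data sizes are illustrative.

```cpp
// Minimal OpenCL sketch: scaling a waveform on the default OpenCL device.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSource = R"(
__kernel void scale(__global float* data, float gain) {
    size_t i = get_global_id(0);          // one work-item per sample
    data[i] = data[i] * gain;
})";

int main() {
    std::vector<float> samples(1 << 20, 1.0f);               // stand-in for acquired data
    size_t bytes = samples.size() * sizeof(float);

    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR, bytes,
                                samples.data(), nullptr);

    cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(program, "scale", nullptr);

    float gain = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &gain);

    size_t globalSize = samples.size();                       // one work-item per element
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &globalSize, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, bytes, samples.data(), 0, nullptr, nullptr);

    std::printf("first sample after scaling: %f\n", samples[0]);

    clReleaseMemObject(buf); clReleaseKernel(kernel); clReleaseProgram(program);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
}
```

On systems with an OpenCL runtime installed, this builds against the vendor's OpenCL headers and library (for example, -lOpenCL on Linux); the same host code runs whether the device happens to be a GPU or a multicore CPU.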
Computing node programming is not the only challenge in a heterogeneous computing system. Having more computing resources is not valuable if data cannot be transferred and acted upon rapidly. PCI Express has emerged as the premier data bus for these peer-to-peer connections in test systems because of its high throughput, low latency, and point-to-point topology. PCI Express is also the backbone of PXI, and the PXI Systems Alliance recently released the PXI MultiComputing specification to standardize peer-to-peer PCI Express communication between systems from multiple vendors.
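As a back-of-the-envelope illustration of why the data bus matters, the snippet below compares an example acquisition stream against the nominal per-lane bandwidth of PCI Express Gen 1 and Gen 2 links (250 MB/s and 500 MB/s per lane per direction after 8b/10b encoding); the acquisition parameters are illustrative.

```cpp
// Back-of-the-envelope check: can an acquisition stream be sustained over a PCI Express link?
#include <cstdio>

int main() {
    // Nominal usable bandwidth per lane, per direction (MB/s), after 8b/10b encoding.
    const double gen1PerLane = 250.0;        // PCIe 1.x: 2.5 GT/s
    const double gen2PerLane = 500.0;        // PCIe 2.0: 5.0 GT/s

    const int    lanes          = 4;         // e.g., an x4 link
    const double channels       = 2;         // digitizer channels
    const double sampleRateMSps = 100.0;     // megasamples per second per channel
    const double bytesPerSample = 2.0;       // 16-bit samples

    double neededMBps = channels * sampleRateMSps * bytesPerSample;   // 400 MB/s here
    double gen1Budget = lanes * gen1PerLane;                          // 1000 MB/s
    double gen2Budget = lanes * gen2PerLane;                          // 2000 MB/s

    std::printf("stream needs %.0f MB/s; x%d Gen1 offers %.0f MB/s, Gen2 offers %.0f MB/s\n",
                neededMBps, lanes, gen1Budget, gen2Budget);
}
```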
Heterogeneous computing will enable many new possibilities in test system development. By taking advantage of the latest advancements in programming abstraction and data transfer, engineers can truly benefit from using multiple computing nodes.