Multicore versus Multiprocessor
The main difference between multicore systems and multiprocessor systems, which have been available for many years, is that a multicore system contains two or more cores on a single physical processor, while a multiprocessor system contains two or more physical processors. Multicore systems also share computing resources that are often duplicated in multiprocessor systems, such as the L2 cache and front-side bus. Multicore systems provide similar performance to multiprocessor systems, but often at a significantly lower cost, because a multicore processor costs less than multiple equivalent individual processors and does not require a motherboard with support for multiple processor sockets.
Multicore systems, like multiprocessor systems, can simultaneously execute multiple computing tasks. This is advantageous in multitasking OSs, such as Windows XP, in which you simultaneously run multiple applications. Multitasking refers to the ability of the OS to quickly switch between tasks, giving the appearance of simultaneous execution of those tasks. When running on a multicore system, multitasking OSs can truly execute multiple tasks simultaneously, as opposed to only appearing to do so. For example, on a dual-core system, two applications – such as National Instruments LabVIEW and Microsoft Excel – each can access a separate processor core at the same time, thus improving overall performance for applications such as data logging.
Figure 1. Dual-core systems enable multitasking operating systems, such as Windows XP, to truly execute two tasks simultaneously.
Multithreading extends the idea of multitasking into applications so you can subdivide specific operations within a single application into individual threads, each of which can run in parallel. Then, the OS can divide processing time not only among different applications, but also among each thread within an application. In a multithreaded NI LabVIEW program, an example application might be divided into three threads – a user interface thread, a data acquisition thread, and an analysis thread. You can assign a priority to each of these, and each operates independently. Thus, in multithreaded applications, multiple tasks can progress in parallel along with other applications that are running on the system.
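The division described above can be sketched in ordinary text-based code as well. The following Python sketch splits a simulated application into an acquisition thread and an analysis thread that communicate through queues; the workload and the function names (`acquisition`, `analysis`) are illustrative stand-ins, not a LabVIEW or NI API.

```python
# Sketch: one application divided into independent threads that
# communicate through thread-safe queues (simulated workload).
import queue
import threading

samples = queue.Queue()
results = queue.Queue()

def acquisition(n):
    # Data acquisition thread: produce n simulated samples.
    for i in range(n):
        samples.put(i * 0.5)
    samples.put(None)  # sentinel: acquisition finished

def analysis():
    # Analysis thread: consume samples as they arrive.
    while (s := samples.get()) is not None:
        results.put(s * s)

acq = threading.Thread(target=acquisition, args=(5,))
ana = threading.Thread(target=analysis)
acq.start(); ana.start()
acq.join(); ana.join()

squared = [results.get() for _ in range(results.qsize())]
print(squared)  # → [0.0, 0.25, 1.0, 2.25, 4.0]
```

Because the two threads run independently, the operating system can schedule them on separate cores, and the analysis work overlaps with acquisition instead of waiting for it to finish.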
Applications that take advantage of multithreading provide numerous benefits, including more efficient CPU use, better system reliability, and improved performance on multicore systems.
More Efficient CPU Use
In many applications, you make synchronous calls to resources, such as instruments. Such calls often take a long time to complete. In a single-threaded application, a synchronous call effectively blocks, or prevents, any other task within the application from executing until the operation completes. Multithreading prevents this blocking. While the synchronous call runs on one thread, other parts of the program that do not depend on this call run on different threads. Execution of the application progresses instead of stalling until the synchronous call completes. In this way, a multithreaded application maximizes the efficiency of the CPU, because the CPU does not idle as long as any thread of the application is ready to run.
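A minimal sketch of this pattern: the slow synchronous call is moved onto a worker thread, so the main thread continues with independent work. Here `read_instrument` is a hypothetical stand-in for a real blocking instrument call, not an actual driver API.

```python
# Sketch: a slow synchronous call runs on a worker thread while
# the main thread keeps doing independent work.
import threading
import time

def read_instrument(out):
    # Stand-in for a slow synchronous call (e.g. waiting on an instrument).
    time.sleep(0.2)
    out.append(42.0)  # the simulated "measurement"

reading = []
worker = threading.Thread(target=read_instrument, args=(reading,))
worker.start()                 # the slow call now runs on its own thread

other_work = sum(range(1000))  # independent work proceeds immediately
worker.join()                  # wait for the measurement only when needed
print(other_work, reading)     # → 499500 [42.0]
```

In a single-threaded version, `other_work` could not begin until the 0.2-second call returned; here it completes almost immediately.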
Better System Reliability
By separating an application into different execution threads, you can prevent secondary operations from adversely affecting those that are the most important. The most common example is the effect that the user interface can have on more time-critical operations. Many times, screen updates or responses to user events can decrease the execution speed of an application. By giving the user interface thread a lower priority than other more time-critical operations, you can ensure that the user interface operations do not prevent the CPU from executing more important operations, such as data acquisition or process control.
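One simple way to express this ordering in code is a priority queue: time-critical tasks carry a higher priority than user-interface tasks, so a worker always services them first. This sketch uses Python's standard `queue.PriorityQueue`; the task names are illustrative.

```python
# Sketch: time-critical work (priority 0) is always serviced before
# user-interface work (priority 1); a sequence number breaks ties
# in submission order.
import queue

tasks = queue.PriorityQueue()
for seq, (prio, name) in enumerate([
    (1, "update screen"),
    (0, "acquire data"),
    (1, "handle user click"),
    (0, "process control step"),
]):
    tasks.put((prio, seq, name))

order = []
while not tasks.empty():
    _, _, name = tasks.get()
    order.append(name)
print(order)
# → ['acquire data', 'process control step', 'update screen', 'handle user click']
```

Note that real OS thread priorities (as LabVIEW exposes them) are set through the scheduler rather than a queue; the queue merely illustrates the policy of deferring UI work behind time-critical work.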
Improved Performance on Multicore Systems
One of the most compelling benefits of multithreading is that you can harness the full computing power of multicore systems. In a multithreaded application in which several threads are ready to run simultaneously, each core can run a different thread, and the application attains true parallel task execution. This not only enhances the previously discussed benefits of more efficient CPU use and better system reliability, but also directly increases performance.
The Graphical Programming Advantage
By definition, virtual instrumentation helps you take advantage of each innovation in the PC industry. Multicore processing is no different. When developing software that fully takes advantage of the computing power of multicore processors, you need a development tool that inherently provides parallelism. Because of their sequential nature, text-based programming languages, such as C and C++, require you to call functions to programmatically spawn and manage threads. It is also often difficult to visualize how various sections of code run in parallel because of the sequential, line-by-line syntax of text-based languages.
In contrast, graphical programming environments such as LabVIEW can easily represent parallel processes because data flow is inherently parallel. It is considerably easier to visualize the parallel execution of code in a graphical environment, in which two parallel execution paths of graphical code reside side by side. LabVIEW code is also inherently multithreaded. LabVIEW recognizes opportunities for multithreading in programs, and the execution system handles multithreading implementation and communications for you. For example, two independent loops running without any dependencies automatically execute in separate threads. When you execute LabVIEW code on a multicore system, the multiple threads run on the multiple processor cores without any intervention on your part.
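The "two independent loops" case that LabVIEW parallelizes automatically looks like this when written out by hand in a text-based language: each loop must be explicitly placed on its own thread. The loop bodies below are illustrative; the point is that the threads share no data and therefore can run concurrently without coordination.

```python
# Sketch: the text-based equivalent of two independent LabVIEW loops,
# each explicitly assigned to its own thread.
import threading

def loop_a(out):
    for i in range(3):
        out.append(("A", i))

def loop_b(out):
    for i in range(3):
        out.append(("B", i))

a_results, b_results = [], []
t1 = threading.Thread(target=loop_a, args=(a_results,))
t2 = threading.Thread(target=loop_b, args=(b_results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(a_results, b_results)
# → [('A', 0), ('A', 1), ('A', 2)] [('B', 0), ('B', 1), ('B', 2)]
```

In LabVIEW, by contrast, simply drawing the two loops side by side is enough; the execution system creates and schedules the threads for you.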
Figure 2. The parallel nature of graphical programming in LabVIEW automatically implements multithreading, and the threads run on multicore processor cores without any intervention on your part.
PXI – A Multicore Platform
One platform that readily embraces multicore processing is PXI – an open, multivendor, PC-based platform for test, measurement, and control. You can remotely control PXI systems from any dual-core desktop or laptop PC, and National Instruments has released the industry’s first embedded and rack-mount dual-core PXI controllers. The National Instruments PXI-8105 embedded controller employs the 2.0 GHz Intel Core Duo processor T2500, and the National Instruments PXI-8351 rack-mount controller includes the 3.0 GHz Intel Pentium D processor 830. Benchmarks in LabVIEW 8 demonstrate a performance improvement for single-threaded applications of up to 25 percent between the NI PXI-8105 and the single-core National Instruments PXI-8196 (2.0 GHz Intel Pentium M processor 760), which have equivalent processor clock rates. This improvement is a result of numerous enhancements in the processor and chipset between these two generations of Intel architectures. The performance gain from the PXI-8105’s dual-core processor shows in the multithreaded application benchmarks, which demonstrate an improvement of up to 100 percent over the NI PXI-8196 embedded controller.
Figure 3. Dual-core PXI systems demonstrate up to a 100 percent performance improvement for multithreaded applications.
Choosing LabVIEW for Multicore Systems
Multicore processing offers many advantages for both multitasking environments and multithreaded applications. Because graphical dataflow programming is inherently parallel and LabVIEW code is inherently multithreaded, LabVIEW has provided and continues to provide considerable advantages in developing applications that take advantage of multicore systems, such as PXI.
PXI Product Manager
This article originally appeared in the July 11, 2006 issue of NI News.