A few technologies address the problem of improving latency. The two main tools for this goal are the Real-Time operating system and the FPGA.
Real Time Operating Systems
Any operating system will experience jitter, or variation in latency. Normal operating systems, such as Windows, are designed to run all programs as fairly as possible. To promote this fairness and time sharing, a given program often experiences jitter as the operating system cycles through running other processes. Jitter can take the form of pauses that last a handful of milliseconds, or grow as large as tens of seconds; in certain circumstances, it can be practically unbounded. This jitter limits the usefulness of general-purpose operating systems when it comes to executing a task with a high degree of regularity: they are designed for fairness, not repeatability, and a given task will show wide variation in execution time.
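This jitter is easy to observe. The sketch below (a minimal Python illustration, with a hypothetical 1 ms target period and a hypothetical `jitter_stats` helper) times a sleep-based loop and reports the worst-case deviation from the mean period; on a general-purpose OS the worst case is typically many times the mean.

```python
import time
import statistics

def jitter_stats(periods):
    """Summarize timing jitter: mean period and worst-case deviation from it."""
    mean = statistics.mean(periods)
    worst = max(abs(p - mean) for p in periods)
    return mean, worst

# Run a software-timed loop with a hypothetical 1 ms target period.
target = 0.001
periods = []
last = time.perf_counter()
for _ in range(200):
    time.sleep(target)
    now = time.perf_counter()
    periods.append(now - last)
    last = now

mean, worst = jitter_stats(periods)
print(f"mean period: {mean*1e3:.3f} ms, worst deviation: {worst*1e3:.3f} ms")
```

Running this on a desktop OS usually shows a worst-case deviation far larger than the target period itself, which is exactly the behavior an RTOS is built to bound.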
A Real-Time operating system (RTOS) is designed not for fairness but for repeatability, by limiting jitter. Examples include Phar Lap, VxWorks, and RTLinux. The architecture of the operating system is built so that a given operation executes in a consistent amount of time, making the time it takes to execute a task very predictable, with very little jitter.
Because of the extra effort to reduce jitter, some operations on an RTOS take longer on average than they would on a general-purpose operating system. However, the practical upper bound of the execution time can be much lower.
Figure 6: Simulated loop times for a control loop running on two operating systems. TOP: A general purpose OS has loop times that vary widely, with no practical upper bound (green line). BOTTOM: A Real-Time OS has a higher average (red line), but less jitter. This lets it be used dependably at higher rates.
General-purpose operating systems are unfit for many applications with timing requirements because of their high jitter. With a Real-Time operating system, however, a PC can be deployed where it otherwise could not be, since its execution time is more regular: even though the average execution time is longer, the statistical spread is narrower and thus more predictable. The net effect is to extend the PC architecture into applications where it could not otherwise be deployed.
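The trade-off shown in Figure 6 can be illustrated with a simple simulation. The loop-time models below are purely hypothetical: the general-purpose OS draws from a distribution with a low mean but rare, large preemption spikes, while the RTOS draws from a tighter distribution with a higher mean.

```python
import random

random.seed(0)  # fixed seed for a reproducible simulation

# Hypothetical loop-time models (values in microseconds):
def gpos_loop_time():
    base = random.gauss(50, 5)          # fast on average...
    if random.random() < 0.005:         # ...but rare OS preemptions
        base += random.uniform(500, 5000)
    return base

def rtos_loop_time():
    return random.gauss(80, 2)          # slower on average, tightly bounded

gpos = [gpos_loop_time() for _ in range(10_000)]
rtos = [rtos_loop_time() for _ in range(10_000)]

print(f"GPOS: mean {sum(gpos)/len(gpos):.0f} us, worst {max(gpos):.0f} us")
print(f"RTOS: mean {sum(rtos)/len(rtos):.0f} us, worst {max(rtos):.0f} us")
```

Under this model the RTOS has the higher mean but the far lower worst case, which is what matters when a deadline must be met on every single loop iteration.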
While an RTOS increases the PC’s usefulness by limiting the variation in latency, it does not lower the average latency. In situations where drastically lower latency is desired, an RTOS is insufficient.
FPGAs and Latency
There is another set of tools that have been developed to reduce latency. FPGAs, or Field-Programmable Gate Arrays, are made of a hardware “fabric” of elementary logic nodes. These nodes can be linked together by software to define their behavior, and just as easily reconfigured. The FPGA brings the extremely precise timing and reliability that hardware offers, together with the ease of software development.
Putting our algorithm on the FPGA shortens the trip from signal to algorithm. On the FPGA the two major delays are the ADC sampling and the computation of the algorithm. The algorithm itself also runs much more quickly, because it lies in the hardware fabric of the FPGA instead of in a software execution system. The FPGA has the flexibility of software, since no physical changes to the device are needed to change its behavior, but the speed of hardware, executing at clock rates that currently reach the hundreds of megahertz.
Figure 7: FPGA architectures and the shorter trip to the algorithm (showing the optional use of buffers)
Buffers in FPGA applications are optional, and are often included only for computations that require them, such as FFTs. FPGA code without buffers is the architecture most often chosen for the lowest-latency applications, where the result is needed as quickly as possible after the input arrives and is delivered for output as soon as it is ready, as in high-speed control systems.
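The latency cost of buffering can be sketched with a simple model (the numbers below are hypothetical): a block cannot be processed until all of its samples have arrived, so the latency to the first result grows with block size.

```python
# Hypothetical costs, in microseconds.
SAMPLE_PERIOD_US = 10   # one sample arrives every 10 us
PROCESS_US = 2          # processing cost attributed to each sample

def first_result_latency(block_size):
    """Latency from the first sample arriving to the first result leaving.

    A block can only be processed once all of its samples have arrived,
    so buffering adds the time needed to fill the block.
    """
    fill_time = block_size * SAMPLE_PERIOD_US
    return fill_time + block_size * PROCESS_US

print(first_result_latency(1))      # unbuffered: result right after one sample
print(first_result_latency(1024))   # buffered (e.g., an FFT-sized block)
```

With these numbers the unbuffered path answers in 12 us, while a 1024-sample block waits over 12 ms for its first result, even though both process samples at the same rate. This is why unbuffered designs dominate in control loops.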
FPGAs and Throughput
Although not mentioned in the previous section, FPGAs are also a useful tool for achieving high-throughput processing. High-throughput processing is possible on an FPGA because the algorithms are executed in hardware. Often, preprocessing steps such as filtering or FFTs on signals are run on an FPGA, and the data is then buffered and passed to a host PC.
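The reason hardware execution yields high throughput can be sketched as a pipeline: once the pipeline fills, every stage works concurrently, and one result emerges per clock cycle regardless of how many stages the computation has. The three stages below (scale, offset, clamp) are hypothetical, modeled in Python with one loop iteration per clock.

```python
def pipeline(samples):
    """Model a 3-stage hardware pipeline: scale, then offset, then clamp.

    Each loop iteration is one clock cycle; s1 and s2 are the pipeline
    registers between stages, empty (None) at reset.
    """
    s1 = s2 = None
    results = []
    for x in samples + [None, None]:              # extra clocks to flush
        if s2 is not None:
            results.append(min(s2, 100))          # stage 3: clamp to 100
        s2 = s1 + 5 if s1 is not None else None   # stage 2: add offset
        s1 = x * 2 if x is not None else None     # stage 1: scale by 2
    return results

print(pipeline([1, 2, 3]))  # → [7, 9, 11]
```

Each result takes three clocks of latency to appear, but after the pipeline fills, a new result is produced every clock; that per-clock result rate is the throughput advantage of the hardware fabric.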
What if I want both high throughput and low latency?
It is worth noting that most changes that improve throughput negatively affect latency, and changes that reduce latency can likewise hurt throughput. A high degree of both can be achieved concurrently, but usually only with dedicated hardware or FPGAs. Fortunately, most applications need one or the other, not a high degree of both.
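The tension between the two can be illustrated with a simple transfer model (all costs below are hypothetical): batching amortizes the fixed per-transfer overhead, which raises throughput, but the first result is delayed until the whole batch completes.

```python
# Hypothetical model: each transfer has a fixed overhead plus a per-sample cost.
OVERHEAD_US = 100     # fixed cost per transfer (driver call, DMA setup, ...)
PER_SAMPLE_US = 0.5   # cost to move one sample

def latency_us(batch):
    """Time until the first batch of results completes."""
    return OVERHEAD_US + batch * PER_SAMPLE_US

def throughput_sps(batch):
    """Samples per second in steady state, one batch after another."""
    return batch / (latency_us(batch) * 1e-6)

for batch in (1, 100, 10_000):
    print(batch, latency_us(batch), round(throughput_sps(batch)))
```

In this model, growing the batch from 1 to 10,000 samples multiplies throughput by roughly 200x while multiplying latency by roughly 50x, which is the trade-off described above in miniature.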