Prior to the NI LabVIEW 8.5 software release, native LabVIEW FPGA intellectual property (IP) functions were limited to integer implementations. However, the introduction of the fixed-point data type to the LabVIEW platform in LabVIEW 8.5 makes it easier to develop IP blocks with decimal accuracy. This article provides an overview of fixed-point processing and offers additional resources that describe how to use the fixed-point data type in IP for LabVIEW FPGA.

Fixed-Point Part 2 - Working With Fixed-Point in LabVIEW >>

### 1. What Is “Fixed Point?”

Fixed point is a format for representing numbers on digital processing devices. It is a data type used by a programming language or hardware description language (HDL) to determine how to interpret the bits in a memory location. As an example, examine the contents of a 32-bit memory location: depending on the data type specified for that location, the same bit pattern represents different information to the application using it.
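As an illustrative sketch (in Python rather than LabVIEW, with an arbitrarily chosen bit pattern), the same 32 bits can be read as an unsigned integer, a signed integer, an IEEE 754 single-precision float, or a fixed-point number:

```python
import struct

# One 32-bit memory location, interpreted under four different data types.
bits = 0x41C80000  # arbitrary example pattern

# As an unsigned 32-bit integer:
as_u32 = bits                                              # 1103626240

# As a signed 32-bit integer (two's complement):
as_i32 = struct.unpack("<i", struct.pack("<I", bits))[0]   # 1103626240

# As an IEEE 754 single-precision float:
as_f32 = struct.unpack("<f", struct.pack("<I", bits))[0]   # 25.0

# As an unsigned fixed-point number with a 16-bit integer part
# (word length 32, integer length 16): value = raw bits / 2**16
as_fxp = bits / 2**16                                      # 16840.0

print(as_u32, as_i32, as_f32, as_fxp)
```

The data type, not the memory contents, decides what number the application sees.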

### 2. Fixed-Point Representation

The first important concept to understand is that floating point and fixed point are two distinct representations of numerical values, even though both can express numbers with a fractional part. Look specifically at how floating-point numbers and fixed-point numbers are interpreted. A floating-point number has three parts: a mantissa, an exponent, and a sign bit. The mantissa holds the significant digits of the number, scaled by the power specified in the exponent, as in scientific notation.

A fixed-point number has two parts: an integer (which may contain a sign bit) and a fraction. The integer and fractional parts represent the portion of the number before and after the decimal point, respectively. In fixed-point representations, this point between the integer and fraction is called the radix point.

When configuring a fixed-point data type, you specify a word length (total number of bits for the fixed-point representation) and the integer length (number of bits in the integer portion of the fixed-point representation). The leftover bits are for the fraction, such that the integer plus the fraction equals the word length. The most common fixed-point formats use 16- and 32-bit word-length representations. This is a result of the typical register sizes for microprocessing platforms. However, for custom digital implementations such as field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs), it is useful and common to use fixed-point data types of other sizes. Because you can use the LabVIEW programming language to program both microprocessor and FPGA targets, you can set the word length for a fixed-point number to anything from 1 to 64 bits.
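The relationship between word length, integer length, and the resulting range and resolution can be sketched as a small helper function (a Python illustration; the function name is ours, not a LabVIEW API):

```python
def fxp_range(word_length, integer_length, signed=True):
    """Return (minimum, maximum, resolution) for a fixed-point format.

    word_length    -- total number of bits
    integer_length -- bits before the radix point (includes the sign bit
                      when the format is signed)
    """
    frac_bits = word_length - integer_length
    delta = 2.0 ** -frac_bits        # weight of the least significant bit
    if signed:
        minimum = -(2.0 ** (integer_length - 1))
        maximum = 2.0 ** (integer_length - 1) - delta
    else:
        minimum = 0.0
        maximum = 2.0 ** integer_length - delta
    return minimum, maximum, delta

# A signed 16-bit word with an 8-bit integer part covers
# -128 .. 127.99609375 in steps of 1/256.
print(fxp_range(16, 8))          # (-128.0, 127.99609375, 0.00390625)
print(fxp_range(8, 4, False))    # (0.0, 15.9375, 0.0625)
```

Trading integer bits for fractional bits widens the range at the cost of resolution, and vice versa.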

Selecting the word length and integer length for a fixed-point data type fixes the location of the radix point and, as a result, the weighting of every bit in the data type.

With the radix point fixed, you can calculate the decimal representation of the fixed-point number by converting the integer and fractional portions of the number from binary to decimal representation, as illustrated below.
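The conversion amounts to scaling the raw stored integer by the weight of the least significant bit. A minimal Python sketch (the function name is illustrative, not a LabVIEW API):

```python
def fxp_to_decimal(bits, word_length, integer_length, signed=True):
    """Interpret a raw bit pattern as a fixed-point number.

    The radix point sits integer_length bits from the left, so the stored
    integer is scaled by 2**-(word_length - integer_length).
    """
    if signed and bits & (1 << (word_length - 1)):
        bits -= 1 << word_length     # two's-complement sign extension
    return bits * 2.0 ** -(word_length - integer_length)

# 8-bit word, 4-bit integer part: 0b01011010 is read as 0101.1010,
# i.e. integer 0101 = 5 plus fraction .1010 = 0.5 + 0.125 = 0.625.
print(fxp_to_decimal(0b01011010, 8, 4))   # 5.625
```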

This brings to light a key difference between the fixed-point and floating-point data types. Using fixed point as opposed to floating point limits the precision and range of the numbers that the data type can represent. Imagine all the floating-point numbers plotted on the number line below. Because of the difference in range and precision, fixed-point numbers using the same number of bits form a small subset of those possibilities, distributed across the line (grey hash marks).

When converting a floating-point number or algorithm to a fixed-point representation, it is possible (and likely) that the data (red dots above) will not match one of the fixed-point grid locations (grey hashes). There are two possibilities when mapping floating-point data to fixed-point representations. When the floating-point data is out of range – below the minimum or above the maximum – an overflow/underflow condition occurs. When the data is in the range but does not exactly match one of the valid fixed-point values, you must round the floating-point number.

There are two modes for handling overflow/underflow conditions: saturation and wrap. In saturation mode, the floating-point data is coerced directly to the minimum/maximum fixed-point value, regardless of how far it has exceeded the range of the fixed-point data type. Alternatively, in wrap mode the value wraps around to the minimum when it exceeds the maximum, and around to the maximum when it falls below the minimum.
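The two overflow modes can be sketched in Python as follows (an illustration with our own function name, using truncation toward zero for the in-range case for simplicity):

```python
def to_fixed(value, word_length, integer_length, overflow="saturate"):
    """Map a float onto a signed fixed-point grid, handling overflow.

    overflow -- "saturate": coerce to the nearest representable extreme
                "wrap":     keep only the low word_length bits
                            (two's-complement wraparound)
    """
    frac_bits = word_length - integer_length
    raw = int(value * 2 ** frac_bits)      # truncate toward zero (simplification)
    lo, hi = -(1 << (word_length - 1)), (1 << (word_length - 1)) - 1
    if overflow == "saturate":
        raw = max(lo, min(hi, raw))
    else:  # wrap
        raw &= (1 << word_length) - 1
        if raw > hi:
            raw -= 1 << word_length
    return raw * 2.0 ** -frac_bits

# Signed 8-bit word, 4-bit integer part: the range is -8 .. 7.9375.
print(to_fixed(9.5, 8, 4, "saturate"))   # 7.9375  (coerced to the maximum)
print(to_fixed(9.5, 8, 4, "wrap"))       # -6.5    (wraps past the maximum)
```

Note how wrap mode turns a modest overflow into a value of the opposite sign, which is why saturation is usually the safer default for measurement data.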

Additionally, there are three rounding modes you can use when the number you want to convert does not align with one of the valid fixed-point values: truncate, round half up, and round half even. Truncate mode always selects the left (lower) neighboring grid location, essentially chopping off the least significant bits. Round half up and round half even both select the closer neighboring grid location as the output. But when the value is exactly halfway between two fixed-point locations, round half up picks the right one (rounding up), while round half even picks the one whose least significant bit is zero (rounding up or down accordingly). Use round half even to produce a better statistical distribution, because always rounding up when the data is exactly in the middle introduces a slight upward bias into the converted data.

**Truncate**

**Round Half Up**

**Round Half Even**
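The three rounding modes can be sketched in Python (an illustration with our own function name; note that Python's built-in `round()` already implements half-to-even):

```python
import math

def round_fxp(value, frac_bits, mode):
    """Round a float onto a grid with spacing 2**-frac_bits.

    mode -- "truncate":  always take the left (lower) neighbor
            "half_up":   nearest neighbor; ties round up
            "half_even": nearest neighbor; ties go to the even grid index
    """
    scaled = value * 2 ** frac_bits
    if mode == "truncate":
        grid = math.floor(scaled)
    elif mode == "half_up":
        grid = math.floor(scaled + 0.5)
    else:  # half_even
        grid = round(scaled)               # Python rounds halves to even
    return grid * 2.0 ** -frac_bits

# With 2 fractional bits the grid spacing is 0.25, so 0.375 is exactly
# halfway between 0.25 and 0.5, and 0.125 is halfway between 0.0 and 0.25.
print(round_fxp(0.375, 2, "truncate"))    # 0.25
print(round_fxp(0.375, 2, "half_up"))     # 0.5
print(round_fxp(0.375, 2, "half_even"))   # 0.5  (grid index 2 is even)
print(round_fxp(0.125, 2, "half_up"))     # 0.25
print(round_fxp(0.125, 2, "half_even"))   # 0.0  (grid index 0 is even)
```

The last two lines show the bias difference: half up always moves ties away from zero in the positive direction, while half even splits them between the two neighbors.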

### 3. Why Use Fixed-Point Data Types?

If the fixed-point data type has inferior range and precision compared to floating point, why use fixed-point numbers at all? The most common reason is that the selected processing platform either does not support floating-point arithmetic or cannot process floating-point numbers efficiently. FPGAs are a good example. While it is possible to implement floating-point processing on an FPGA, it is not speed-efficient and can significantly limit the amount of logic you can place on the device. To better understand this trade-off, examine how floating-point math is implemented on a binary device.

In this example, add two floating-point numbers whose mantissas lie in the range 1 to 10: 9.99×10^{5} and 2.00×10^{3}. To implement a floating-point addition on a binary processing device, you must complete the following steps. First, align the exponents of the two numbers; here the addend is shifted to 0.02×10^{5}. Next, add the two mantissas to get the intermediate result, 10.01×10^{5}. Finally, shift the mantissa back to fit within the range, yielding 1.001×10^{6}.
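A decimal sketch of the three steps in Python (the operands 9.99×10^5 and 2.00×10^3 are inferred from the intermediate values above; real hardware does the same shifts in binary):

```python
mant_a, exp_a = 9.99, 5      # 9.99 x 10^5
mant_b, exp_b = 2.00, 3      # 2.00 x 10^3

# Step 1: align exponents -- dynamically shift the smaller operand.
shift = exp_a - exp_b
mant_b /= 10 ** shift        # 2.00 x 10^3 becomes 0.02 x 10^5

# Step 2: add the aligned mantissas.
mant_sum, exp_sum = mant_a + mant_b, exp_a   # 10.01 x 10^5

# Step 3: normalize -- shift the mantissa back into the 1..10 range.
while mant_sum >= 10:
    mant_sum /= 10
    exp_sum += 1             # result: 1.001 x 10^6

print(round(mant_sum, 6), exp_sum)
```

The shift amounts in steps 1 and 3 depend on the operand values, which is exactly what makes them dynamic shifts at run time.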

The two shifts are called dynamic shifts because the number of bits to shift is determined at run time. These dynamic shifts have a large impact on the resource utilization of an FPGA. Below is a comparison of a 32-bit addition implemented in both fixed-point and floating-point data types. The floating-point implementation of this simple addition uses 10 percent of a 1M-gate FPGA, while the fixed-point addition costs only 0.2 percent.

In many processing platforms, specialized hardware units are integrated into the device to handle floating-point processing more efficiently. However, these units may have drawbacks such as cost, lack of parallel processing, or speed and timing limitations that prevent them from being the best choice for a given application. In these cases, the fixed-point representation allows efficient processing while maintaining an acceptable level of algorithm performance.

### 4. Using Fixed-Point IP in LabVIEW FPGA

FPGAs are a digital processing platform that offers true parallelism, providing greater performance and tighter timing than microprocessor systems. However, most FPGA platforms do not natively support floating-point processing, so fixed-point implementations are often necessary to achieve the required algorithm accuracy and performance. With the graphical programming language of the LabVIEW FPGA Module, you can program National Instruments FPGA-based reconfigurable I/O (RIO) platforms, including an out-of-the-box embedded architecture consisting of a real-time microprocessor connected to an FPGA with modular I/O devices, to interface with any signal or protocol.

The LabVIEW FPGA Module includes many fixed-point and integer-based IP blocks that you can easily adapt to function with the fixed-point data type. Part 2 of this article examines the fixed-point capabilities of the LabVIEW FPGA Module as well as the ready-to-use IP functions available with the module. If you would like to learn more about some of these features now, visit the "Resources" section below.

### 5. Resources

Download the High-Throughput Fixed-Point Math Library from LabVIEW FPGA IPNet

Download the FFT from LabVIEW FPGA IPNet

Caveats and Recommendations for Using Fixed-Point Numbers

How Can I Transfer My Fixed-Point Data Using a FIFO or Memory in LabVIEW 8.5.x?

Using the Fixed-Point Data Type in LabVIEW FPGA

*IP Corner addresses issues and presents technical information on LabVIEW FPGA application reusable functionality, also known as FPGA IP. This article series is designed for those interested in learning, testing, or discussing topics to make FPGA designs better and faster through the reuse of IP.*