# Convert Algorithms from Floating-Point to Fixed-Point in LabVIEW Communications

Publish Date: Mar 02, 2018

## Overview

Researchers looking to validate whether their ideas are practically relevant typically need to convert their algorithms from floating-point representations to fixed-point implementations. This conversion is driven largely by cost and performance: fixed-point algorithms can run on lower-cost parts with lower power consumption, so researchers inventing a new algorithm are keen to prove its efficacy in fixed-point to demonstrate its viability for commercial implementation. Unfortunately, converting a design from floating-point to fixed-point is not trivial. It requires careful tradeoffs in choosing the appropriate fixed-point data types and in defining the overflow and rounding strategies of each operation, so that the fixed-point design maintains sufficient fidelity to the original floating-point design while still adhering to the performance constraints of the system.

To provide more insight into the tradeoff, it helps to understand how floating-point and fixed-point representations operate. In floating-point, the position of the binary point is encoded by an exponent and adjusts dynamically, providing the best possible precision for each value's magnitude. Traditional desktop processors commonly use this representation because typical applications benefit from the increased precision, at the expense of power and computational complexity. When optimizing a design for mass deployment, however, designers can't rely on the ease of use of floating-point and need to convert their designs to fixed-point.
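A small Python sketch (not LabVIEW code) illustrates why floating-point is forgiving: because the exponent adjusts automatically, a double keeps roughly the same *relative* precision whether the value is tiny or large.

```python
# Illustration: the spacing between adjacent doubles (one "ulp") grows with
# the magnitude of the value, so the relative precision stays nearly constant.
import math

for value in (1.5e-6, 1.5, 1.5e6):
    ulp = math.ulp(value)  # distance to the next representable double
    print(f"{value:>10g}: absolute step {ulp:.3g}, relative step {ulp / value:.3g}")
```

A fixed-point format, by contrast, has a constant *absolute* step, so the designer must choose the format around the expected signal range.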

To achieve a given level of performance at lower cost and with lower power consumption, developers often rely on fixed-point implementations. These implementations should ideally be tested with real-world signals in an FPGA-based hardware prototype to assess the viability of a design for deployment. In fixed-point, the binary point is fixed, so a set number of bits sit before and after it. Fixed-point math on FPGAs can be performed exactly like integer math, with the added benefit that the position of the binary point is tracked for easier comparison with the floating-point counterpart. Some accuracy is traded away, but fixed-point math on an FPGA uses significantly fewer resources and clock cycles than floating-point math.
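The "integer math plus a tracked binary point" idea can be sketched in Python (the word length and fractional bit count below are illustrative choices, not defaults of any tool):

```python
# Fixed-point values stored as plain integers with an implied binary point,
# mirroring how fixed-point math maps onto FPGA integer hardware.
WORD_LENGTH = 16      # total bits (signed)
FRAC_BITS = 12        # bits after the binary point
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize a real number to a signed fixed-point integer, saturating on overflow."""
    raw = round(x * SCALE)
    lo, hi = -(1 << (WORD_LENGTH - 1)), (1 << (WORD_LENGTH - 1)) - 1
    return max(lo, min(hi, raw))

def to_float(q):
    """Interpret the stored integer back as a real number."""
    return q / SCALE

def fixed_mul(a, b):
    """Multiply two fixed-point numbers: integer multiply, then rescale."""
    return (a * b) >> FRAC_BITS

a, b = to_fixed(1.5), to_fixed(-0.3125)
print(to_float(fixed_mul(a, b)))   # -0.46875 (exact here: both inputs are representable)
```

The multiply is a single integer operation plus a shift, which is why it is so much cheaper on an FPGA than a floating-point multiply with its exponent alignment and normalization logic.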

However, converting floating-point algorithms to fixed-point is not a straightforward process. It involves a tradeoff between the accuracy of the algorithm's outputs and the algorithm's performance on the FPGA. The process is frequently difficult and tedious, requiring extensive simulations to characterize the algorithm's numerical behavior.
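The kind of numerical characterization involved can be sketched as follows: run a simple algorithm (a 3-tap moving average, chosen here purely as an illustration) in both floating point and a candidate fixed-point format, then measure the worst-case deviation over a test signal.

```python
# Compare a floating-point reference against a quantized datapath.
import math
import random

FRAC_BITS = 10
SCALE = 1 << FRAC_BITS
quantize = lambda x: round(x * SCALE) / SCALE   # candidate fixed-point format

random.seed(0)
signal = [math.sin(0.1 * n) + 0.05 * random.uniform(-1, 1) for n in range(1000)]

def moving_average(xs, q=lambda x: x):
    out = []
    for i in range(2, len(xs)):
        # Quantize each intermediate value, as a fixed-point datapath would.
        out.append(q(q(xs[i]) / 3 + q(xs[i - 1]) / 3 + q(xs[i - 2]) / 3))
    return out

ref = moving_average(signal)             # floating-point reference
fxp = moving_average(signal, quantize)   # fixed-point candidate
worst = max(abs(a - b) for a, b in zip(ref, fxp))
print(f"worst-case error with {FRAC_BITS} fractional bits: {worst:.2e}")
```

Rerunning this with different values of `FRAC_BITS` makes the accuracy-versus-word-length tradeoff concrete: each extra fractional bit roughly halves the quantization error but costs hardware resources.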

LabVIEW Communications eliminates the tedium of converting floating-point algorithms to fixed-point by providing tools that aid the designer through the conversion. Developers can perform the conversion iteratively in a data-driven process: they define conversion criteria, then modify the fixed-point data types based on a histogram of the data and on feedback about the error at every node compared to the initial floating-point representation. Furthermore, with LabVIEW Communications, researchers can mass-edit the data types and propagation behavior of operators or specify settings and behavior on a per-operator basis.
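The data-driven sizing step can be illustrated with a Python sketch (the function below is a hypothetical helper, not part of the LabVIEW Communications API): record the values observed at a node during simulation, then derive the integer word length needed to cover that range without overflow.

```python
# Size the integer portion of a signed fixed-point type from observed data.
import math

def bits_needed(samples):
    """Integer bits (including sign) needed to cover the observed range."""
    peak = max(abs(min(samples)), abs(max(samples)))
    return max(1, math.ceil(math.log2(peak)) + 1)  # +1 for the sign bit

node_values = [0.4, -3.7, 2.9, 7.5, -6.1]   # values logged at one node
print(bits_needed(node_values))              # 4 integer bits cover the range ±7.5
```

The remaining bits of the chosen word length then go to the fractional part, where the error-per-node feedback guides how many are actually required.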

LabVIEW Communications enables rapid prototyping of communications designs by allowing designers to move from models developed in floating-point to fixed-point implementations more quickly and with substantially more insight. Through the data-driven, iterative conversion process, designers can easily analyze the tradeoff between accuracy and resource utilization and prepare their ideas for deployment to prototype hardware faster.