Last Modified: October 6, 2016

Rounding in the fixed-point data type occurs when the precision of the input value or the result of an operation is greater than the precision of the output type of an operator.

When rounding occurs, LabVIEW coerces the value to the precision of the output type according to the rounding mode you select for the operator.

Many decimal (base 10) numbers are not exactly representable as binary (base 2) numbers. Likewise, the results of many arithmetic operations can be represented exactly only with long fractional parts, or cannot be represented exactly at all. Because the fractional length of a fixed-point number or operator is fixed when you define it, you must designate a rounding mode when performing arithmetic with fixed-point data. The following table describes how LabVIEW behaves in each rounding mode.

Rounding Mode | Behavior |
---|---|
Round Half to Even | Rounds the value to the nearest value that the output type can represent. If the value to round is exactly between two representable values, LabVIEW chooses the even value so that the least significant bit (LSB) after rounding is zero. This rounding mode has the largest impact on performance but produces the most accurate output values by neutralizing the bias towards higher values that occurs when you perform multiple rounding operations using the Round Half Up mode. For the Round Half to Even mode, LabVIEW defines any binary value with a least significant bit of zero as even, regardless of its decimal representation. Round Half to Even is the default rounding mode. |
Round Half Up | Rounds the value to the nearest value that the output type can represent. If the value to round is exactly between two representable values, this mode rounds the value up to the higher of the two valid values, in the positive direction. LabVIEW adds half a least significant bit to the value and then truncates it. This rounding mode produces more accurate results than Round Down (truncate) but has a larger resource cost. |
Round Down (truncate) | Rounds the value down toward negative infinity to the nearest value that the output type can represent. LabVIEW discards the least significant bits of the value. This rounding mode has the best performance but produces the least accurate output values. |
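The three behaviors above can be sketched outside LabVIEW. The following Python snippet is an illustration, not LabVIEW code; it quantizes a value to a given fractional length under each mode, using exact rational arithmetic so that ties are represented precisely.

```python
import math
from fractions import Fraction

def quantize(value, frac_bits, mode):
    """Quantize value to a resolution of 2**-frac_bits using the given mode."""
    lsb = Fraction(1, 2**frac_bits)
    scaled = Fraction(value) / lsb               # express the value in units of one LSB
    if mode == "half_even":
        n = round(scaled)                        # Python's round() is round-half-to-even
    elif mode == "half_up":
        n = math.floor(scaled + Fraction(1, 2))  # add half an LSB, then truncate
    elif mode == "truncate":
        n = math.floor(scaled)                   # discard bits below the LSB
    else:
        raise ValueError(mode)
    return n * lsb

# A tie at a fractional length of 0: 2.5 sits exactly between 2 and 3.
print(quantize(2.5, 0, "half_even"))  # 2
print(quantize(2.5, 0, "half_up"))    # 3
print(quantize(2.5, 0, "truncate"))   # 2
```

Python's built-in `round` happens to implement round-half-to-even already, which makes the tie-breaking behavior easy to demonstrate.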

The following image shows the To Fixed-Point node converting the floating-point value 2.5 to a U4 <4.0> fixed-point configuration.

The floating-point input in the previous image requires at least a U3 <2.1> fixed-point number to be represented exactly. This means that although the value 2.5 falls within the range of the fixed-point operator in the previous image, the input value does not match the precision of the U4 <4.0> output type. LabVIEW must round the input to match that precision according to the rounding mode you select. The following examples show the effect of each rounding mode on the example input.

Rounding Mode | FXP Configuration | Input Value (Decimal) | Coerced Value (Binary) | Coerced Value (Decimal) | Behavior |
---|---|---|---|---|---|
Round Half to Even | U4 <4.0> | 2.5 | 0010 | 2 | The input value falls exactly between two representable values, so LabVIEW rounds toward the even value with an LSB of 0. |
Round Half Up | U4 <4.0> | 2.5 | 0011 | 3 | The input value falls halfway between two representable values, so LabVIEW rounds up to the higher of the two representable values. |
Round Down (truncate) | U4 <4.0> | 2.5 | 0010 | 2 | LabVIEW discards the fractional bits of the input value and coerces the input down to 2. |
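The rows above can be reproduced with a short Python sketch (again, an illustration rather than LabVIEW code) that coerces 2.5 to an integer value, as a U4 <4.0> output type with fractional length 0 does, and prints the resulting 4-bit pattern under each mode.

```python
import math
from fractions import Fraction

def coerce_u4(value, mode):
    # Fractional length 0, as in the U4 <4.0> output type above.
    x = Fraction(value)
    if mode == "half_even":
        return round(x)                        # ties go to the even value
    if mode == "half_up":
        return math.floor(x + Fraction(1, 2))  # add half an LSB, then truncate
    return math.floor(x)                       # truncate toward negative infinity

for mode in ("half_even", "half_up", "truncate"):
    n = coerce_u4(2.5, mode)
    print(f"{mode:>9}: {n:04b} = {n}")
# half_even: 0010 = 2
#   half_up: 0011 = 3
#  truncate: 0010 = 2
```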

The following image shows the To Fixed-Point node converting the floating-point value -2.5 to an I4 <4.0> fixed-point configuration.

The floating-point input in the previous image requires at least an I4 <3.1> fixed-point number to be represented exactly. This means that although the value -2.5 falls within the range of the fixed-point operator in the previous image, the input value does not match the precision of the I4 <4.0> output type. LabVIEW must round the input to match that precision according to the rounding mode you select. The following examples show the effect of each rounding mode on the example input.

Rounding Mode | FXP Configuration | Input Value (Decimal) | Coerced Value (Binary) | Coerced Value (Decimal) | Behavior |
---|---|---|---|---|---|
Round Half to Even | I4 <4.0> | -2.5 | 1110 | -2 | The input value falls exactly between two representable values, so LabVIEW rounds toward the even value of the two. |
Round Half Up | I4 <4.0> | -2.5 | 1110 | -2 | The input value falls halfway between two representable values, so LabVIEW rounds toward the higher value, in the positive direction. |
Round Down (truncate) | I4 <4.0> | -2.5 | 1101 | -3 | LabVIEW discards the fractional bits of the input value and coerces the input down toward negative infinity, to the representable value -3. |
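The signed case can be checked the same way. This Python sketch (an illustration, not LabVIEW code) coerces -2.5 under each mode and formats the result as a 4-bit two's complement pattern, matching the I4 <4.0> rows above.

```python
import math
from fractions import Fraction

def coerce_i4(value, mode):
    # Fractional length 0, as in the I4 <4.0> output type above.
    x = Fraction(value)
    if mode == "half_even":
        return round(x)                        # round(-2.5) gives -2 (even)
    if mode == "half_up":
        return math.floor(x + Fraction(1, 2))  # -2.5 + 0.5 = -2.0, truncates to -2
    return math.floor(x)                       # floor(-2.5) gives -3

def twos_complement(n, bits=4):
    # Bit pattern of n in two's complement at the given word length.
    return format(n & ((1 << bits) - 1), f"0{bits}b")

for mode in ("half_even", "half_up", "truncate"):
    n = coerce_i4(-2.5, mode)
    print(f"{mode:>9}: {twos_complement(n)} = {n}")
# half_even: 1110 = -2
#   half_up: 1110 = -2
#  truncate: 1101 = -3
```

Note that Round Half Up on a negative tie moves the value in the positive direction (-2.5 becomes -2), while truncation always moves it toward negative infinity (-2.5 becomes -3).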

A *rounding bias* occurs when rounding errors accumulate in your code during a series of operations. The rounding mode you select can help you control rounding biases that result from multiple rounding operations.

The following image shows input-to-round, a 16-bit fixed-point constant, multiplied by another value slightly less than 1. The result of each iteration of the For Loop becomes the new input-to-round value.

In this code, each time the precision of the result of the multiplication operation exceeds the precision of the output type configuration for the Multiply node, LabVIEW rounds the result according to the specified rounding mode. The following table shows the outcome of this code in each rounding mode after 10 iterations.

Rounding Mode for Operator | Result after 10 Iterations | Percent Error (from 64-bit calculation) | Behavior |
---|---|---|---|
Round Half to Even | 3.74566650390625 | +0.0065 % | This rounding mode accumulates the least error through 10 iterations. |
Round Half Up | 3.7457275390625 | +0.0081 % | This rounding mode accumulates more error than Round Half to Even but has a smaller resource cost on hardware. |
Round Down (truncate) | 3.7451171875 | -0.0082 % | This rounding mode accumulates the most error through 10 iterations and produces a smaller result than both the other rounding modes and an error-free calculation. |
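The accumulation effect can be illustrated with a short Python simulation (not LabVIEW code). The fractional length, starting value, and multiplier below are hypothetical stand-ins, since the exact constants from the block diagram are not given here, so the printed results will not match the table; the point is how repeated rounding makes the three modes drift apart from the unrounded reference.

```python
import math
from fractions import Fraction

FRAC_BITS = 12                 # hypothetical fractional length of the output type
LSB = Fraction(1, 2**FRAC_BITS)

def quantize(x, mode):
    scaled = x / LSB
    if mode == "half_even":
        n = round(scaled)                        # ties to even
    elif mode == "half_up":
        n = math.floor(scaled + Fraction(1, 2))  # add half an LSB, then truncate
    else:  # truncate
        n = math.floor(scaled)                   # toward negative infinity
    return n * LSB

start = Fraction(4)            # hypothetical input-to-round value
factor = Fraction(4095, 4096)  # hypothetical multiplier slightly below 1

exact = start * factor**10     # reference result with no intermediate rounding
for mode in ("half_even", "half_up", "truncate"):
    x = start
    for _ in range(10):
        x = quantize(x * factor, mode)  # round after every multiplication
    error = float((x - exact) / exact) * 100
    print(f"{mode:>9}: result={float(x):.10f}  error={error:+.4f} %")
```

Because truncation only ever rounds down, its accumulated error is always negative, while Round Half Up tends to drift positive; Round Half to Even breaks ties in both directions and typically stays closest to the reference.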