DSP48 Inference Pattern in Optimized FPGA VIs

Last Modified: March 30, 2016

A DSP48 slice is a digital signal processing element that performs addition and multiplication operations. DSP48 inference is the process by which the compiler recognizes a pattern of addition and multiplication operations in code and implements those operations on DSP48 slices. Structuring your algorithm around the DSP48 inference pattern allows the compiler to map those operations onto DSP48 slices and make efficient use of the slices available on the device.

DSP48 Inference Pattern

A 25-bit by 18-bit signed multiplication operation is central to efficient DSP48 inference.

The inference pattern consists of the following operations, with the maximum bit widths for each. A sketch of the arithmetic follows the list.

  1. An optional 25-bit addition operation before the 25-bit input to the multiplication operation.
  2. The output of the multiplication operation is 43 bits (full precision) and is sign-extended to 48 bits for the final output of the pattern.
  3. An optional 48-bit addition operation after the multiplication operation.
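
Because LabVIEW FPGA code is graphical, you build this pattern from fixed-point Add and Multiply nodes on the block diagram. The following C sketch models only the arithmetic of the pattern under the bit widths listed above; the function name and the sign_extend helper are illustrative assumptions, not part of any NI or Xilinx API.

    #include <stdint.h>

    /* Sign-extend the low 'bits' bits of x (illustrative helper). */
    static int64_t sign_extend(int64_t x, int bits)
    {
        int64_t sign = INT64_C(1) << (bits - 1);
        return ((x & ((INT64_C(1) << bits) - 1)) ^ sign) - sign;
    }

    /* Arithmetic of the DSP48 inference pattern. Inputs d and a are
     * assumed to be 25-bit signed values, b an 18-bit signed value,
     * and c a 48-bit signed value. */
    int64_t dsp48_pattern(int32_t d, int32_t a, int32_t b, int64_t c)
    {
        /* 1. Optional pre-adder; the result wraps to 25 bits. */
        int64_t pre = sign_extend((int64_t)d + a, 25);

        /* 2. 25-bit x 18-bit signed multiply; the full-precision
         * product always fits in 43 bits. */
        int64_t prod = pre * sign_extend(b, 18);

        /* 3. Optional post-adder; the result wraps to 48 bits. */
        return sign_extend(prod + c, 48);
    }

Note that the multiply result needs no wrap step of its own, because the full-precision product of a 25-bit and an 18-bit signed value always fits in 43 bits before it is sign-extended to 48.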

Setting Overflow and Rounding Modes for DSP48 Slices

A DSP48 slice contains no saturation or rounding logic: overflow wraps and extra fractional bits are truncated. You must therefore set the overflow and rounding modes of each operation to match this behavior.

In the Fixed-Point Configurator dialog box, set the overflow mode of each node in the inference pattern to Wrap and the rounding mode to Round Down (truncate).
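
In terms of bit operations, Wrap discards high-order bits and Round Down performs an arithmetic right shift. The following C sketch models how a full-precision result is narrowed under these modes; the helper name and widths are assumptions for illustration.

    #include <stdint.h>

    /* Narrow a signed fixed-point value with Wrap and Round Down
     * (truncate) semantics: an arithmetic right shift drops frac_drop
     * fractional bits, rounding toward negative infinity, and masking
     * plus re-signing wraps the result into word_len bits. Assumes
     * '>>' on signed values is an arithmetic shift, which is typical
     * but implementation-defined in C. */
    int64_t wrap_truncate(int64_t x, int frac_drop, int word_len)
    {
        int64_t truncated = x >> frac_drop;            /* Round Down */
        int64_t sign = INT64_C(1) << (word_len - 1);
        int64_t wrapped = truncated & ((INT64_C(1) << word_len) - 1);
        return (wrapped ^ sign) - sign;                /* Wrap */
    }

These are the behaviors the slice exhibits naturally; any other overflow or rounding mode requires extra logic outside the slice, which can prevent the pattern from being inferred.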

Troubleshooting DSP48 Slice Usage

If your inputs combine integer and fractional bits, LabVIEW attempts to align the binary points so the inferred operation fits on a single DSP48 slice. If the binary points do not align, the widened operands are more likely to exceed the bit-width limits of the slice, forcing the compiler to use additional DSP48 slices.

To keep bit widths within the limits of the pattern and reduce DSP48 utilization, make sure all inputs to the DSP48 inference pattern have the same integer and fractional word lengths.
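
For example, with assumed word lengths, adding a 25-bit value that has 10 fractional bits to a 25-bit value that has 15 fractional bits forces a 5-bit left shift to align the binary points, widening the first operand to 30 bits. The following C sketch illustrates the widening; the widths are hypothetical examples, not LabVIEW code.

    #include <stdint.h>

    /* a_q10: 25-bit signed value with 10 fractional bits.
     * b_q15: 25-bit signed value with 15 fractional bits.
     * Aligning the binary points widens a_q10 to 30 bits, which no
     * longer fits the 25-bit adder input of a single DSP48 slice. */
    int64_t aligned_sum(int32_t a_q10, int32_t b_q15)
    {
        int64_t a_q15 = (int64_t)a_q10 << 5;  /* 30-bit operand after alignment */
        return a_q15 + (int64_t)b_q15;        /* 31-bit sum, 15 fractional bits */
    }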
