A DSP48 slice is a dedicated digital signal processing element that performs multiplication and addition operations. DSP48 inference is the process by which the compiler recognizes a pattern of multiplication and addition operations in your code and implements those operations on DSP48 slices. Structuring your algorithm to match a DSP48 inference pattern allows the compiler to infer DSP48 slices and make efficient use of the slices available on the device.
A 25-bit by 18-bit signed multiplication operation is central to efficient DSP48 inference.
The following image displays the inference pattern and the maximum bitwidths for each operation.
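As a rough model of this pattern, the sketch below checks multiply-add operands against the DSP48 input widths described above. The function names and the pure-Python framing are illustrative only, not LabVIEW code or a vendor API:

```python
def fits_signed(value, bits):
    """True if value fits a two's-complement field of the given width."""
    return -(1 << (bits - 1)) <= value < (1 << (bits - 1))

def dsp48_mac(a, b, c):
    """Illustrative model of an a * b + c pattern with DSP48-style checks:
    a feeds the 25-bit multiplier input, b the 18-bit multiplier input,
    and c the 48-bit adder input."""
    assert fits_signed(a, 25), "a exceeds the 25-bit multiplier input"
    assert fits_signed(b, 18), "b exceeds the 18-bit multiplier input"
    assert fits_signed(c, 48), "c exceeds the 48-bit adder input"
    return a * b + c  # a 25 x 18-bit signed product needs at most 43 bits
```

Operands that pass these checks can map onto the single-slice multiply-add path; wider operands force the compiler to split the operation across additional resources.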
There is no saturation or rounding logic in the DSP48 slice, so you must set the overflow and rounding modes of your operations accordingly: wrap on overflow and truncate when rounding.
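The distinction matters because the slice's adder wraps natively, while saturation would require extra logic outside the slice. The two behaviors can be modeled as follows (illustrative Python, not LabVIEW code):

```python
def wrap(value, bits):
    """Two's-complement wraparound, the behavior the DSP48 adder
    provides natively."""
    mask = (1 << bits) - 1
    value &= mask
    if value >= 1 << (bits - 1):
        value -= 1 << bits
    return value

def saturate(value, bits):
    """Clamp to the representable range; this is not available inside
    the slice and would cost additional logic if requested."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, value))
```

For example, in an 8-bit field the out-of-range value 130 wraps to -126 but saturates to 127; configuring an operation to saturate therefore prevents it from fitting entirely within the slice.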
If your inputs combine integer and fractional bits, LabVIEW attempts to align the binary points so that the inferred operation still fits on a single DSP48 slice. If the binary points cannot be aligned within the slice's input widths, the operands exceed the physical limits of a single slice and the operation consumes additional DSP48 slices.
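The reason alignment can push an operand past the slice limits is that shifting a word to match another word's binary point widens it. A minimal sketch, assuming each operand is described by a raw integer word, its width, and its count of fractional bits (all names here are hypothetical):

```python
def align(word_a, width_a, frac_a, word_b, width_b, frac_b):
    """Left-shift the operand with fewer fractional bits so both share a
    common binary point; returns the aligned words and their new widths.
    Each shift by s bits grows that operand's width by s bits."""
    frac = max(frac_a, frac_b)
    shift_a, shift_b = frac - frac_a, frac - frac_b
    return (word_a << shift_a, width_a + shift_a,
            word_b << shift_b, width_b + shift_b)
```

For instance, aligning a 20-bit word with no fractional bits against an operand with 10 fractional bits widens it to 30 bits, beyond the 25-bit multiplier input, so that combination could not map to a single slice.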