# Image Filtering Overview

## Overview

You use image filtering to remove noise, sharpen contrast, or highlight contours in your images. This document discusses the basic distinctions between types of filters and some of the uses for each. Two of the most common classifications of filters are based on their linearity and frequency response. A third classification distinguishes spatial filters, which are applied directly to pixel values, from frequency-domain filters, which are applied to a Fourier-transformed representation of an image. Related links go into more detail on the most common filter types.

## Linearity

Linear filters, also known as convolution filters, are so named because you can represent them in linear algebra as a matrix multiplication (in effect, a large set of simultaneous linear equations). According to the IMAQ Vision User Manual:

A convolution is a mathematical function that replaces each pixel by a weighted sum of its neighbors. The matrix defining the neighborhood of the pixel also specifies the weight assigned to each neighbor. This matrix is called the convolution kernel.

For each pixel P(i, j) in an image (where i and j represent the coordinates of the pixel), the convolution kernel is centered on P(i, j). Each pixel masked by the kernel is multiplied by the coefficient placed on top of it. P(i, j) becomes the sum of these products.

In the case of a 3 × 3 neighborhood, you can index the pixels surrounding P(i, j) and the coefficients of the kernel, K, as follows:

```
Pixels:                                          Kernel:
P(i – 1, j – 1)  P(i, j – 1)  P(i + 1, j – 1)    K(i – 1, j – 1)  K(i, j – 1)  K(i + 1, j – 1)
P(i – 1, j)      P(i, j)      P(i + 1, j)        K(i – 1, j)      K(i, j)      K(i + 1, j)
P(i – 1, j + 1)  P(i, j + 1)  P(i + 1, j + 1)    K(i – 1, j + 1)  K(i, j + 1)  K(i + 1, j + 1)
```

The pixel P(i, j) is given the value (1/N) Σ K(a, b) P(a, b), with a ranging from (i – 1) to (i + 1), and b ranging from (j – 1) to (j + 1). N is the normalization factor, equal to Σ K(a, b) or 1, whichever is greater.

Finally, if the new value P(i, j) is negative, it is set to 0. If the new value P(i, j) is greater than 255, it is set to 255 (in the case of 8-bit resolution).
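The procedure quoted above can be sketched in NumPy. This is a minimal illustration, not the manual's actual implementation: it applies a 3 × 3 kernel to each interior pixel, normalizes by N = max(Σ K, 1), and clamps the result to the 8-bit range as described.

```python
import numpy as np

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel as described above: each interior pixel
    becomes the normalized weighted sum of its 3x3 neighborhood,
    clamped to the 8-bit range [0, 255]."""
    # N is the sum of the kernel coefficients, or 1 if that sum is smaller.
    n = max(kernel.sum(), 1)
    out = image.astype(np.float64).copy()
    h, w = image.shape
    # Border pixels are left unchanged in this sketch; real libraries
    # offer several border-handling strategies (replicate, wrap, etc.).
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neighborhood = image[i - 1:i + 2, j - 1:j + 2].astype(np.float64)
            out[i, j] = (neighborhood * kernel).sum() / n
    # Clamp to 0..255, as described for 8-bit images.
    return np.clip(out, 0, 255).astype(np.uint8)

# A uniform 3x3 smoothing (low-pass) kernel: every neighbor weighted equally.
smooth = np.ones((3, 3), dtype=np.float64)
img = np.arange(25, dtype=np.uint8).reshape(5, 5)
blurred = convolve3x3(img, smooth)
```

Note that this follows the manual's description literally (a correlation-style weighted sum); for the symmetric kernels common in image processing, this is identical to a true convolution, which flips the kernel first.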

Non-linear filters are those that you cannot represent using a matrix formulation. Thresholding and equalization are typical non-linear operations. Other non-linear operations more commonly thought of as "filtering" include various edge-detection (high-pass) operations and median filtering, a low-pass filter well suited to removing speckle, or salt-and-pepper, noise from images.
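A median filter shows why some filters cannot be written as a convolution: taking the median of a neighborhood is not a weighted sum, so no kernel matrix can express it. The sketch below (pure NumPy, illustration only) replaces each interior pixel with the median of its 3 × 3 neighborhood, which removes isolated salt-and-pepper speckles without blurring the surrounding region.

```python
import numpy as np

def median3x3(image):
    """Median filter over a 3x3 neighborhood: each interior pixel is
    replaced by the median of its neighbors. Unlike a convolution,
    this cannot be expressed as a weighted sum, so it is non-linear."""
    out = image.copy()
    h, w = image.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.median(image[i - 1:i + 2, j - 1:j + 2])
    return out

# Salt-and-pepper noise: isolated extreme pixels in a flat gray region.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255   # "salt" speckle
img[1, 3] = 0     # "pepper" speckle
clean = median3x3(img)
```

Because a single outlier can never be the median of nine values, both speckles are restored to the background value, something no averaging (linear) kernel can do without also smearing the outlier into its neighbors.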

Related link: Median and Nth Order Filtering

## Frequency Response

A fundamental way to characterize filters is by how they attenuate or amplify certain frequency ranges. In general, there are many different classes of frequency responses, but for images, the broad categories of low-pass or high-pass are sufficient. You use low-pass filters for operations like noise removal or image smoothing ("soft focus" is an example of a low-pass filtering operation). High-pass filters respond to abrupt changes in light intensity in an image, so you use them to enhance details in the image. The disadvantage of high-pass filters is that they tend to enhance high-frequency noise along with the image details of interest.

Note that you can use frequency response to classify both spatial and frequency filters; those terms, explained in the next section, refer only to how the filter is implemented. Every filter, however implemented, can be described by some kind of frequency response.

## Spatial vs. Frequency Filters

Spatial filters alter pixel values based on variations in light intensity within their neighborhood, while frequency filters operate in the frequency domain on images that have been Fourier-transformed (via a DFT or FFT). After the filtering operation, the inverse transform returns the result to the spatial domain, yielding the enhanced image. You can also classify both spatial and frequency filters as linear or non-linear, and as low-pass or high-pass.
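The transform-filter-invert sequence can be sketched with NumPy's FFT routines. This is an illustrative ideal low-pass filter, not a production design: `cutoff`, a radius in frequency space, is a hypothetical parameter introduced for this sketch.

```python
import numpy as np

def fft_low_pass(image, cutoff):
    """Frequency-domain low-pass filter: FFT the image, zero out all
    frequency components farther than `cutoff` from the center of the
    shifted spectrum, then apply the inverse transform."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # Circular mask centered on the zero-frequency (DC) component.
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    spectrum[dist > cutoff] = 0
    # The inverse transform returns complex values; for a real input
    # image the imaginary parts are only numerical noise.
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

img = np.random.default_rng(0).uniform(0, 255, (64, 64))
smoothed = fft_low_pass(img, cutoff=8)
```

The abrupt, ideal cutoff used here is precisely the kind of design that produces the ringing artifacts discussed next; practical frequency filters taper the mask (with a Gaussian or Butterworth profile, for example) to suppress them.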

Frequency filters have the advantage of being extremely easy to design and implement, but they can introduce artifacts into the image when the inverse transform is applied. These artifacts typically appear as "ringing" or ripples that emanate from edges in the image. If your application is sensitive to this phenomenon, you should use a spatial filter instead.