Spatial Filtering
- Updated 2025-11-25
- 19 minute(s) read
Filters are divided into two types: linear (also called convolution) and nonlinear.
A convolution is an algorithm that consists of recalculating the value of a pixel based on its own pixel value and the pixel values of its neighbors weighted by the coefficients of a convolution kernel. The sum of this calculation is divided by the sum of the elements in the kernel to obtain a new pixel value. The size of the convolution kernel does not have a theoretical limit and can be either square or rectangular (3 × 3, 5 × 5, 5 × 7, 9 × 3, 127 × 127, and so on).
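The recalculation described above can be sketched in a few lines. This is an illustrative Python/NumPy sketch, not NI Vision code; the function name and the border-handling choice (leave edge pixels untouched) are assumptions made for the example:

```python
import numpy as np

def convolve(image, kernel):
    """Recompute each interior pixel as the kernel-weighted sum of its
    neighborhood, divided by the sum of the kernel elements (or by 1 if
    that sum is not positive)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    norm = max(kernel.sum(), 1)
    out = image.astype(float).copy()  # border pixels keep their value
    for i in range(ph, image.shape[0] - ph):
        for j in range(pw, image.shape[1] - pw):
            region = image[i - ph:i + ph + 1, j - pw:j + pw + 1]
            out[i, j] = np.clip((kernel * region).sum() / norm, 0, 255)
    return out

# A 3 x 3 kernel of ones: each new pixel is the average of its neighborhood.
smooth = np.ones((3, 3))
img = np.full((5, 5), 100.0)
img[2, 2] = 200.0  # one bright pixel on a flat background
result = convolve(img, smooth)  # the bright pixel is averaged down
```

The bright pixel's new value is (8 × 100 + 200) / 9 ≈ 111, illustrating how the kernel-sum division keeps the output in the same intensity range as the input.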
Convolutions are divided into four families:
- Gradient
- Laplacian
- Smoothing
- Gaussian
This grouping is determined by the convolution kernel contents or the weight assigned to each pixel, which depends on the geographical position of that pixel in relation to the central kernel pixel.
Vision features a set of standard convolution kernels for each family and for the usual sizes (3 × 3, 5 × 5, and 7 × 7). You also can create your own kernels and choose what to put into them. The size of the user-defined kernel is virtually unlimited. With this capability, you can create filters with specific characteristics.
When to Use
Spatial filters serve a variety of purposes, such as detecting edges along a specific direction, contouring patterns, reducing noise, and detail outlining or smoothing. Filters smooth, sharpen, transform, and remove noise from an image so that you can extract the information you need.
Nonlinear filters either extract the contours (edge detection) or remove the isolated pixels. NI Vision has six different methods you can use for contour extraction (Differentiation, Gradient, Prewitt, Roberts, Sigma, or Sobel). The Canny Edge Detection filter is a specialized edge detection method that locates edges accurately, even under low signal-to-noise conditions in an image.
To harmonize pixel values, choose between two filters, each of which uses a different method: NthOrder and LowPass. These functions require a kernel size and either an order number (NthOrder) or a percentage (LowPass) as inputs.
Spatial filters alter pixel values with respect to variations in light intensity in their neighborhood. The neighborhood of a pixel is defined by the size of a matrix, or mask, centered on the pixel itself. These filters can be sensitive to the presence or absence of light-intensity variations.
Spatial filters fall into two categories:
- Highpass filters emphasize significant variations of the light intensity usually found at the boundary of objects. Highpass frequency filters help isolate abruptly varying patterns that correspond to sharp edges, details, and noise.
- Lowpass filters attenuate variations of the light intensity. Lowpass frequency filters help emphasize gradually varying patterns such as objects and the background. They have the tendency to smooth images by eliminating details and blurring edges.
Concepts
The following table describes the different types of spatial filters.
| Filter Type | Filters |
|---|---|
| Linear Highpass | Gradient, Laplacian |
| Linear Lowpass | Smoothing, Gaussian |
| Nonlinear Highpass | Gradient, Roberts, Sobel, Prewitt, Differentiation, Sigma |
| Nonlinear Lowpass | Median, Nth Order, Lowpass |
Linear Filters
A linear filter replaces each pixel by a weighted sum of its neighbors. The matrix defining the neighborhood of the pixel also specifies the weight assigned to each neighbor. This matrix is called the convolution kernel.
If the filter kernel contains both negative and positive coefficients, the transfer function is equivalent to a weighted differentiation and produces a sharpening or highpass filter. Typical highpass filters include gradient and Laplacian filters.
If all coefficients in the kernel are positive, the transfer function is equivalent to a weighted summation and produces a smoothing or lowpass filter. Typical lowpass filters include smoothing and Gaussian filters.
Gradient Filter
A gradient filter highlights the variations of light intensity along a specific direction, which has the effect of outlining edges and revealing texture.
Given the following source image:
A gradient filter extracts horizontal edges to produce the following image.
A gradient filter highlights diagonal edges to produce the following image.
Kernel Definition
A gradient convolution filter is a first-order derivative. Its kernel uses the following model:
where a, b, c, and d are integers and x = 0 or 1.
Filter Axis and Direction
This kernel has an axis of symmetry that runs between the positive and negative coefficients of the kernel and through the central element. This axis of symmetry gives the orientation of the edges to outline. For example, if a = 0, b = –1, c = –1, d = –1, and x = 0, the kernel is the following:
The axis of symmetry is located at 135°.
For a given direction, you can design a gradient filter to highlight or darken the edges along that direction. The filter actually is sensitive to the variations of intensity perpendicular to the axis of symmetry of its kernel. Given the direction D going from the negative coefficients of the kernel towards the positive coefficients, the filter highlights the pixels where the light intensity increases along the direction D, and darkens the pixels where the light intensity decreases.
The following two kernels emphasize edges oriented at 135°.
Prewitt #10 highlights pixels where the light intensity increases along the direction going from northeast to southwest. It darkens pixels where the light intensity decreases along that same direction. This processing outlines the northeast front edges of bright regions such as the ones in the illustration.
Prewitt #2 highlights pixels where the light intensity increases along the direction going from southwest to northeast. It darkens pixels where the light intensity decreases along that same direction. This processing outlines the southwest front edges of bright regions such as the ones in the illustration.
Edge Extraction and Edge Highlighting
The gradient filter has two effects, depending on whether the central coefficient x is equal to 1 or 0.
- If the central coefficient is null (x = 0), the gradient filter highlights the pixels where variations of light intensity occur along a direction specified by the configuration of the coefficients a, b, c, and d. The transformed image contains black-white borders at the original edges, and the shades of the overall patterns are darkened.
- If the central coefficient is equal to 1 (x = 1), the gradient filter detects the same variations as mentioned above, but superimposes them over the source image. The transformed image looks like the source image with edges highlighted. Use this type of kernel for grain extraction and perception of texture.
Notice that Prewitt #15 can be decomposed as follows:
This equation indicates that the Prewitt #15 kernel adds the edges extracted by the Prewitt #14 kernel to the source image.
Prewitt #15 = Prewitt #14 + Source Image
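The decomposition can be checked with simple kernel arithmetic. The kernel values below are an assumption (a standard west-east Prewitt gradient); the actual Prewitt #14 and #15 contents are not reproduced in this text:

```python
import numpy as np

# Assumed edge-extraction kernel: a gradient kernel with central
# coefficient x = 0 (standard west-east Prewitt mask).
extract = np.array([[-1, 0, 1],
                    [-1, 0, 1],
                    [-1, 0, 1]])

# Identity kernel: all weight on the central pixel, leaves the image unchanged.
identity = np.zeros((3, 3), dtype=int)
identity[1, 1] = 1

# The same gradient kernel with x = 1: edges superimposed on the source.
highlight = extract + identity
```

The only difference between the two kernels is the central coefficient, which is exactly what adds the source image back into the filtered result.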
Edge Thickness
The larger the kernel, the thicker the edges. The following image illustrates gradient west-east 3 × 3.
The following image illustrates gradient west-east 5 × 5.
Finally, the following image illustrates gradient west-east 7 × 7.
Laplacian Filters
A Laplacian filter highlights the variation of the light intensity surrounding a pixel. The filter extracts the contour of objects and outlines details. Unlike the gradient filter, it is omnidirectional.
Given the following source image:
A Laplacian filter extracts contours to produce the following image.
A Laplacian filter highlights contours to produce the following image.
Kernel Definition
The Laplacian convolution filter is a second-order derivative, and its kernel uses the following model:
where a, b, c, and d are integers.
The Laplacian filter has two different effects, depending on whether the central coefficient x is equal to or greater than the sum of the absolute values of the outer coefficients.
Contour Extraction and Highlighting
If the central coefficient is equal to this sum (x = 2(|a| + |b| + |c| + |d|)), the Laplacian filter extracts the pixels where significant variations of light intensity are found. The presence of sharp edges, boundaries between objects, modification in the texture of a background, noise, or other effects can cause these variations. The transformed image contains white contours on a black background.
Notice the following source image, Laplacian kernel, and filtered image.
If the central coefficient is greater than this sum (x > 2(|a| + |b| + |c| + |d|)), the Laplacian filter detects the same variations as mentioned above, but superimposes them over the source image. The transformed image looks like the source image, with all significant variations of the light intensity highlighted.
Notice that the Laplacian #4 kernel can be decomposed as follows:
This equation indicates that the Laplacian #4 kernel adds the contours extracted by the Laplacian #3 kernel to the source image.
Laplacian #4 = Laplacian #3 + Source Image.
For example, if the central coefficient of the Laplacian #4 kernel is 10, the Laplacian filter adds the contours extracted by the Laplacian #3 kernel to twice the source image, and so forth. A greater central coefficient corresponds to less-prominent contours and details highlighted by the filter.
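The same kernel arithmetic applies here. The kernel values below are an assumption (the common 3 × 3 all-neighbors Laplacian, whose center 8 equals the sum of the absolute outer coefficients); the actual Laplacian #3 and #4 contents are not reproduced in this text:

```python
import numpy as np

# Assumed contour-extraction Laplacian: center x = 2(|a|+|b|+|c|+|d|) = 8.
extract = np.array([[-1, -1, -1],
                    [-1,  8, -1],
                    [-1, -1, -1]])

# Identity kernel: all weight on the central pixel.
identity = np.zeros((3, 3), dtype=int)
identity[1, 1] = 1

# Center 9 > 8: contours superimposed on the source image.
highlight = extract + identity

# Center 10: contours added to twice the source image.
highlight2 = extract + 2 * identity
```

Each unit added to the central coefficient adds one more copy of the source image to the extracted contours, which is why a larger center de-emphasizes the contours relative to the image.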
Contour Thickness
Larger kernels correspond to thicker contours. The following image is a Laplacian 3 × 3.
The following image is a Laplacian 5 × 5.
The following image is a Laplacian 7 × 7.
Smoothing Filter
A smoothing filter attenuates the variations of light intensity in the neighborhood of a pixel. It smooths the overall shape of objects, blurs edges, and removes details.
Given the following source image,
a smoothing filter produces the following image.
Kernel Definition
A smoothing convolution filter is an averaging filter whose kernel uses the following model:
where a, b, c, and d are positive integers, and x = 0 or 1.
Because all the coefficients in a smoothing kernel are positive, each central pixel becomes a weighted average of its neighbors. The stronger the weight of a neighboring pixel, the more influence it has on the new value of the central pixel.
For a given set of coefficients (a, b, c, d), a smoothing kernel with a central coefficient equal to 0 (x = 0) has a stronger blurring effect than a smoothing kernel with a central coefficient equal to 1 (x = 1).
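A small numeric check of this claim, assuming the simplest uniform coefficients (a = b = c = d = 1); the helper name is illustrative:

```python
import numpy as np

def blur_at_center(kernel, neighborhood):
    """New central pixel value: weighted average under the kernel."""
    return (kernel * neighborhood).sum() / kernel.sum()

# Same outer coefficients, different central coefficient x.
smooth_x0 = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])  # x = 0
smooth_x1 = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]])  # x = 1

# A bright pixel on a flat background.
patch = np.full((3, 3), 100.0)
patch[1, 1] = 200.0

v0 = blur_at_center(smooth_x0, patch)  # pixel's own value ignored: 100.0
v1 = blur_at_center(smooth_x1, patch)  # pixel keeps some weight: ~111.1
```

With x = 0 the bright pixel is replaced entirely by its neighbors' average, so the blurring is stronger than with x = 1, where the pixel retains 1/9 of its own value.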
Notice the following smoothing kernels and filtered images. A larger kernel size corresponds to a stronger smoothing effect.
Gaussian Filters
A Gaussian filter attenuates the variations of light intensity in the neighborhood of a pixel. It smooths the overall shape of objects and attenuates details. It is similar to a smoothing filter, but its blurring effect is more subdued.
Given the following source image,
a Gaussian filter produces the following image.
Kernel Definition
A Gaussian convolution filter is an averaging filter, and its kernel uses the model
where a, b, c, and d are positive integers, and x > 1.
Because all the coefficients in a Gaussian kernel are positive, each pixel becomes a weighted average of its neighbors. The stronger the weight of a neighboring pixel, the more influence it has on the new value of the central pixel.
Unlike a smoothing kernel, the central coefficient of a Gaussian filter is greater than 1. Therefore the original value of a pixel is multiplied by a weight greater than the weight of any of its neighbors. As a result, a greater central coefficient corresponds to a more subtle smoothing effect. A larger kernel size corresponds to a stronger smoothing effect.
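The effect of the central coefficient can be checked numerically. The kernel below assumes the common 3 × 3 Gaussian-style coefficients (outer ring 1-2-1); only the central coefficient varies:

```python
import numpy as np

def blur_at_center(kernel, neighborhood):
    """New central pixel value: weighted average under the kernel."""
    return (kernel * neighborhood).sum() / kernel.sum()

def gaussian_like(x):
    # Fixed outer coefficients; x is the central coefficient (x > 1).
    return np.array([[1, 2, 1], [2, x, 2], [1, 2, 1]], dtype=float)

# A bright pixel on a flat background.
patch = np.full((3, 3), 100.0)
patch[1, 1] = 200.0

v_small = blur_at_center(gaussian_like(4), patch)   # standard center: 125.0
v_large = blur_at_center(gaussian_like(16), patch)  # heavy center: ~157.1
```

The larger the central coefficient, the closer the result stays to the pixel's original value of 200, i.e. the more subtle the smoothing.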
Nonlinear Filters
A nonlinear filter replaces each pixel value with a nonlinear function of its surrounding pixels. Like the linear filters, the nonlinear filters operate on a neighborhood.
Nonlinear Prewitt Filter
The nonlinear Prewitt filter is a highpass filter that extracts the outer contours of objects. It highlights significant variations of the light intensity along the vertical and horizontal axes.
Each pixel is assigned the maximum value of its horizontal and vertical gradient obtained with the following Prewitt convolution kernels:
Nonlinear Sobel Filter
The nonlinear Sobel filter is a highpass filter that extracts the outer contours of objects. It highlights significant variations of the light intensity along the vertical and horizontal axes.
Each pixel is assigned the maximum value of its horizontal and vertical gradient obtained with the following Sobel convolution kernels:
As opposed to the Prewitt filter, the Sobel filter assigns a higher weight to the horizontal and vertical neighbors of the central pixel.
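A minimal sketch of both nonlinear filters, assuming the standard 3 × 3 Prewitt and Sobel masks and reading "maximum value of its horizontal and vertical gradient" as the maximum of the absolute responses; Python/NumPy and the function name are illustrative only:

```python
import numpy as np

# Standard horizontal and vertical kernels for each method.
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
PREWITT_Y = PREWITT_X.T
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def nonlinear_edge(image, kx, ky):
    """Assign each interior pixel the larger of the absolute values of its
    horizontal and vertical gradient responses."""
    out = np.zeros_like(image, dtype=float)
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            region = image[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = max(abs((kx * region).sum()),
                            abs((ky * region).sum()))
    return out

# A vertical step edge: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 100.0
edges_sobel = nonlinear_edge(img, SOBEL_X, SOBEL_Y)
edges_prewitt = nonlinear_edge(img, PREWITT_X, PREWITT_Y)
```

On the same step edge, the Sobel response (4 × the step height) is larger than the Prewitt response (3 × the step height), reflecting the extra weight Sobel gives to the neighbors directly beside the central pixel.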
Nonlinear Prewitt and Nonlinear Sobel Example
This example uses the following source image.
A nonlinear Prewitt filter produces the following image.
A nonlinear Sobel filter produces the following image.
Both filters outline the contours of the objects. Because of the different convolution kernels they combine, the nonlinear Prewitt has the tendency to outline curved contours while the nonlinear Sobel extracts square contours. This difference is noticeable when observing the outlines of isolated pixels.
Nonlinear Gradient Filter
The nonlinear gradient filter outlines contours where an intensity variation occurs along the vertical axis.
Roberts Filter
The Roberts filter outlines the contours that highlight pixels where an intensity variation occurs along the diagonal axes.
Differentiation Filter
The differentiation filter produces continuous contours by highlighting each pixel where an intensity variation occurs between itself and its three upper-left neighbors.
Sigma Filter
The Sigma filter is a highpass filter. It outlines contours and details by setting pixels to the mean value found in their neighborhood, if their deviation from this value is not significant. The example on the left shows an image before filtering. The example on the right shows the image after filtering.
Lowpass Filter
The lowpass filter reduces details and blurs edges by setting pixels to the mean value found in their neighborhood, if their deviation from this value is large. The example on the left shows an image before filtering. The example on the right shows the image after filtering.
Median Filter
The median filter is a lowpass filter. It assigns to each pixel the median value of its neighborhood, effectively removing isolated pixels and reducing detail. However, the median filter does not blur the contour of objects.
You can implement the median filter by performing an Nth order filter and setting the order to (f² – 1)/2 for a given filter size of f × f.
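This equivalence is easy to sketch on a single neighborhood; the helper names are illustrative:

```python
def nth_order_filter_value(neighborhood, n):
    """Nth order filter output: the value of rank n (0-based) in the
    sorted neighborhood."""
    return sorted(neighborhood)[n]

def median_filter_value(neighborhood, f):
    """Median filter as an Nth order filter with order (f^2 - 1) / 2."""
    return nth_order_filter_value(neighborhood, (f * f - 1) // 2)

# A 3 x 3 neighborhood with one impulse-noise outlier.
neigh = [10, 10, 10, 10, 255, 10, 10, 10, 10]
center = median_filter_value(neigh, 3)  # order (9 - 1) / 2 = 4
```

The outlier ends up at the top of the sorted series, so the median (rank 4 of 9) discards it entirely, which is why the median filter removes isolated pixels without blurring contours.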
Nth Order Filter
The Nth order filter is an extension of the median filter. It assigns to each pixel the Nth value of its neighborhood when they are sorted in increasing order. The value N specifies the order of the filter, which you can use to moderate the effect of the filter on the overall light intensity of the image. A lower order corresponds to a darker transformed image; a higher order corresponds to a brighter transformed image.
To see the effect of the Nth order filter, notice the example of an image with bright objects and a dark background. When viewing this image with the Gray palette, the objects have higher gray-level values than the background.
| For a Given Filter Size f × f | Example of a Filter Size 3 × 3 |
|---|---|
| Order 0 (smooths image, erodes bright objects) | Order 0 |
| Order (f² – 1)/2 (equivalent to a median filter) | Order 4 |
| Order f² – 1 (smooths image, dilates bright objects) | Order 8 |
In-Depth Discussion
If P(i, j) represents the intensity of the pixel P with the coordinates (i, j), the pixels surrounding P(i, j) can be indexed as follows (in the case of a 3 × 3 matrix):
| P(i – 1, j – 1) | P(i, j – 1) | P(i + 1, j – 1) |
| P(i – 1, j) | P(i, j) | P(i + 1, j) |
| P(i – 1, j + 1) | P(i, j + 1) | P(i + 1, j + 1) |
A linear filter assigns to P(i, j) a value that is a linear combination of its surrounding values.
For example:
P(i, j) = P(i, j – 1) + P(i – 1, j) + 2P(i, j) + P(i + 1, j) + P(i, j + 1)

A nonlinear filter assigns to P(i, j) a value that is not a linear combination of the surrounding values.
For example:
P(i, j) = max(P(i – 1, j – 1), P(i + 1, j – 1), P(i – 1, j + 1), P(i + 1, j + 1))
In the case of a 5 × 5 neighborhood, the i and j indexes vary from –2 to 2. The series of pixels that includes P(i, j) and its surrounding pixels is annotated as P(n, m).
Linear Filters
For each pixel P(i, j) in an image, where i and j represent the coordinates of the pixel, the convolution kernel is centered on P(i, j). Each pixel masked by the kernel is multiplied by the coefficient placed on top of it. P(i, j) becomes the sum of these products divided by a normalization factor: the sum of the kernel coefficients, or 1, whichever is greater.
In the case of a 3 × 3 neighborhood, the pixels surrounding P(i, j) and the coefficients of the kernel, K, can be indexed as follows:
| P(i – 1, j – 1) | P(i, j – 1) | P(i + 1, j – 1) |
| P(i – 1, j) | P(i, j) | P(i + 1, j) |
| P(i – 1, j + 1) | P(i, j + 1) | P(i + 1, j + 1) |
| K(i – 1, j – 1) | K(i, j – 1) | K(i + 1, j – 1) |
| K(i – 1, j) | K(i, j) | K(i + 1, j) |
| K(i – 1, j + 1) | K(i, j + 1) | K(i + 1, j + 1) |
The pixel P(i, j) is given the value (1 / N)∑ K(a, b)P(a, b), with a ranging from (i – 1) to (i + 1), and b ranging from (j – 1) to (j + 1). N is the normalization factor, equal to ∑ K(a, b) or 1, whichever is greater.
If the new value P(i, j) is negative, it is set to 0. If the new value P(i, j) is greater than 255, it is set to 255 (in the case of 8-bit resolution).
The greater the absolute value of a coefficient K(a, b), the more the pixel P(a, b) contributes to the new value of P(i, j). If a coefficient K(a, b) is 0, the neighbor P(a, b) does not contribute to the new value of P(i, j) (notice that P(a, b) might be P(i, j) itself).
If the convolution kernel is:
then P(i, j) = (–2P(i – 1, j) + P(i, j) + 2P(i + 1, j))
If the convolution kernel is:
then P(i, j) = (P(i, j – 1) + P(i – 1, j) + P(i + 1, j) + P(i, j + 1))
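The formula above, including the fallback normalization and the 8-bit clamping, can be sketched as follows; Python/NumPy and the function name are illustrative only:

```python
import numpy as np

def apply_kernel(image, K):
    """P(i, j) = (1/N) * sum of K(a, b) * P(a, b), where N is the sum of
    the kernel coefficients or 1, whichever is greater; results are
    clamped to the 8-bit range [0, 255]. Border pixels are left as-is."""
    N = max(K.sum(), 1)
    out = image.astype(float).copy()
    h, w = image.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            s = (K * image[i - 1:i + 2, j - 1:j + 2]).sum() / N
            out[i, j] = min(max(s, 0), 255)
    return out

# A vertical gradient kernel: coefficients sum to 0, so N falls back to 1.
K = np.array([[0, -1, 0],
              [0,  0, 0],
              [0,  1, 0]])

# A horizontal step edge: dark top half, bright bottom half.
img = np.zeros((4, 4))
img[2:, :] = 200.0
out_down = apply_kernel(img, K)    # positive response at the edge
out_up = apply_kernel(img, -K)     # negative response, clipped to 0
```

Because the gradient kernel's coefficients sum to zero, the N-or-1 rule avoids a division by zero, and the clamping step maps the negative response of the reversed kernel to black.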
Nonlinear Prewitt Filter
Nonlinear Sobel Filter
Nonlinear Gradient Filter
The new value of a pixel becomes the maximum absolute value between its deviation from the upper neighbor and the deviation of its two left neighbors.
P(i, j) = max[|P(i, j – 1) – P(i, j)|, |P(i – 1, j – 1) – P(i – 1, j)|]
Roberts Filter
The new value of a pixel becomes the maximum absolute value between the deviation of its upper-left neighbor and the deviation of its two other neighbors.
P(i, j) = max[|P(i – 1, j – 1) – P(i, j)|, |P(i, j – 1) – P(i – 1, j)|]
Differentiation Filter
The new value of a pixel becomes the absolute value of its maximum deviation from its upper-left neighbors.
P(i, j) = max[|P(i – 1, j) – P(i, j)|, |P(i – 1, j – 1) – P(i, j)|, |P(i, j – 1) – P(i – 1, j)|]
Sigma Filter
Given M, the mean value of P(i, j) and its neighbors, and S, their standard deviation, each pixel P(i, j) is set to the mean value M if it falls inside the range [M – S, M + S].
Lowpass Filter
Given M, the mean value of P(i, j) and its neighbors, and S, their standard deviation, each pixel P(i, j) is set to the mean value M if it falls outside the range [M – S, M + S].
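The Sigma and Lowpass rules differ only in which side of the [M – S, M + S] range triggers the replacement. A single-neighborhood sketch (the function name and the flat-list neighborhood layout are illustrative):

```python
import numpy as np

def sigma_and_lowpass(values):
    """Given a pixel's neighborhood as a flat list with the central pixel
    at the middle index, return the new central value under the Sigma
    filter and under the Lowpass filter."""
    values = np.asarray(values, dtype=float)
    center = values[len(values) // 2]
    M, S = values.mean(), values.std()
    inside = (M - S) <= center <= (M + S)
    # Sigma (highpass): replace with the mean only if the deviation is
    # not significant; outliers and edges are kept.
    sigma_out = M if inside else center
    # Lowpass: replace with the mean only if the deviation is large;
    # typical pixels are kept.
    lowpass_out = center if inside else M
    return sigma_out, lowpass_out

# A 3 x 3 neighborhood whose central pixel deviates strongly from the mean.
s, l = sigma_and_lowpass([10, 10, 10, 10, 200, 10, 10, 10, 10])
```

The outlier pixel survives the Sigma filter (preserving the detail) but is flattened to the neighborhood mean by the Lowpass filter, which is the complementary behavior the two definitions describe.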
Median Filter
P(i, j) = median value of the series [P(n, m)]
Nth Order Filter
P(i, j) = Nth value in the series [P(n, m)]
where the P(n, m) are sorted in increasing order.
The following example uses a 3 × 3 neighborhood.
The following table shows the new output value of the central pixel for each Nth order value.
| Nth Order | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| New Pixel Value | 4 | 5 | 5 | 6 | 8 | 9 | 10 | 12 | 13 |
Notice that for a given filter size f, the Nth order can range from 0 to f² – 1. For example, in the case of a filter size 3, the Nth order ranges from 0 to 8 (3² – 1).
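Using the sorted series from the table above, the order-to-output mapping is a simple indexing operation:

```python
# The sorted series of the nine neighborhood values from the table above.
series = [4, 5, 5, 6, 8, 9, 10, 12, 13]

def nth_order(series, n):
    """Nth order filter output: the value of rank n (0-based) in the
    sorted neighborhood series."""
    return sorted(series)[n]

lowest = nth_order(series, 0)   # order 0: the minimum, 4
median = nth_order(series, 4)   # order (3*3 - 1) // 2 = 4: the median, 8
highest = nth_order(series, 8)  # order 8: the maximum, 13
```

Order 0 and order f² – 1 reduce to minimum and maximum filters (erosion and dilation of bright objects), with the median at the midpoint.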