Convolution Kernels
- Updated 2025-11-25
A convolution kernel defines a 2D filter that you can apply to a grayscale image. The kernel is a 2D structure whose coefficients define the characteristics of the filter it represents. In a typical filtering operation, these coefficients determine the filtered value of each pixel in the image. Vision provides a set of convolution kernels that you can use to perform different types of filtering operations on an image. You can also define your own convolution kernels, thus creating custom filters.
When to Use
Use a convolution kernel whenever you want to filter a grayscale image. Filtering enhances the quality of an image to meet the requirements of your application. Use filters to smooth an image, remove noise, enhance edge information, and perform other related operations.
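The filter types mentioned above correspond to different coefficient patterns. As a rough illustration, here are the standard textbook versions of three common 3 × 3 kernels, written as plain Python nested lists; these are generic examples, not Vision's predefined kernel set.

```python
# Illustrative 3x3 kernels (textbook coefficients, not Vision's
# predefined set; shown as plain nested lists of weights).

# Smoothing (box blur): every pixel in the neighborhood contributes equally.
SMOOTH = [[1/9, 1/9, 1/9],
          [1/9, 1/9, 1/9],
          [1/9, 1/9, 1/9]]

# Gaussian-like smoothing: closer neighbors carry more weight.
GAUSSIAN = [[1/16, 2/16, 1/16],
            [2/16, 4/16, 2/16],
            [1/16, 2/16, 1/16]]

# Laplacian-based edge enhancement: accentuates intensity changes
# while preserving the overall brightness (coefficients sum to 1).
LAPLACIAN = [[ 0, -1,  0],
             [-1,  5, -1],
             [ 0, -1,  0]]
```

Note that the smoothing kernels sum to 1, so filtering preserves the average brightness of the image; kernels whose coefficients sum to more or less than 1 brighten or darken the result.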
Concepts
A convolution kernel defines how a filter alters the pixel values in a grayscale image. The filtered value of a pixel is a weighted combination of its original value and the values of its neighboring pixels: each kernel coefficient defines the contribution of one neighboring pixel to the pixel being updated. The kernel size determines how many neighboring pixels are considered during the filtering process.
In the case of a 3 × 3 kernel, illustrated in figure A, the value of the central pixel (shown in black) is derived from the values of its eight surrounding neighbors (shown in gray). A 5 × 5 kernel, shown in figure B, specifies 24 neighbors, a 7 × 7 kernel specifies 48 neighbors, and so forth.
(Figure: kernels of different sizes positioned over an image; labels: Kernel, Image)
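The weighted combination described above can be sketched in a few lines of Python. This is a minimal illustration for a single interior pixel, not Vision's API; `filter_pixel` is a hypothetical helper name.

```python
def filter_pixel(image, kernel, row, col):
    """Weighted sum of the neighborhood under the kernel, centered on
    (row, col). A hypothetical helper for interior pixels only."""
    k = len(kernel) // 2          # kernel half-size: 1 for 3x3, 2 for 5x5
    total = 0.0
    for dr in range(-k, k + 1):
        for dc in range(-k, k + 1):
            total += kernel[dr + k][dc + k] * image[row + dr][col + dc]
    return total

# 5x5 test image with values 0..24, row-major.
img = [[r * 5 + c for c in range(5)] for r in range(5)]
box = [[1/9] * 3 for _ in range(3)]
# A box filter replaces the center pixel with its 3x3 neighborhood mean,
# so filter_pixel(img, box, 2, 2) is (approximately) 12.0.
```

With a 3 × 3 box kernel, each of the nine pixels under the kernel contributes equally, so the result is simply the neighborhood mean.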
A filtering operation on an image involves moving the kernel from the top left pixel of the image to the bottom right pixel. At each pixel, the new value is computed using the values that lie under the kernel, as shown in the following illustration.
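The full sliding operation can be sketched as a pair of nested loops over the image, applying the weighted sum at each position. This is a minimal Python sketch, not Vision's implementation; for simplicity it leaves border pixels unchanged rather than using a border region.

```python
def convolve(image, kernel):
    """Slide the kernel across the image, top-left to bottom-right,
    computing each interior pixel as a weighted sum of the values
    under the kernel. Border pixels keep their original values in
    this sketch."""
    k = len(kernel) // 2
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]   # copy, so borders stay intact
    for r in range(k, h - k):
        for c in range(k, w - k):
            out[r][c] = sum(kernel[dr + k][dc + k] * image[r + dr][c + dc]
                            for dr in range(-k, k + 1)
                            for dc in range(-k, k + 1))
    return out
```

As a sanity check, a kernel whose only nonzero coefficient is a 1 at the center leaves the image unchanged, since each pixel's weighted sum reduces to its own value.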
When computing the filtered values of the pixels that lie along the border of the image (the first row, last row, first column, or last column of pixels), part of the kernel falls outside the image. For example, the following figure shows that one row and one column of a 3 × 3 kernel fall outside the image when computing the value of the top left pixel.
(Figure: a 3 × 3 kernel at the top left corner of the image, with one row and one column falling in the border region; labels: Border, Image, Kernel)
Vision automatically allocates a border region when you create an image. The default border region is three pixels deep and contains pixel values of 0. You can also define a custom border region and specify the pixel values within it. The size of the border region should be greater than or equal to half the number of rows or columns in your kernel. Filtering results along the border of an image are less reliable because some of the neighbors needed to compute them fall outside the image data, so the filter operates on fewer pixels there than elsewhere in the image. For more information about border regions, refer to the digital images section.
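The idea of a constant-valued border region can be illustrated by padding the image before filtering. This is a minimal sketch, not how Vision allocates its border internally; `add_border` is a hypothetical helper, and the default fill value of 0 mirrors Vision's default border region.

```python
def add_border(image, kernel_size, value=0):
    """Surround the image with a border at least kernel_size // 2
    pixels deep on every side, filled with a constant value (0 by
    default). Half the kernel size is exactly the depth needed so
    the kernel never falls outside the padded data."""
    pad = kernel_size // 2
    w = len(image[0]) + 2 * pad
    padded = [[value] * w for _ in range(pad)]       # top border rows
    for row in image:
        padded.append([value] * pad + row + [value] * pad)
    padded += [[value] * w for _ in range(pad)]      # bottom border rows
    return padded

img = [[1, 2],
       [3, 4]]
add_border(img, 3)
# -> [[0, 0, 0, 0],
#     [0, 1, 2, 0],
#     [0, 3, 4, 0],
#     [0, 0, 0, 0]]
```

A 3 × 3 kernel needs a 1-pixel border, a 5 × 5 kernel a 2-pixel border, and so on, which is why the border region must be at least half as deep as the kernel.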