NI Vision Concepts Help


Geometric Matching Technique

  • Updated 2023-05-01

Searching and matching algorithms, such as the pattern matching algorithm or geometric matching algorithm, find regions in the inspection image that contain information similar to the information in the template. This information, once synthesized, becomes the set of features that describes the template image. Pattern matching and geometric matching algorithms use these sets of features to find matches in inspection images.

Pattern matching algorithms use the pixel intensity information present in the template image as the primary feature for matching. Geometric matching algorithms use the geometric information present in the template image as the primary feature for matching. Geometric features can range from low-level features, such as edges or curves, to higher-level features, such as the geometric shapes made by the curves in the image.

The geometric matching process consists of two stages: learning and matching. During the learning stage, the geometric matching algorithm extracts geometric information from the template image. The algorithm organizes and stores the information and the spatial relationships between these features in a manner that facilitates faster searching in the inspection image. In NI Vision, the information learned during this stage is stored as part of the template image.

During the matching stage, the geometric matching algorithm extracts geometric information from the inspection image that corresponds to the information in the template image. Then, the algorithm finds matches by locating regions in the inspection image where features align themselves in spatial patterns similar to the spatial patterns of the features in the template.

NI Vision includes two geometric matching methods. Both geometric matching techniques rely on curves extracted from the image to perform the matching. The two geometric matching techniques differ in how the curve information is used to perform the matching. The edge-based geometric matching method computes the gradient value of the edge at each point along the curves found in the image and uses the gradient value and the position of the point relative to the center of the template to perform the matching. The feature-based geometric matching method extracts geometric features from the curves and uses these geometric features to perform the matching.

The following figure shows the information from the template image that the geometric matching algorithm may use as matching features. Figure A shows the curves that correspond to edges in the template image. These curves form the underlying information that is used by the edge-based geometric matching technique. Figure B shows higher-level shape features that the feature-based geometric algorithm uses for matching. Refer to the Choosing The Right Geometric Matching Technique section to select the best geometric matching method for your application.

1  Curves
2  Circular Features
3  Rectangular Features
4  Linear Features
5  Corners

Curve Extraction

A curve is a set of edge points that are connected to form a continuous contour. Curves typically represent the boundary of the part in the image. In geometric matching, curves are the underlying information used to represent a template and to match the template in an inspection image. This section describes how curves are extracted from an image.

The curve extraction process consists of three steps: finding curve seed points, tracing the curve, and refining the curves.

Finding Curve Seed Points

A seed point is a point on a curve from which tracing begins. To qualify as a seed point, a pixel cannot be part of an already existing curve. Also, the pixel must have an edge contrast greater than the user-defined Edge Threshold. The edge contrast at a pixel is computed as a function of the intensity value at that pixel and the intensities of its neighboring pixels. If P(i, j) represents the intensity of the pixel P with the coordinates (i, j), the edge contrast at (i, j) is defined as

√((P(i – 1, j) – P(i + 1, j))² + (P(i, j – 1) – P(i, j + 1))²)

For an 8-bit image, the edge contrast may vary from 0 to 360.
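
The edge-contrast computation can be sketched in a few lines of Python. This is an illustrative sketch, not the NI Vision implementation; it assumes the contrast is the square root of the summed squared differences of opposing neighbors, which is consistent with the 0-to-360 range quoted above for 8-bit images.

```python
import math

def edge_contrast(img, i, j):
    """Edge contrast at pixel (i, j) from its four axial neighbors.

    `img` is a 2D list of intensities indexed as img[i][j]; the indexing
    convention is an assumption of this sketch.
    """
    dx = img[i - 1][j] - img[i + 1][j]
    dy = img[i][j - 1] - img[i][j + 1]
    return math.sqrt(dx * dx + dy * dy)

# A maximal 8-bit step in both directions gives the extreme of the
# 0-to-360 range cited above (255 * sqrt(2) ≈ 360.6).
frame = [[255, 255, 255],
         [255, 128, 0],
         [0,   0,   0]]
print(round(edge_contrast(frame, 1, 1), 1))  # ≈ 360.6
```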

To increase the speed of the curve extraction process, the algorithm visits only a limited number of pixels in the image to determine if the pixel is a valid seed point. The number of pixels to visit is based on the values that the user provides for the Row Step and Column Step parameters. The higher these values are, the faster the algorithm searches for seed points. However, to make sure that the algorithm finds a seed point on all of the curves, Row Step must be smaller than the smallest curve in the y direction, and Column Step must be smaller than the smallest curve in the x direction.

The algorithm starts by scanning the image rows from the top left corner. Starting at the first pixel, the algorithm computes the edge contrast of the pixel. If the edge contrast is greater than the given threshold, the curve is traced from this point. If the contrast is lower than the threshold, or if the pixel already belongs to a previously traced curve, the algorithm analyzes the next pixel in the row to determine if it qualifies as a seed point. This process is repeated until the end of the current row is reached. The algorithm then skips Row Step rows and repeats the process.

After scanning all of the rows, the algorithm scans the image columns to locate seed points. The algorithm starts at the top left corner and analyzes each column that is Column Step apart.
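
The two scanning passes can be sketched as follows. This is a simplified sketch: in the actual algorithm, tracing begins as soon as a seed is found, and pixels on already-traced curves are excluded; here `contrast` stands in for the edge-contrast function and the sketch only collects candidate seeds.

```python
def find_seed_points(img, edge_threshold, row_step, col_step, contrast):
    """Collect candidate seed points by scanning every row_step-th row,
    then every col_step-th column, keeping pixels whose edge contrast
    exceeds the threshold."""
    h, w = len(img), len(img[0])
    seeds = []
    # Row scan: visit every row_step-th row, each interior pixel.
    for i in range(1, h - 1, row_step):
        for j in range(1, w - 1):
            if contrast(img, i, j) > edge_threshold:
                seeds.append((i, j))
    # Column scan: visit every col_step-th column.
    for j in range(1, w - 1, col_step):
        for i in range(1, h - 1):
            if contrast(img, i, j) > edge_threshold and (i, j) not in seeds:
                seeds.append((i, j))
    return seeds
```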

Tracing the Curve

When it finds a seed point, the curve extraction algorithm traces the rest of the curve. Tracing is the process by which a pixel that neighbors the last pixel on the curve is added to the curve if it has the strongest edge contrast in the neighborhood and its edge contrast is greater than the acceptable edge threshold for a curve point. This process is repeated until no more pixels can be added to the curve in the current direction. The algorithm then returns to the seed point and tries to trace the curve in the opposite direction. The following figure illustrates this process.

1  Scan Lines
2  Row Step
3  Column Step
4  Curve Seeds
5  Curves
Note: To simplify the figure, Row Step and Column Step are not smaller than the smallest feature.
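
The tracing loop described above can be sketched as follows. This is a simplified sketch: the shipped algorithm also manages the tracing direction explicitly, whereas this version simply restarts at the seed and grows wherever untraced high-contrast neighbors remain.

```python
def trace_curve(img, seed, edge_threshold, contrast):
    """Grow a curve from `seed` by repeatedly adding the 8-neighbor with
    the strongest edge contrast above the threshold, then retrying from
    the seed for the opposite direction."""
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]
    curve = [seed]
    on_curve = {seed}
    for _ in range(2):          # one pass per tracing direction
        current = seed
        while True:
            best, best_c = None, edge_threshold
            for di, dj in neighbors:
                ni, nj = current[0] + di, current[1] + dj
                if (0 < ni < len(img) - 1 and 0 < nj < len(img[0]) - 1
                        and (ni, nj) not in on_curve):
                    c = contrast(img, ni, nj)
                    if c > best_c:
                        best, best_c = (ni, nj), c
            if best is None:    # no qualifying neighbor left
                break
            curve.append(best)
            on_curve.add(best)
            current = best
    return curve
```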

Refining the Curve

During the final stage of curve extraction, the algorithm performs the following tasks to refine the extracted curves:

  • Combines curves into one large curve if their end points are close together.
  • Closes a curve if the end points of the curve are within a user-defined distance of each other.
  • Removes curves that fall below a certain size threshold defined by the user.
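
These refinement steps can be sketched as follows. All parameter names are illustrative, and the merge test here only joins one curve's tail to another curve's head; a fuller version would also consider reversed curves.

```python
def refine_curves(curves, merge_dist=2.0, close_dist=2.0, min_size=3):
    """Merge curves whose end points nearly touch, mark near-loops as
    closed, and discard curves below a size threshold."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    # Merge: append curve b to curve a when a's tail meets b's head.
    merged = True
    while merged:
        merged = False
        for i, a in enumerate(curves):
            for j, b in enumerate(curves):
                if i != j and dist(a[-1], b[0]) <= merge_dist:
                    curves[i] = a + b
                    del curves[j]
                    merged = True
                    break
            if merged:
                break

    refined = []
    for c in curves:
        if len(c) < min_size:       # remove curves below the size threshold
            continue
        closed = len(c) > 2 and dist(c[0], c[-1]) <= close_dist
        refined.append({"points": c, "closed": closed})
    return refined
```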

Edge-Based Geometric Matching

This section describes the learning and matching stages of the edge-based geometric matching technique. The edge-based technique utilizes the generalized Hough transform method for matching. The generalized Hough transform is an extension of the Hough transform to detect arbitrary shapes.1

Learning

The learning stage consists of two steps: edge point extraction and R-table generation.

Edge Point Extraction

During the edge point extraction stage, the algorithm detects curves in the image and computes the gradient value (ø) at each edge point along the contours. The gradient value specifies the orientation of the tangential line at each point along the contour.

R-Table Generation

The generalized Hough transform uses a lookup table called an R-table to store the shape of the object. The R-table allows the generalized Hough transform to represent any arbitrary shape and does not require a parametric description of the object.

The algorithm uses the following steps to compute the R-Table of a given shape (specified by the curves that are detected along the boundary of the shape).

  1. The algorithm selects the center of the template image as the reference point (xc, yc).
  2. For each point (xi, yi) along the curves in the template image, the algorithm calculates the distance and orientation (ri, θi) from the reference point as shown in the figure below:

  3. The algorithm stores the (ri, θi) value for each point in a R-table as a function of ø, as shown in the following table:
    Gradient Value (ø)    (r, θ) Values
    ø1                    (r1, θ1), (r4, θ4)
    ø2                    (r2, θ2), (r10, θ10)
    øn                    (rn, θn), (ri, θi)

After the algorithm adds all points along the curves in the template image, the R-table represents the information that is learned from the template. The R-table can be used to regenerate the contour edge points and gradient angles at any point in the image during the matching phase.
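
The R-table construction can be sketched as follows. The number of gradient bins, the radian convention, and the point format are assumptions of this sketch, not NI Vision parameters.

```python
import math
from collections import defaultdict

def build_r_table(edge_points, reference, n_bins=36):
    """Build an R-table from (x, y, phi) edge points, where phi is the
    gradient angle in radians. Each entry maps a quantized gradient value
    to the (r, theta) offsets toward the reference point."""
    xc, yc = reference
    table = defaultdict(list)
    for x, y, phi in edge_points:
        r = math.hypot(xc - x, yc - y)           # distance to reference
        theta = math.atan2(yc - y, xc - x)       # orientation to reference
        bin_index = int((phi % (2 * math.pi)) / (2 * math.pi) * n_bins) % n_bins
        table[bin_index].append((r, theta))
    return table
```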

An R-table stores the shift-invariant representation of the template object. Because each combination of scale and rotation requires a unique R-table, a template that allows variance in scale and rotation can occupy a large amount of memory. To reduce the size of the template and improve the speed of the matching process, NI Vision may sample the template image before computing the R-tables. By default, the software automatically determines the sampling factor. Use the advanced learn options to manually specify a sampling factor.

Matching

The matching stage consists of three steps. The first step is edge point extraction, which is similar to the edge point extraction that occurs during the learning stage. The final two steps are generalized Hough matching and match refinement.

Edge Point Extraction

The edge points in the image are detected using the curve extraction process described in the learning section. If the size of the template image was reduced by sampling, then the inspection image is reduced by the same sampling factor before the curves are detected. The gradient value is computed and stored at each edge point along the detected curves.

Generalized Hough Matching

The matching process begins after the algorithm finds edge points and their gradient values in the inspection image. The matching process consists of the following steps:

  1. The algorithm creates an accumulator, which stores candidate match locations in the inspection image.
  2. The algorithm performs the following actions for each edge point (x, y):
    1. The algorithm uses the gradient value ø to index into the R-table and retrieve all the (r, θ) values.
    2. The algorithm computes the candidate reference point for each (r, θ) value as follows:

      xc = x + r cos(θ)
      yc = y + r sin(θ)

    3. The algorithm increases the count in the accumulator for the location of the candidate reference point.
  3. The algorithm finds the local peaks in the accumulator. These peaks represent possible match locations.
  4. If matching for variation in rotation or scale, the algorithm builds an accumulator for each possible combination of rotation and scale, and performs steps 1–3 for each accumulator.
  5. The algorithm processes the peaks in each accumulator to find the best matches.
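
The voting steps above can be sketched for the translation-only case; rotation and scale would add one accumulator per (angle, scale) combination, as noted in step 4. The R-table layout and bin count match the learning sketch's assumptions and are not NI Vision internals.

```python
import math
from collections import defaultdict

def hough_match(edge_points, r_table, n_bins=36):
    """Each edge point (x, y, phi) looks up its quantized gradient in the
    R-table and casts one vote per (r, theta) entry at the candidate
    reference point. Returns the accumulator cell with the most votes."""
    accumulator = defaultdict(int)
    for x, y, phi in edge_points:
        bin_index = int((phi % (2 * math.pi)) / (2 * math.pi) * n_bins) % n_bins
        for r, theta in r_table.get(bin_index, []):
            xc = round(x + r * math.cos(theta))
            yc = round(y + r * math.sin(theta))
            accumulator[(xc, yc)] += 1          # vote for this location
    return max(accumulator, key=accumulator.get) if accumulator else None
```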

Match Refinement

Match refinement is the final step in the matching stage. The algorithm uses curves extracted from both the template image and inspection image to ensure increased positional, scalar, and angular accuracy.

Feature-Based Geometric Matching

This section describes the learning and matching stages of the feature-based geometric matching technique.

Learning

Following curve extraction, the learning stage consists of two steps: feature extraction and representation of the spatial relationships between the features.

Feature Extraction

Feature extraction is the process of extracting high-level geometric features from the curves obtained from curve extraction. These features can be lines, rectangles, corners, or circles.

First, the algorithm approximates each curve using polygons. Then, the algorithm uses the line segments forming these polygons to create linear and corner features. These linear features are used to compose higher-level rectangular features. The curves or segments of curves that cannot be approximated well with polygons or lines are used to create circular features.
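
The curve-to-polygon step can be sketched with the Ramer-Douglas-Peucker algorithm. The text does not name the approximation NI Vision uses, so RDP is an assumed stand-in for illustration.

```python
def approximate_polygon(points, tolerance=1.0):
    """Recursively approximate a curve (list of (x, y) points) with line
    segments, keeping points farther than `tolerance` from the chord."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    norm = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 or 1.0
    best_i, best_d = 0, 0.0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        # Perpendicular distance of the point from the chord.
        d = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= tolerance:
        return [points[0], points[-1]]
    left = approximate_polygon(points[:best_i + 1], tolerance)
    right = approximate_polygon(points[best_i:], tolerance)
    return left[:-1] + right
```

Corners then fall out directly as the vertices where consecutive polygon segments meet.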

After the algorithm extracts high-level geometric features from the template image, the features are ordered based on the following criteria:

  • Type—Lines, rectangles, corners, or circles
  • Strength—How accurately the features portray a given geometric structure
  • Saliency—How well the features describe the template

After the features have been ordered, the best features are chosen to describe the template.

Representation of Spatial Relationships

Given two features, the algorithm learns the spatial relationship between the features, which consists of the vector from the first feature to the second feature. These spatial relationships describe how the features are arranged spatially in the template in relationship to one another. The algorithm uses these relationships to create a model of features that describes the template. The algorithm uses this template model during the matching stage to create match candidates and to verify that matches are properly found.
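
The pairwise relationship model can be sketched as a table of offset vectors. The feature labels, positions, and the dictionary layout are illustrative assumptions of this sketch.

```python
def spatial_relationships(features):
    """Record the vector from each feature to each later feature.
    `features` is a list of (label, (x, y)) pairs."""
    model = {}
    for i, (name_a, (xa, ya)) in enumerate(features):
        for name_b, (xb, yb) in features[i + 1:]:
            # Offset of feature b relative to feature a.
            model[(name_a, name_b)] = (xb - xa, yb - ya)
    return model
```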

Matching

The matching stage consists of five main steps. The first two steps performed on the inspection image are curve extraction and feature extraction, which are similar to the curve extraction and feature extraction that occur during the learning stage. The final three steps are feature correspondence matching, template model matching, and match refinement.

Feature Correspondence Matching

Feature correspondence matching is the process of matching a given template feature to a similar type of feature in the inspection image, called a target feature. The algorithm uses feature correspondence matching to do the following:

  • Create an initial set of potential matches in the inspection image.
  • Update potential matches with additional information or refined parameters, such as position, angle, and scale.

Template Model Matching

Template model matching is the process of superimposing the template model from the learning step onto a potential match in the inspection image to confirm the potential match or to improve it. After superimposing the template model, the algorithm looks for additional target features at the locations predicted by the model's spatial relationships. Finding these features confirms the existence of the potential match and yields additional information that the algorithm uses to update and improve the accuracy of the match.

Match Refinement

Match refinement is the final step in the matching stage. Match refinement carefully refines known matches for increased positional, scalar, and angular accuracy. Match refinement uses curves extracted from both the template image and inspection image to ensure that the matches are accurately and precisely found.

Choosing The Right Geometric Matching Technique

The edge-based geometric matching technique works on any arbitrary shape and is guaranteed to find the object in the inspection image as long as a significant portion of the shape remains similar to the shape of the template object. There are no restrictions on the shape of the object in the template. As long as the curves detected around the object in the inspection image duplicate the curves that were extracted in the template image, the edge-based geometric matching technique will find the match.

The feature-based geometric matching technique works on the assumption that the shape of the pattern in the template can be reliably represented by a set of geometric features. This technique should be employed only when the pattern in the template and in the inspection images can be consistently and reliably represented by geometric shapes such as circles, rectangles and lines.

The memory and performance requirements of your application may influence which geometric matching technique you use. In general, an edge-based geometric template uses more memory than a feature-based geometric template. The size disparity between the template types increases with the permitted variance in scale. The more scale changes you want to match for, the larger the size of the edge-based template. The edge-based geometric matching technique is also slower than the feature-based geometric matching technique when matching at different scale ranges.

Follow these recommendations to choose the best geometric matching technique for your application:

  • Always start with the edge-based geometric matching algorithm. The edge-based geometric matching algorithm provides the best recognition results.
  • If the performance or memory requirements of the edge-based geometric matching algorithm and template do not meet the requirements of your application, carefully adjust the match ranges for variance in scale or rotation. For example, if the match object in the inspection image is always the same size and rotates ±10 degrees, then learn the template only for a scale range of 100% and a rotation range of –10 to 10 degrees. The performance of the edge-based method can also be improved by setting the factor by which the template and inspection image are sampled before matching. Use the advanced learn options to specify a sampling factor.
  • If you still cannot reach the performance or memory requirements of your application, and the object you need to match contains geometric features that can be reliably extracted, use the feature-based geometric matching algorithm.

1 For more information about the Hough transform, see Ballard, D.H. "Generalizing the Hough Transform to Detect Arbitrary Shapes," Pattern Recognition 13, no. 2 (1981) 111-122.
