What's New in NI Vision Development Module 2011

Publish Date: Jul 29, 2011

Overview

The NI Vision Development Module 2011 includes many new features and performance enhancements. This document provides an overview of the major new algorithms and usability improvements and describes how these features can benefit you when you are implementing your vision system. For a full list of features, refer to the readme file for NI Vision Development Module 2011 and Vision Development Module 2010 sp1.

Table of Contents

  1. Improved Maximum (Max) Clamp Feature for Metrology
  2. New Calibration Functions
  3. Support for New High-Performance NI 177x Smart Cameras
  4. Updated .NET and C API
  5. Data Matrix Decoding Improvements
  6. Morphological Reconstruction
  7. New Structural Similarity (SSIM) Method for Image Quality Analysis

1. Improved Maximum (Max) Clamp Feature for Metrology

The NI Vision Development Module 2011 introduces an improved clamp feature with subpixel accuracy for measuring maximum clamp distances in images. Subpixel-accurate maximum clamp measurements are useful in a range of metrology and packaging assembly applications, for example, in determining where to move tooling, such as parallel jaws mounted at the end of an industrial robot, to properly clamp and pick up parts.

Unlike the previous implementation, which used rake edge detection to detect a limited set of points along an object contour, the new implementation of this feature uses curve extraction to provide highly accurate and intuitive results. It determines whether a maximum clamp measurement is present and, if present, returns the measured distance in pixels or real-world coordinates along with the angle of the measurement relative to the orientation of the region of interest (ROI).

The new feature implementation is less susceptible to errors caused by arbitrary alignment of the ROI and by irrelevant sharp contrast changes in the image from noise or foreign objects. The tool also provides additional flexibility by offering input parameters to choose rising, falling, or any edges, as well as angle tolerances for the maximum clamp measurement relative to the ROI orientation.
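The idea behind the measurement can be illustrated with a short sketch. This is not the NI Vision API — it is a minimal Python/NumPy illustration assuming two sets of subpixel edge points have already been extracted from opposite sides of an object contour, and it applies the angle tolerance relative to the ROI orientation as described above.

```python
import numpy as np

def max_clamp_distance(edge_a, edge_b, roi_angle_deg, angle_tol_deg=5.0):
    """Illustrative max-clamp measurement (not the NI Vision API).

    edge_a, edge_b: (N, 2) arrays of subpixel edge points extracted
    from two opposite sides of an object contour.
    roi_angle_deg: orientation of the search ROI; only point pairs whose
    direction stays within angle_tol_deg of this orientation qualify.
    Returns (distance in pixels, measurement angle in degrees) or None.
    """
    best = None
    for pa in edge_a:
        for pb in edge_b:
            d = pb - pa
            dist = float(np.hypot(d[0], d[1]))
            if dist == 0.0:
                continue
            ang = np.degrees(np.arctan2(d[1], d[0])) % 180.0
            # Angular deviation from the ROI orientation, wrapped to [0, 90].
            dev = abs((ang - roi_angle_deg + 90.0) % 180.0 - 90.0)
            if dev <= angle_tol_deg and (best is None or dist > best[0]):
                best = (dist, ang)
    return best
```

For two parallel vertical edges 10 pixels apart and an ROI oriented along the x-axis, this returns a distance of 10 pixels at 0 degrees; point pairs that cut diagonally across the gap are rejected by the angle tolerance.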

 

Figure 1: Examples of Maximum Clamp Measurements

 


2. New Calibration Functions

Calibration models help correct the perspective and nonlinear distortion introduced into vision systems from lenses and cameras. These models are useful for making accurate gauging measurements in real-world units, and can serve as a starting point to estimate the pose of a plane and for 3D stereovision algorithms when used to locate parts in pick-and-place robotics applications.

The Vision Development Module 2011 offers a new set of calibration tools to model radial distortion (due to the lens) as well as tangential distortion (due to misalignment of the CCD sensor). The new tools calculate the parameters associated with a specific lens and camera combination (the distortion coefficients, optical center, and focal length) and allow these parameters to be saved for the given setup. Traditional algorithms retain information only from the calibration grid region, leaving holes in the calibration information outside the grid area, whereas the new tools compute the intrinsic parameters of a camera-lens combination to model the overall distortion of the setup.

In contrast with the calibration functions in earlier versions of the Vision Development Module, the new functions provide increased accuracy, model and correct for distortion in the entire image region, and boost performance by storing the distortion model parameters instead of attaching calculated calibration information to every image. They can also learn the distortion models with multiple calibration grids, which can be useful when the grid is too small to cover the entire field of view and when you need to improve the estimation of calibration parameters.
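The underlying mathematics is the widely used Brown-Conrady distortion model, in which the radial coefficients (k1, k2), tangential coefficients (p1, p2), focal length, and optical center together describe a lens-camera combination. The sketch below is an illustration of that model in Python, not the Vision Development Module implementation; the parameter names follow the common convention rather than NI's API.

```python
import numpy as np

def distort(points, k1, k2, p1, p2, fx, fy, cx, cy):
    """Apply the Brown-Conrady lens distortion model to ideal, normalized
    image points, then project to pixel coordinates.

    k1, k2: radial distortion coefficients (lens).
    p1, p2: tangential distortion coefficients (sensor misalignment).
    fx, fy: focal lengths in pixels; cx, cy: optical center.
    """
    x, y = points[:, 0], points[:, 1]        # normalized camera coordinates
    r2 = x * x + y * y                       # squared radius from the axis
    radial = 1.0 + k1 * r2 + k2 * r2 * r2    # radial distortion factor
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    # Project through the focal length and optical center.
    return np.column_stack((fx * xd + cx, fy * yd + cy))
```

With all four distortion coefficients set to zero the model reduces to the ideal pinhole projection, which is why storing these few parameters per setup is enough to correct the entire image region rather than only the area covered by the calibration grid.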

In addition, the Vision Development Module 2011 introduces the NI Calibration Training Interface, an interactive interface for calculating and storing calibration parameters and viewing results.


Figure 2: The NI Calibration Training Interface is an interactive environment for viewing calibration results and saving distortion models for reuse in applications.

 


3. Support for New High-Performance NI 177x Smart Cameras

NI Vision Development Module 2011 adds support for the high-performance NI 177x Smart Camera family, which features powerful Intel Atom processors, color and high-resolution sensor options, and dust- and water-proof designs. Like the entire line of NI Smart Cameras, the NI 177x models offer the reliability and determinism of a real-time OS. The 1.6 GHz Intel Atom processor provides up to a 4x performance boost across all algorithms compared with the PowerPC-based NI 172x and 174x models.
Learn more at ni.com/smartcamera.


Figure 3: High-Performance NI 177x Smart Cameras

 


4. Updated .NET and C API

Several algorithms and features have been added to the .NET and LabWindows™/CVI APIs in NI Vision Development Module 2011, including the following:

The mark LabWindows is used under a license from Microsoft Corporation.  Windows is a registered trademark of Microsoft Corporation in the United States and other countries.

 

In addition to these new features in Version 2011 of the Vision Development Module, you can still take advantage of Version 2010 sp1 improvements in data matrix decoding, morphological reconstruction, and structural similarity measurements for image quality analysis.


5. Data Matrix Decoding Improvements

Data matrix decoding improvements in Version 2010 of the Vision Development Module provide added reliability for identification applications while improving the autodetect mode that automatically selects the best parameters for accurate and repeatable decoding.

The improved algorithms better tolerate variations between samples, in-plane skew, occlusion, cluttered or streaked backgrounds, low contrast, and proximity of the data matrix to the image border or the edge of the inspection region. In addition, the updated implementation, which supplements edge-based algorithms with line-deduction algorithms, makes the data matrix search within an image more precise and does not require the user to specify an ROI.

These improvements are especially useful for postal sorting, pharmaceutical packaging verification, dot printing in the semiconductor industry, and identification codes stamped on metal for the aerospace and automotive industries.


Figure 4: Examples of Data Matrix Codes (clockwise from top left): (a) with Cluttered Background; (b) with Low Contrast; (c) Occluded; (d) with Reflective Background; (e) Saturated; (f) Under Translucent Film.


6. Morphological Reconstruction 

Morphological reconstruction is a technique that reconstructs a source image from a marker image input, using dilation-based reconstruction to keep particles and erosion-based reconstruction to keep holes. The algorithm can be applied to binary and grayscale images. For binary images, objects in the source image that overlap objects in the marker image are retained in the resulting image. For grayscale images, the result of the dilation can be useful for a range of purposes, including removing certain features while preserving others entirely, segmenting image regions based on their grayscale values, H-dome extraction, and shadow removal.

This technique is useful in analyzing medical images of the body, as well as finding defects in textile manufacturing.
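The binary case can be sketched compactly: starting from the marker, repeatedly dilate and clip against the source (mask) image until nothing changes. The following is an illustrative NumPy implementation using 4-connected dilation, not the Vision Development Module function.

```python
import numpy as np

def reconstruct_binary(marker, mask):
    """Binary morphological reconstruction by dilation (illustrative sketch).

    Objects in `mask` (the source image) that overlap `marker` are
    recovered in full; all other objects are discarded.
    Both inputs are 2-D boolean arrays of the same shape.
    """
    marker = marker & mask
    while True:
        # Dilate with a 4-connected structuring element via shifted copies.
        dilated = marker.copy()
        dilated[1:, :] |= marker[:-1, :]
        dilated[:-1, :] |= marker[1:, :]
        dilated[:, 1:] |= marker[:, :-1]
        dilated[:, :-1] |= marker[:, 1:]
        dilated &= mask  # geodesic constraint: never grow outside the mask
        if np.array_equal(dilated, marker):
            return marker
        marker = dilated
```

Seeding the marker with a single pixel inside one blob of a two-blob source image recovers that blob in full while the untouched blob disappears, which is exactly the binary behavior described above.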

Figure 5: Examples of Morphological Reconstruction (image to come)

 


7. New Structural Similarity (SSIM) Method for Image Quality Analysis 

The structural similarity (SSIM) index measures the similarity between two images in a manner that is more consistent with human perception than traditional techniques such as mean square error (MSE). For example, the human eye perceives blurred images as low quality, and the SSIM metric agrees, whereas the MSE method can rate a blurred image as similar to its focused original. Because of its correlation with human perception, SSIM has become an accepted part of image quality and video analysis practices for analyzing compressed video data. Table 1 shows the results of the MSE and SSIM methods when analyzing different types of image quality defects. SSIM values closer to 1 indicate greater similarity, while for MSE a value of 0 indicates identical images.

Table 1: Comparison of SSIM and MSE Method Results for Image Similarity

  Image                                        SSIM     MSE
  Original Image (Reference)                   1        0
  Blurred Image                                0.5257   269.469
  Dilated Image                                0.6504   769.773
  Edge Enhanced Image                          0.5325   1014.96
  Equalized Image                              0.8094   2447.2
  Image with Addition of Constant Intensity    0.7618   1353.9
  Image Compressed with JPEG                   0.8021   167.476
  Image with Pepper Noise                      0.5406   253.358

Learn more: Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004.
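To make the contrast between the two metrics concrete, here is an illustrative Python implementation. The SSIM formula follows Wang et al. (2004), but for brevity this sketch computes SSIM over the whole image in a single window; the standard metric (and the values in Table 1) average SSIM over local sliding windows, so the numbers from this simplified version will differ.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

def ssim_global(a, b, data_range=255.0):
    """Single-window SSIM (illustrative simplification of Wang et al., 2004).

    SSIM = ((2*mu_a*mu_b + c1) * (2*cov + c2)) /
           ((mu_a^2 + mu_b^2 + c1) * (var_a + var_b + c2))
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    c1 = (0.01 * data_range) ** 2   # stabilizing constants from the paper
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2))
                 / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

Comparing an image with itself yields SSIM = 1 and MSE = 0; adding a constant intensity offset leaves the structure intact (high SSIM) while MSE grows with the square of the offset, mirroring the "Addition of Constant Intensity" row in Table 1.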

