What's New in the NI Vision Development Module 2010

Publish Date: Sep 02, 2016

Overview

The NI Vision Development Module 2010 includes many new features and performance enhancements. This document provides an overview of the major new features, algorithms, and usability improvements and describes how these features can benefit you when you are implementing your vision system.
For a full list of features, refer to the readme file. Also, view the following webcast to see some of these features in action: Boost Your Productivity with Vision 2010.


NI LabVIEW 2010 Native Image Shared Variable

See how to use this feature by watching the following webcast: Boost Your Productivity with Vision 2010.

Although it is part of LabVIEW 2010 rather than the Vision Development Module, this new feature makes it easier to build vision applications using LabVIEW projects and multiple hardware targets, including real-time systems.

When creating a shared variable from a LabVIEW project, the “Image” option appears in the Data Type drop-down list as shown in Figure 1. 

Figure 1. New image type option for shared variables

 

The image shared variable makes it easy to transfer images over a network and to integrate multiple hardware targets. With it, you can visualize inspection images from an embedded target on a remote touch panel or monitor and even log images to remote network drives.

Figure 2. The new image shared variable makes it easier to perform remote logging and visualization.

U16 Image Support

The new unsigned 16-bit (U16) image type makes it easier to analyze and process images while keeping them in their native format. U16 images are common with cameras and imaging systems that have a large dynamic range, such as thermal cameras, scientific cameras, and high-dynamic-range cameras used in machine vision.

For a full list of functions that support 16-bit images in the Vision Development Module 2010, refer to Supported Image Types in the NI Vision 2010 for LabVIEW Help.
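As a rough illustration of why native 16-bit support matters (this is generic NumPy, not NI Vision code), the sketch below analyzes a U16 image directly and only rescales to 8 bits at the very end for display, so no dynamic range is lost during processing:

```python
# Illustrative sketch (not NI Vision code): work with a U16 image in its
# native 16-bit format instead of down-converting to 8 bits up front.
import numpy as np

# Hypothetical 16-bit thermal image; real data would come from a camera driver.
image_u16 = np.random.randint(0, 65535, size=(480, 640), dtype=np.uint16)

# Analyze directly in 16 bits: the full dynamic range is preserved.
lo, hi = int(image_u16.min()), int(image_u16.max())
hot_pixels = np.count_nonzero(image_u16 > 60000)  # example threshold in raw counts

# Scale to 8 bits only at the end, and only for display purposes.
display_u8 = ((image_u16.astype(np.float32) - lo) / max(hi - lo, 1) * 255).astype(np.uint8)
print(lo, hi, hot_pixels, display_u8.dtype)
```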

 

Texture Segmentation

With the new texture segmentation algorithm, you can detect flaws such as scratches, blemishes, dents, wrinkles, blisters, and streaks in parts with textured surfaces. Such materials include tiles, leather, textiles, plastic and foam covers, and finished goods with textured coatings, where cosmetic defects stand out against the expected textured appearance.

Figure 3. Defect located on a textured region of a toy remote control.

With the new Texture Training Interface, you can train a classifier to recognize the expected texture, determine the settings that best characterize cosmetic defects, and import those settings into LabVIEW software.

Figure 4. NI Texture Training Interface with image of tile.
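The NI algorithm relies on a classifier trained in the Texture Training Interface. The sketch below is only a simplified stand-in for the idea, using a hypothetical block-wise standard deviation as the texture measure: learn the statistic on a defect-free reference image, then flag blocks of a test image that deviate from it.

```python
# Simplified stand-in for texture segmentation (not the NI algorithm):
# learn a texture statistic from a defect-free reference image, then flag
# blocks of a test image whose statistic deviates strongly from it.
import numpy as np

def block_std(img, block=16):
    """Local texture measure: standard deviation of each block x block tile."""
    h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.std(axis=(1, 3))

def train(reference_img, block=16):
    """Model of the 'expected texture' as mean and spread of the block statistic."""
    stats = block_std(reference_img.astype(np.float32), block)
    return stats.mean(), stats.std()

def segment_defects(test_img, model, block=16, k=3.0):
    """True where a block's texture statistic deviates more than k sigma."""
    mean, std = model
    stats = block_std(test_img.astype(np.float32), block)
    return np.abs(stats - mean) > k * std

# Usage with synthetic data (a real application would load camera images):
rng = np.random.default_rng(0)
good = rng.normal(128, 20, (256, 256))
bad = good.copy()
bad[96:128, 96:128] = 128           # flat patch standing in for a blemish
mask = segment_defects(bad, train(good))
print(mask.sum(), "suspicious blocks")
```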

Contour Analysis

See this feature in action by watching the following webcast: Boost Your Productivity with Vision 2010.

In many industrial applications, metrology measurements relating to the shape of a part, including the size and smoothness of the boundary, are used for quality control purposes. The new contour analysis functions in the Vision Development Module 2010 make it easier to analyze the smoothness of a contour to detect chips, burrs, indents, and other shape deformities. 

The new functions offer steps for creating contours, fitting an equation to a contour, learning and matching a reference contour in an image, computing the curvature of a single contour, and comparing pairs of contours. You can obtain detailed information about how much an object's contour deviates in distance and in curvature from the desired contour, as well as location information indicating exactly where these deviations occur along the contour.

Compared with other algorithms, this method provides more accurate and more detailed information about a part's shape, so you can make better decisions about whether the part meets its specifications.

Figure 5. Contour analysis VI showing a can and its deviation from the expected contour in a graph.
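The exact steps are provided as VIs in the Vision Development Module. As a hedged illustration of the underlying idea using OpenCV and NumPy (not the NI API), the sketch below extracts a part's contour, estimates its curvature with finite differences, and measures point-by-point distance deviation from a reference contour; spikes in either signal indicate chips, burrs, or indents.

```python
# Illustrative sketch (not the NI contour analysis API): extract a contour,
# estimate curvature along it, and measure deviation from a reference contour.
import cv2
import numpy as np

def largest_contour(binary_img):
    """Largest external contour of a binary image as an (N, 2) float array."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)

def curvature(pts, step=5):
    """Discrete curvature from finite differences along a closed contour."""
    d1 = (np.roll(pts, -step, axis=0) - np.roll(pts, step, axis=0)) / (2.0 * step)
    d2 = (np.roll(pts, -step, axis=0) - 2 * pts + np.roll(pts, step, axis=0)) / step**2
    num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
    den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5 + 1e-12
    return num / den

def deviation_from_reference(pts, reference_pts):
    """Distance from each contour point to the nearest reference contour point."""
    diff = pts[:, None, :] - reference_pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

# Usage: binary images of the inspected part and a golden part are assumed.
# part = largest_contour(binary_part); ref = largest_contour(binary_golden)
# Inspect curvature(part) and deviation_from_reference(part, ref) for spikes.
```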

Color Segmentation

Color segmentation is used to detect regions of specific colors within an image and separate these from regions containing unknown colors. Using this method, it is possible to:

  • Separate objects of different colors that appear in an inspection image to determine whether all expected colors are present and in what proportion
  • Separate colored objects or regions from background colors
  • Check whether an undesired object or feature characterized by a different color is present

For some of these use cases, color segmentation may be required to obtain regions of interest (ROIs) representing particular colors prior to running further analysis (for example, of region location, size, color, and other shape characteristics).

Color segmentation techniques apply to a range of applications involving quality and process control. Examples include checking that all crayon colors are different and present in a package and verifying that a fried or baked good does not contain under- or over-cooked regions.
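As a rough illustration of the technique (using OpenCV rather than the NI Vision VIs), the sketch below segments an image into regions of expected colors with HSV thresholds and reports how much of each color is present. The color ranges are hypothetical and would be tuned per application:

```python
# Illustrative color segmentation sketch (not the NI Vision API): separate
# regions of known colors from the rest of the image using HSV thresholds.
import cv2
import numpy as np

# Hypothetical HSV ranges for the colors we expect to find (tune per application).
EXPECTED_COLORS = {
    "red":   ((0, 120, 70),   (10, 255, 255)),
    "green": ((40, 80, 70),   (80, 255, 255)),
    "blue":  ((100, 120, 70), (130, 255, 255)),
}

def segment_colors(bgr_img, min_area=200):
    """Return a mask and pixel count for each expected color."""
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    results = {}
    for name, (lo, hi) in EXPECTED_COLORS.items():
        mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
        # Keep only connected regions large enough to be real objects, not noise.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        areas = stats[:, cv2.CC_STAT_AREA]
        good_labels = [i for i in range(1, n) if areas[i] >= min_area]
        keep = np.isin(labels, good_labels)
        results[name] = (keep.astype(np.uint8) * 255, int(keep.sum()))
    return results

# Usage: the pixel counts indicate whether each expected color is present
# and in what proportion, e.g. masks = segment_colors(cv2.imread("crayons.png"))
```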

Figure 6. Color segmentation used in conjunction with color classification.

Optical Flow Algorithms

See these optical flow techniques in action by watching the following webcast: Boost Your Productivity with Vision 2010.

Optical flow is a technique used to determine the movement of objects or features in a sequence of images. It provides the motion vectors that describe the motion in an image sequence.

Optical flow methods estimate the motion at every pixel position between two image frames taken sequentially in time, based on changes in intensity from image to image. These methods are called differential because they are based on local Taylor series approximations of the image signal; that is, they use partial derivatives with respect to the spatial and temporal coordinates.
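The starting point for these differential methods is the standard brightness constancy assumption: a pixel keeps its intensity as it moves by (u, v) between frames. Expanding that assumption to first order gives the usual optical flow constraint, shown below for reference:

```latex
% Brightness constancy, I(x + u, y + v, t + 1) = I(x, y, t), expanded to first order:
\frac{\partial I}{\partial x}\,u + \frac{\partial I}{\partial y}\,v + \frac{\partial I}{\partial t} = 0
```

Because this is one equation in two unknowns per pixel, Horn-Schunck resolves the ambiguity with a global smoothness term, while Lucas-Kanade solves it by least squares over a local neighborhood of pixels.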

The new algorithms are able to detect movement using two methods that apply to slightly different applications. In the unsupervised method, the algorithms look for the relative movement of features within a set of images, for example, with particles or bubbles in a moving fluid. The supervised technique, on the other hand, estimates the motion of user-specified points in the image. Supervised methods are used to track motion in applications such as crash test image sequence analysis, vehicle traffic monitoring, and biomechanics.

The unsupervised method is based on the Horn-Schunck and Lucas-Kanade optical flow estimation methods, which estimate the motion at every pixel location across a series of images. The supervised method is based on the pyramidal Lucas-Kanade method, which tracks the motion of a set of points across a series of images.
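As a hedged illustration using OpenCV rather than the NI Vision VIs, the sketch below tracks user-specified points with pyramidal Lucas-Kanade (the supervised case) and computes a dense per-pixel flow field (the unsupervised case). Note that OpenCV's dense method here is Farneback rather than Horn-Schunck, so it stands in for the idea rather than the NI implementation:

```python
# Illustrative optical flow sketch with OpenCV (not the NI Vision VIs).
import cv2
import numpy as np

def track_points(prev_gray, next_gray, points):
    """Supervised case: track (N, 1, 2) float32 points from one frame to the next."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, points, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return new_pts[ok], points[ok]

def dense_flow(prev_gray, next_gray):
    """Unsupervised case: per-pixel (dx, dy) motion vectors between two frames."""
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)

# Usage (grayscale frames would come from a camera or an AVI file):
# pts = np.array([[[100.0, 50.0]], [[200.0, 80.0]]], dtype=np.float32)
# new_pts, old_pts = track_points(frame0, frame1, pts)
# flow = dense_flow(frame0, frame1)   # flow[y, x] = (dx, dy) at that pixel
```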

Figure 7. (a) Unsupervised optical flow with bubbles and (b) supervised optical flow of confetti.

Resources and Next Steps

See many of these new features in action in this webcast

Learn more about the Vision Development Module 

Download an evaluation copy of the Vision Development Module

Purchase the Vision Development Module
