What's New in the NI Vision Development Module

Publish Date: Feb 13, 2019


The Vision Development Module is designed to help you develop and deploy machine vision applications. It includes hundreds of functions to acquire images from a multitude of cameras and to process images by enhancing them, checking for presence, locating features, identifying objects, and measuring parts.

Table of Contents

  1. Vision Development Module 2017
  2. Vision Development Module 2016
  3. Vision Development Module 2015
  4. Vision Development Module 2014
  5. Vision Development Module 2013
  6. Vision Development Module 2011
  7. Vision Development Module 2010
  8. Vision Development Module 2009
  9. Summary and Next Steps

1. Vision Development Module 2017

The NI Vision Development Module 2017 includes new functions and features to help you accelerate and improve your embedded machine vision applications as well as PC-based applications. Improvements include additional FPGA image processing IP, Defect Map Inspection, extended support for pattern matching algorithms, support for Modified Sauvola, and new NI Vision Assistant features. This document provides an overview of these updates.

Additional FPGA Image Processing IP

FPGA image processing IP was introduced as part of the Vision Development Module 2014, and NI has continued to invest in improving this IP since then. The new FPGA IP investments allow you to accelerate your image processing with the use of FPGAs. The added improvements include:

8-Pixel Processing  

In the Vision Development Module 2017, users have access to extended support for multi-pixel (x8) FPGA IP capable of processing eight pixels in parallel. These functions are fully supported on NI hardware with Kintex-7 FPGAs. Specifically, the Vision Development Module 2017 added 8-pixel support for:

  • Modified Sauvola
  • Histogram
  • Flat field correction
  • BCG Lookup
  • Color Support
    • Color Threshold
    • Bayer to RGB32 Bilinear and VNG
    • Extract Color Planes
    • HSL32 to RGB32
    • Integer to HSL32
    • Integer to RGB32
    • RGB32 to HSL32
    • Cast
    • Extract
    • Absolute Difference
    • Add
    • Divide
    • LogDiff
    • Mask
    • Modulo
    • MulDiv
    • Multiply
    • Subtract
    • Or
    • Xor
    • ROIToMask

Single Pixel (x1) Processing  

In the Vision Development Module 2017, single-pixel (x1) FPGA IP was added for Modified Sauvola and Extract. 

Additional Functions and Features

Defect Map Inspection

In the Vision Development Module 2017, Defect Map Inspection was added. This feature adds support for new score-based pattern matching defect inspection techniques. The specific VIs made available to users are:

IMAQ Calculate Defect Map VI - This VI can be used to create a defect map for each match found between template and match image.

IMAQ Match Pattern 4 VI - Within this new pattern matching VI, an Enable Defect Map input was added to enable defect map computation between the template and match image.

IMAQ Get Template Information VI - This VI has been modified to return a weight map from the learned template image.
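As an illustration of the idea only (not the NI implementation), a defect map can be sketched as a thresholded per-pixel absolute difference between the template and the matched region; the threshold value here is arbitrary:

```python
# Illustrative sketch of score-based defect mapping: after a match is located,
# compare the template to the matched region pixel by pixel; large absolute
# differences mark potential defects.

def defect_map(template, matched_region, threshold=30):
    """Return a binary defect map: 1 where |template - match| exceeds threshold."""
    rows, cols = len(template), len(template[0])
    return [[1 if abs(template[r][c] - matched_region[r][c]) > threshold else 0
             for c in range(cols)]
            for r in range(rows)]

template = [[100, 100], [100, 100]]
matched  = [[102, 100], [100, 200]]   # bottom-right pixel deviates strongly
print(defect_map(template, matched))  # -> [[0, 0], [0, 1]]
```

A real implementation would also weight the comparison by the learned weight map, so that regions the template marks as unreliable do not trigger false defects.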

Extended Support for Pattern Matching Algorithms

Starting in the Vision Development Module 2017, you can implement pattern matching algorithms on U16 and I16 image types through added 16-bit support. The specific VIs for 16-bit pattern matching are IMAQ Learn Pattern 5 VI and IMAQ Match Pattern 4 VI. This allows you to increase the precision of your pattern matching algorithms so that you can detect smaller differences and build more robust vision algorithms.
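To illustrate why extra bit depth helps intensity-based matching, here is a sketch using classical normalized cross-correlation, a standard pattern matching score (this is a generic textbook formulation, not NI's published algorithm):

```python
import math

def ncc(template, window):
    """Normalized cross-correlation between two equally sized patches
    (flattened pixel lists). Returns a score in [-1, 1]."""
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    den = math.sqrt(sum((t - mt) ** 2 for t in template) *
                    sum((w - mw) ** 2 for w in window))
    return num / den if den else 0.0

# Two 16-bit patches that differ only by a constant offset still correlate
# perfectly, because mean subtraction removes the offset; 16-bit data lets
# such sub-8-bit intensity structure survive quantization.
t = [40000, 41000, 42000, 43000]
w = [40010, 41010, 42010, 43010]
print(round(ncc(t, w), 6))  # -> 1.0
```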

Figure 1: Pattern Matching 16-bit Support and Score Mapping Results

Support for Modified Sauvola

In the Vision Development Module 2016, support for the Sauvola algorithm was added. In the Vision Development Module 2017, a Modified Sauvola Local Threshold algorithm has been added, giving you access to a computationally optimized thresholding tool.
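The Sauvola formula referred to above computes a per-pixel threshold from the local mean and standard deviation. A minimal pure-Python sketch for one pixel's neighborhood (the k and R defaults here are common literature values, not necessarily NI's):

```python
import math

def sauvola_threshold(window, k=0.5, R=128.0):
    """Sauvola local threshold for one pixel's neighborhood (8-bit values).
    T = m * (1 + k * (s / R - 1)), where m and s are the window mean and
    standard deviation and R is the dynamic range of the standard deviation."""
    n = len(window)
    m = sum(window) / n
    s = math.sqrt(sum((v - m) ** 2 for v in window) / n)
    return m * (1 + k * (s / R - 1))

# A uniform dark background window: s = 0, so T = m * (1 - k); the threshold
# drops well below the local mean and background noise is not binarized.
print(sauvola_threshold([50] * 9))   # -> 25.0
```

The "Modified" variant trades some of the per-window statistics for cheaper approximations, which is what makes it attractive for FPGA implementation.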

New NI Vision Assistant Features

The NI Vision Assistant is a tool for prototyping vision applications and is included in the Vision Development Module. NI continues to invest in the NI Vision Assistant so that you can utilize it for algorithm development. In VDM 2017, the following features were added:

Map Defects step - This new feature allows you to perform score-based pattern matching defect inspection techniques. See Defect Map Inspection above for more details.

Support for 16-bit images in Pattern Matching step - Users can now implement Pattern Matching algorithms on U16 and I16 image types through the added 16-bit support.

Support for new multi-pixel (x8) and single-pixel (x1) FPGA IP – The new FPGA IP investments allow you to accelerate your image processing with the use of FPGAs. See the list above for the functions added in 2017.



2. Vision Development Module 2016

The NI Vision Development Module 2016 includes new functions and features to help you accelerate and improve your embedded machine vision applications as well as PC-based applications. Improvements include additional FPGA image processing IP, extended AVI support, a new Sauvola Local Threshold Algorithm, improved algorithm performance, and new NI Vision Assistant features. This document provides an overview of these updates.

Additional FPGA Image Processing IP

FPGA image processing IP was introduced as part of the Vision Development Module 2014, and NI has continued to invest in improving this IP since then. The new FPGA IP investments allow you to accelerate your image processing with the use of FPGAs. The added improvements include:

8-Pixel Processing  

In the Vision Development Module 2016, NI has added FPGA IP that is capable of multi-pixel (x8) parallel processing for higher throughput applications. This feature is available only on Kintex-7 FPGA targets. Specifically, the Vision Development Module 2016 added 8-pixel support for:

  • Cast
  • Vision FPGA Sync
  • Binary
  • Gray
  • Convolute
  • Low Pass
  • Edge Detection: Sobel
  • Edge Detection: Prewitt
  • Edge Detection: Roberts
  • Edge Detection: Differentiation
  • Edge Detection: Sigma
  • Edge Detection: Gradient
  • Inverse
  • Threshold
  • Add
  • Subtract
  • Multiply
  • Divide
  • Absolute Difference
  • Muldiv
  • Modulo
  • And
  • Or
  • Xor
  • LogDiff
  • Mask
  • Compare
  • Centroid
  • Linear Averages
  • Particle Analysis
  • ROI to Mask
  • FIFO to Pixel Bus
  • Pixel Bus to FIFO


Figure 1: 8-pixel FPGA processing support

Single Pixel (x1) Processing 

In the Vision Development Module 2016, single-pixel (x1) FPGA IP was added for the Niblack and Sauvola local threshold methods. You can use these functions as local thresholding techniques for images where the background is not uniform.

Additional Functions and Features

Extended AVI Support

In the Vision Development Module 2016, there is extended AVI support on NI Linux Real-Time hardware targets. Prior to the Vision Development Module 2016, you could only use the AVI VIs on a Windows OS. Utilizing the AVI palette, you can read and write compressed AVIs in order to access additional data, such as time-stamp data, with your images.

New Sauvola Local Threshold Algorithm

In the Vision Development Module 2016, the Sauvola Local Threshold Algorithm has been added. You can use this function as a local thresholding technique for images where the background is not uniform.



3. Vision Development Module 2015

The NI Vision Development Module 2015 includes new functions and features to help you accelerate and improve your embedded machine vision applications as well as PC-based applications. Improvements include additional FPGA image processing IP, improved color palette support, feature detection and correspondence functions, and support for C/C++ Development Tools for NI Linux Real-Time, Eclipse Edition. This document provides an overview of these updates.

Additional FPGA Image Processing IP

FPGA Image Processing IP was introduced as part of the Vision Development Module 2014. This new IP allows you to accelerate your image processing with the use of FPGAs. Vision Development Module 2015 adds additional FPGA Image Processing IP. The new functions included are:

Line Profile – Returns the pixel values along a line.

Equalize – Produces a histogram equalization of an image.

Simple Edge – Finds edges along a line. This function can return the first, both first and last, or all of the edges found.

Bayer decoding using the Variable Number of Gradients (VNG) method – Prior to VDM 2015, Bayer decoding using the bilinear method was the only decoding method available for FPGA. Bilinear decoding is much faster, but VNG decoding produces more accurate color representation.
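Bilinear decoding can be sketched as simple neighbor averaging; for example, estimating the missing green value at a red site of an RGGB mosaic (VNG instead weights neighbors by local gradients, which is why it preserves edges better):

```python
def green_at_red(bayer, r, c):
    """Bilinear demosaicing sketch: estimate the missing green value at a red
    site by averaging its four green neighbors (interior pixels only)."""
    return (bayer[r-1][c] + bayer[r+1][c] + bayer[r][c-1] + bayer[r][c+1]) / 4

# Hypothetical RGGB mosaic: (0,0) is a red site, so (2,2) is also a red site;
# its north, south, east, and west neighbors are all green samples.
mosaic = [
    [90, 60, 90, 60, 90],
    [60, 30, 60, 30, 60],
    [90, 60, 90, 60, 90],
    [60, 30, 60, 30, 60],
    [90, 60, 90, 60, 90],
]
print(green_at_red(mosaic, 2, 2))  # -> 60.0
```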


Figure 1: Bilinear decoding vs. VNG decoding.


Particle Analysis Report - This function can be used on a binary image to find first pixel coordinates, center of mass, and area of particles in the image.

Figure 2: Particle Analysis Report


The Particle Analysis Report VI also includes the ability to retain particles that are split between two image frames. This can occur when parts under the camera are moving on a conveyor, as shown below.

Figure 3:  Parts can be split between consecutive frames when taking images of parts moving on a conveyor.

Figure 4:  The Retain Overlap feature of the Particle Analysis Report VI allows particles that are split between consecutive frames to be correctly analyzed.
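The measurements the Particle Analysis Report returns can be sketched with a simple connected-components pass over a binary image (a generic illustration, not the FPGA implementation):

```python
def particle_report(binary):
    """Label 4-connected particles in a binary image and report the area and
    center of mass of each, as a particle-analysis step would."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    reports = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:                       # flood fill one particle
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                area = len(pixels)
                cy = sum(p[0] for p in pixels) / area
                cx = sum(p[1] for p in pixels) / area
                reports.append({"area": area, "center": (cy, cx)})
    return reports

img = [[1, 1, 0, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 1]]
print(particle_report(img))
# -> [{'area': 4, 'center': (0.5, 0.5)}, {'area': 2, 'center': (1.5, 3.0)}]
```

The Retain Overlap feature would additionally carry particles touching the frame boundary over to the next frame before closing out their reports.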

Additional Functions and Features

Support for C/C++ Development Tools for NI Linux Real-Time, Eclipse Edition

All of the functions available in LabVIEW with the Vision Development Module 2015 are now available in the C API and can be deployed to NI’s Linux RT hardware using the C/C++ Development Tools for NI Linux Real-Time, Eclipse Edition. This means that you can now develop C/C++ code with image processing functions or integrate image processing code into existing C/C++ code and deploy it to National Instruments hardware with the NI Linux Real-Time operating system.

16 Bit Image Support

In Vision Development Module 2015, you now have access to the full color spectrum to display 16-bit images, which is particularly useful for visualizing thermal, depth, and medical images.

Figure 5:  16-bit images are often used for depth, medical, or thermal imaging.


This feature is fully supported for displaying 16-bit images on the RT Embedded Display.

Flat Field Correction

Use the new Flat Field Correction feature to equalize the background of images that have uneven lighting, uneven or curved surfaces, or sensor noise or unwanted particles due to dust on the sensor.

Figure 6:  Flat field correction can be used to minimize effects of uneven lighting, curved surfaces, or sensor noise and dust.


Note that the histograms (red line overlay) for the corrected images are much narrower, meaning that the background pixels have been equalized and are more uniform.
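Flat field correction is commonly implemented by dividing the raw image by a reference image of the illumination and rescaling by the reference mean; this generic gain-correction form is an assumption for illustration, not NI's exact formula:

```python
def flat_field_correct(raw, flat):
    """Divide out a reference 'flat' image of the illumination, then rescale
    by the flat's mean so overall brightness is preserved."""
    n = sum(len(row) for row in flat)
    mean_flat = sum(sum(row) for row in flat) / n
    return [[raw[r][c] * mean_flat / flat[r][c]
             for c in range(len(raw[0]))]
            for r in range(len(raw))]

# The right column is lit at half strength; correction equalizes the background.
raw  = [[100, 50], [100, 50]]
flat = [[200, 100], [200, 100]]
print(flat_field_correct(raw, flat))  # -> [[75.0, 75.0], [75.0, 75.0]]
```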

Feature Correspondence

The Feature Framework contains functions that enable you to detect Feature Points, create Feature Descriptions, and perform Correspondence Matching. Some possible use cases are shown below:



Figure 7:  Feature correspondence can be used in a variety of applications.


These functions ultimately allow you to extract useful feature information from images as well as match common features between two images. The ability to match common features between two images is particularly useful in applications where a template must be matched to an image that is geometrically distorted or contains objects that are translated or rotated.

Feature Extraction

Vision Development Module 2015 also gives you the ability to extract useful features using either the Histogram of Oriented Gradients (HOG) or Local Binary Patterns (LBP) method. Feature extraction can be used to classify objects or textures within an image or subimage.


Figure 8:  Feature extraction can be used to build a material identification application.


The image above shows an example application where different materials that are visually similar are correctly identified using feature classification. This example (HOG-LBP Texture Classification.vi) is also included in the NI Example Finder when VDM 2015 is installed.
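The LBP descriptor mentioned above is easy to sketch for a single pixel: threshold the eight neighbors against the center value and pack the results into a byte (the bit ordering here is one common convention, not necessarily NI's):

```python
def lbp_code(patch):
    """8-bit Local Binary Pattern for the center pixel of a 3x3 patch:
    each neighbor contributes a 1 bit if it is >= the center value."""
    center = patch[1][1]
    # neighbors taken clockwise starting from the top-left corner
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, v in enumerate(neighbors) if v >= center)

# Bright above and below the center, dark at the sides: a "horizontal bar" texture.
patch = [[9, 9, 9],
         [1, 5, 1],
         [9, 9, 9]]
print(lbp_code(patch))  # -> 119
```

A texture classifier then works on the histogram of these codes over a region rather than on individual pixels.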



4. Vision Development Module 2014

The NI Vision Development Module 2014 includes many new features and performance enhancements. This section provides an overview of the new algorithm and usability improvements and describes how these features can benefit you when you are implementing your vision system.

FPGA Image Processing IP

Many image processing algorithms can take advantage of the parallel nature of FPGAs to offload the process-intensive portions of a vision application, freeing the processor to handle other tasks. The Vision Development Module 2014 includes over 50 FPGA image processing functions as well as functions to efficiently transfer images between the processor and FPGA. This enables the FPGA to be used as a coprocessor, in which the processed image is sent back to the host, or allows the image processing to be tightly coupled with other processing and I/O on the FPGA, creating a high-performance solution for applications such as visual servo control, laser tracking, and high-speed sorting.

Figure 1: Accelerate vision by offloading image processing to the FPGA.


The NI LabVIEW FPGA Module is a natural extension of the LabVIEW graphical programming environment. You can perform complex FPGA programming without using low-level languages such as VHDL. If you are familiar with LabVIEW, transitioning to LabVIEW FPGA presents only a small learning curve, which can drastically reduce development time in applications that require FPGA programming, eliminating the need for custom hardware designs. Instead of programming in HDL, you create applications on the LabVIEW block diagram, and LabVIEW FPGA synthesizes the graphical code and deploys it to FPGA hardware. 


Figure 2: LabVIEW FPGA image processing IP reduces development time.


Users can quickly prototype and develop FPGA vision applications using the NI Vision Assistant, which is included with the Vision Development Module. The Vision Assistant is a configuration-based prototyping tool that empowers developers to iterate on image processing algorithms and see how changes in parameters affect the image. Once the algorithm engineering is complete, the Vision Assistant can automatically generate a complete NI LabVIEW project including host processor VI, FPGA VI, and supporting elements such as FPGA Bayer decoding and code to transfer images between the processor and FPGA as well as the corresponding FIFOs. The FPGA code generated by the Vision Assistant is also optimized for parallel execution and users can modify the image processing algorithms using LabVIEW FPGA IP Builder, which is included with the NI LabVIEW FPGA Module.


Figure 3: The Vision Assistant reduces prototyping and development for CPU and FPGA-based image processing.


Figure 4: The Vision Assistant can generate a complete LabVIEW project with code that is ready to compile and run.


In addition to speeding up development and code generation, the Vision Assistant gives an estimate of the resource utilization of an FPGA given a specified target, such as a CompactRIO model. The information includes percentage usage of slices, LUTs, DSPs, and Block RAM not only for the entire image processing code but for each individual algorithm to give insight into which step requires the most resources.


Figure 5: The Vision Assistant provides FPGA resource utilization estimates.


1D Barcode Improvements

The NI Vision Development Module 2014 also introduces a new algorithm for locating and decoding of multiple 1D barcodes within an image. The algorithm robustly locates multiple barcodes under various lighting conditions and complex backgrounds.



Figure 6: Vision Development Module algorithm locating multiple barcodes.



5. Vision Development Module 2013

The NI Vision Development Module 2013 includes many new features and performance enhancements. This document provides an overview of the new algorithm and usability improvements and describes how these features can benefit you when you are implementing your vision system.

New Pattern Matching Algorithm

Pattern matching is a commonly used technique to locate regions of an image that match a known reference pattern, referred to as a template. Pattern matching algorithms are some of the most important functions in machine vision because of their use in varying applications, including alignment, gauging, and inspection. The NI Vision Development Module 2013 adds a new pattern matching algorithm called pyramidal matching, which improves performance in images with blur or low contrast.

Figure 1: Example of pattern matching with blur and low contrast

Pyramidal matching improves the computation time of pattern matching by reducing the size of the image and template. In pyramidal matching, both the image and the template are sampled to smaller spatial resolutions using Gaussian pyramids. This method samples every other pixel in each direction, so the image and the template can both be reduced to one-fourth of their original size at each successive pyramid level.

Figure 2: Pyramid matching uses multiple levels to quickly refine searches.

In the learning phase, the algorithm automatically computes the maximum pyramid level that can be used for the given template and learns the data needed to represent the template and its rotated versions across all pyramid levels. The algorithm attempts to find an 'optimal' pyramid level (based on an analysis of template data) that gives the fastest and most accurate match. The algorithm then iterates through each level of the pyramid, refining the match at each stage until the full resolution is used to give the best match while still achieving a speed boost. You can also choose to apply one last stage of refinement to the match candidates to find subpixel-accurate locations and subdegree-accurate angles. This stage relies on specially extracted edge and pixel information from the template and employs interpolation techniques to get a highly accurate match location and angle.
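The pyramid reduction step can be sketched as follows; a 2x2 block average stands in for the Gaussian filter-and-subsample, and each level carries one-fourth of the data of the one before it:

```python
def pyramid_level(image):
    """Halve an image in each dimension by averaging 2x2 blocks, a simplified
    stand-in for Gaussian-pyramid reduction."""
    return [[(image[r][c] + image[r][c+1] + image[r+1][c] + image[r+1][c+1]) / 4
             for c in range(0, len(image[0]) - 1, 2)]
            for r in range(0, len(image) - 1, 2)]

level0 = [[10, 30, 50, 70],
          [10, 30, 50, 70],
          [20, 40, 60, 80],
          [20, 40, 60, 80]]
level1 = pyramid_level(level0)   # 2x2 image: one-fourth the pixels
level2 = pyramid_level(level1)   # 1x1 image
print(level1)  # -> [[20.0, 60.0], [30.0, 70.0]]
```

Matching starts on the smallest level, where a full search is cheap, and each candidate location is then refined on the next larger level.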


Object Tracking

The NI Vision Development Module 2013 introduces a new algorithm for object tracking, which tracks the location of an object over a sequence of images to determine how it is moving relative to other objects in the image. Object tracking has many uses in application areas such as:

  • Security and surveillance - In the surveillance industry, objects of interest such as people and vehicles can be tracked. Object tracking can be used for detecting trespassing or observing anomalies like unattended baggage.
  • Traffic management - The flow of traffic can be analyzed, and collisions detected.
  • Medicine - Cells can be tracked in medical images.
  • Industry - Defective items can be detected and tracked.
  • Robotics and navigation - Robots can follow the trajectory of an object. Robotic assistance can maneuver in a factory (de-palletizing objects).
  • Human-computer interaction (HCI) - Users can be tracked in a gaming environment.
  • Object modeling - An object tracked from multiple perspectives can be used to create a partial 3D model of the object.
  • Bio-mechanics - Tracking body parts to interpret gestures or movements.

Figure 3: Example of object tracking for a traffic monitoring application

NI Vision implements two object tracking algorithms: mean shift and EM-based mean shift. Mean shift tracks a user-defined object by iteratively updating its location, while EM-based mean shift also adapts the shape and scale of the object for each frame. Both algorithms are tolerant of gradual changes in the tracked object, including geometric transformations such as shifting, rotation, and scaling, as well as partial occlusion of the object.
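The core mean shift iteration can be sketched in one dimension: repeatedly move a window to the weighted centroid of the samples it covers until it settles on a mode. The profile values below are hypothetical object-likelihood weights, not real tracker output:

```python
def mean_shift_1d(weights, start, radius, iterations=10):
    """Toy 1-D mean shift: move a window center to the weighted centroid of
    the samples inside it until it converges on a mode of the distribution."""
    center = start
    for _ in range(iterations):
        idx = [i for i in range(len(weights)) if abs(i - center) <= radius]
        total = sum(weights[i] for i in idx)
        if total == 0:
            break
        center = sum(i * weights[i] for i in idx) / total
    return center

# Likelihood profile with its dominant mode at index 6; starting the window
# at index 3, the iteration climbs toward the mode.
profile = [0, 0, 1, 2, 1, 3, 9, 3, 1, 0]
print(round(mean_shift_1d(profile, start=3, radius=3)))  # -> 6
```

In 2-D tracking the weights come from comparing each pixel to the learned appearance model of the object (e.g., a color histogram).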


OCR Improvements

Optical Character Recognition (OCR) provides machine vision functions you can use in an application to read text or characters in an image. The NI Vision Development Module 2013 brings improvements to OCR functionality including multi-line, weak rotation tolerance, and better segmentation.

Multiline detection allows a user to set a region of interest (ROI) enclosing multiple lines of text rather than needing to specify an ROI for each expected line. Multiline detection uses particle analysis and clustering based on vertical overlap to detect the lines in a specified ROI. Users can explicitly set the number of lines expected, or the algorithm can auto-detect the number of lines and apply character segmentation to all lines. If multiple lines are detected and the number of lines expected is specified, the lines with the highest ranked classification scores are returned.

Figure 4: Multiline support reduces the need for a separate ROI for each line of text and detects the highest scoring lines.

OCR reading functionality has also been improved to support detection and reading of lines and characters with slight rotations (±20°) and differing character heights. Character segmentation refers to the process of locating and separating each character in the image from the background and from other characters. This process applies to both the training and reading procedures and has a significant impact on the performance of the OCR application. OCR includes multiple threshold methods to separate the characters from the background and an AutoSplit algorithm to segment slanted, or italic, characters. A shortest-segment algorithm is also implemented to ensure valid segmentation even when the characters are merged. The algorithm works in three steps:

  1. Attempt to divide the characters by applying multiple shortest cut paths.
  2. Choose the cuts that are closest to the maximum character width.
  3. Intelligently choose the cuts which segment a character correctly based on classification during reading.
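The vertical-overlap clustering behind multiline detection can be sketched on character bounding boxes; the (top, bottom, left) tuples below are hypothetical character boxes, not real OCR output:

```python
def group_into_lines(boxes):
    """Cluster character bounding boxes (top, bottom, left) into text lines by
    vertical overlap, then order each line left to right."""
    lines = []
    for box in sorted(boxes, key=lambda b: b[0]):
        for line in lines:
            top, bottom = line[0][0], line[0][1]
            if box[0] < bottom and box[1] > top:   # vertical ranges overlap
                line.append(box)
                break
        else:                                      # no overlapping line found
            lines.append([box])
    return [sorted(line, key=lambda b: b[2]) for line in lines]

# Five characters: three on an upper line, two on a lower line.
chars = [(0, 10, 30), (1, 11, 0), (0, 10, 15), (20, 30, 5), (21, 31, 20)]
print(len(group_into_lines(chars)))  # -> 2
```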

Figure 5: Segmentation improvements ensure robust reading for OCR applications.



6. Vision Development Module 2011

Improved Maximum (Max) Clamp Feature for Metrology

The NI Vision Development Module 2011 introduces an improved clamp feature with subpixel accuracy for measuring maximum clamp distances in images. Subpixel-accurate maximum clamp measurements are useful in a range of metrology and packaging assembly applications, for example, in determining where to move tooling, such as parallel jaws mounted at the end of an industrial robot, to properly clamp and pick up parts.

Unlike the previous implementation, which used rake edge detection to detect a limited set of points along an object contour, the new implementation of this feature uses curve extraction to provide highly accurate and intuitive results. It determines whether a maximum clamp measurement is present and, if present, returns the measured distance in pixels or real-world coordinates along with the angle of the measurement relative to the orientation of the region of interest (ROI).

The new feature implementation is less susceptible to errors caused by arbitrary alignment of the ROI and irrelevant sharp contrast changes in the image from noise or foreign objects. The tool also provides additional flexibility by offering input parameters to choose rising, falling or any edges as well as angle tolerances for the maximum clamp measurement relative to the ROI orientation.


Figure 1: Examples of Maximum Clamp Measurements


New Calibration Functions

Calibration models help correct the perspective and nonlinear distortion introduced into vision systems from lenses and cameras. These models are useful for making accurate gauging measurements in real-world units, and can serve as a starting point to estimate the pose of a plane and for 3D stereovision algorithms when used to locate parts in pick-and-place robotics applications.

The Vision Development Module 2011 offers a new set of calibration tools to treat radial distortion (due to the lens) as well as tangential distortion (due to the misalignment of a CCD sensor). The new tools calculate parameters associated with specific lens and camera combinations (the distortion coefficient, optical center, and focal length) and allow these parameters to be saved for the given setups. Traditional algorithms retain information only from the calibration grid region and leave holes in the calculated calibration information of an image area, but these new calibration tools compute the intrinsic parameters of a camera-lens combination to better model the overall distortion of the setup.

In contrast with the calibration functions in earlier versions of the Vision Development Module, the new functions provide increased accuracy, model and correct for distortion in the entire image region, and boost performance by storing the distortion model parameters instead of attaching calculated calibration information to every image. They can also learn the distortion models with multiple calibration grids, which can be useful when the grid is too small to cover the entire field of view and when you need to improve the estimation of calibration parameters.
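The kind of radial-plus-tangential distortion model described above can be illustrated with the widely used Brown-Conrady parameterization; this is a generic sketch of such a model, not necessarily NI's exact formulation:

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) lens distortion to
    normalized image coordinates, following the common Brown-Conrady model."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# Barrel distortion (k1 < 0) pulls off-center points toward the optical center.
print(distort(0.5, 0.0, k1=-0.2, k2=0.0, p1=0.0, p2=0.0))  # -> (0.475, 0.0)
```

Calibration estimates k1, k2, p1, p2 (plus the optical center and focal length) from grid images; correction then inverts this mapping for every pixel.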

In addition, the Vision Development Module 2011 introduces the NI Calibration Training Interface, an interactive interface for calculating and storing calibration parameters and viewing results.

Figure 2: The NI Calibration Training Interface is an interactive environment for viewing calibration results and saving distortion models for reuse in applications.


Support for New High-Performance NI 177x Smart Cameras

NI Vision Development Module 2011 adds support for the high-performance family of NI 177x Smart Cameras with powerful Intel Atom processors, color and high-resolution sensor options, and dustproof and waterproof designs. Like the entire line of NI Smart Cameras, the NI 177x models come with the reliability and determinism of a real-time OS. The powerful 1.6 GHz Intel Atom processor provides a performance boost of up to 4x for all algorithms over the PowerPC-based NI 172x and 174x models.
Learn more at ni.com/smartcamera.

Figure 3: High-Performance NI 177x Smart Cameras


Updated .NET and C API

Several algorithms and features have been added to the .NET and LabWindows™/CVI APIs in NI Vision Development Module 2011.

The mark LabWindows is used under a license from Microsoft Corporation.  Windows is a registered trademark of Microsoft Corporation in the United States and other countries.


In addition to these new features in Version 2011 of the Vision Development Module, you can still take advantage of Version 2010 SP1 improvements in data matrix decoding, morphological reconstruction, and structural similarity measurements for image quality analysis.

Data Matrix Decoding Improvements

Data matrix decoding improvements in Version 2010 of the Vision Development Module provide added reliability for identification applications while improving the autodetect mode that automatically selects the best parameters for accurate and repeatable decoding.

The improved algorithms better tolerate variations between samples, skew within plane, occlusion, cluttered or streaked backgrounds, low-contrast differences, and proximity of the data matrix to the image border or edge of inspection region. Also, the updated implementation, which uses line deduction algorithms in addition to edge-based algorithms, renders the data matrix search within an image more precise and does not require ROIs to be specified by the user.

These improvements are especially useful for postal sorting, pharmaceutical packaging verification, dot printing in the semiconductor industry, and identification codes stamped on metal for the aerospace and automotive industries.

Figure 4: Examples of Data Matrix Codes (clockwise from top left): (a) with Cluttered Background; (b) with Low Contrast; (c) Occluded; (d) with Reflective Background; (e) Saturated; (f) Under Translucent Film.

Morphological Reconstruction 

Morphological reconstruction is a technique that reconstructs a source image based on a marker image input, using dilation-based reconstruction to keep particles and erosion-based reconstruction to keep holes. The algorithm can be applied to binary and grayscale images. For binary images, objects in the source image that overlap objects contained in the marker image are retained in the resulting image. In grayscale images, the result of the dilation can be useful for a range of purposes, including removing certain features while preserving others, segmenting image regions based on their grayscale values, H-dome extraction, and shadow removal.

This technique is useful in analyzing medical images of the body, as well as finding defects in textile manufacturing.
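For binary images, the retention behavior described above can be sketched with reconstruction by dilation: the marker grows inside the source (mask) image until stable, so only source particles touched by the marker survive:

```python
def reconstruct(marker, mask):
    """Binary morphological reconstruction by dilation: grow the marker inside
    the mask until stable; only mask particles touched by the marker remain."""
    rows, cols = len(mask), len(mask[0])
    out = [row[:] for row in marker]
    changed = True
    while changed:
        changed = False
        for r in range(rows):
            for c in range(cols):
                if mask[r][c] and not out[r][c]:
                    # turn the pixel on if any 4-neighbor is already on
                    if any(0 <= r+dr < rows and 0 <= c+dc < cols and out[r+dr][c+dc]
                           for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))):
                        out[r][c] = 1
                        changed = True
    return out

source = [[1, 1, 0, 1],    # two particles in the source image
          [1, 1, 0, 1]]
marker = [[0, 0, 0, 0],    # the marker touches only the right particle
          [0, 0, 0, 1]]
print(reconstruct(marker, source))  # -> [[0, 0, 0, 1], [0, 0, 0, 1]]
```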

Figure 5: Examples of morphological reconstruction.


New Structural Similarity (SSIM) Method for Image Quality Analysis 

The structural similarity (SSIM) index measures the similarity between two images in a manner that is more consistent with human perception than traditional techniques like mean square error (MSE).  For example, blurred images are perceived as bad quality by the human eye, and this is consistent with results from the SSIM metric, unlike the MSE method, which claims a blurred image is similar to its focused original. 
Because of its correlation to human perception, SSIM has become an accepted part of image quality and video analysis practices for analyzing compressed video data.
Table 1 shows the results for the MSE and SSIM methods when analyzing different types of image quality defects.  SSIM values closer to 1 indicate greater similarity, while for MSE a value of 0 indicates identical images.
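The SSIM statistic for a single window can be sketched in a few lines of pure Python; the constants c1 and c2 are the usual stabilizers from the Wang et al. paper, and real implementations average this value over a sliding window:

```python
import math

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM between two equally sized 8-bit patches
    (flattened pixel lists)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / n
    vy = sum((v - my) ** 2 for v in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = [52, 55, 61, 59, 79, 61, 76, 41]
print(round(ssim(a, a), 4))                # identical patches -> 1.0
print(ssim(a, [v // 2 for v in a]) < 1.0)  # degraded copy scores lower -> True
```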

Table 1: Comparison of SSIM and MSE Method Results for Image Similarity

  Images compared: Original Image; Blurred Image; Dilated Image; Edge Enhanced Image; Equalized Image; Image with Addition of Constant Intensity; Image Compressed with JPEG; Image with Pepper Noise.
Learn more: Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004.



7. Vision Development Module 2010

New Image Processing Steps

Vision Builder AI 2010 features new algorithms including color segmentation, texture segmentation, and contour analysis.

Detect Texture Defects Step

With the new texture segmentation step, you can detect flaws such as scratches, blemishes, dents, wrinkles, blisters, and streaks in parts with textured surfaces. Such materials can include finished goods with a textured coating as well as tiles, leather, textiles, and plastic and foam covers or surfaces that may contain cosmetic defects that stand out with respect to their textured appearances.

Figure 1. Detecting defects located on a textured region of a toy remote control.

With the new Texture Training Interface, you can easily train your classifier to recognize the expected texture, and you can determine the settings that best characterize the cosmetic defects and import them into Vision Builder AI.

Figure 2. NI Texture Training Interface with image of tile.


Inspect Contours Step

In many industrial applications, metrology measurements relating to the shape of a part, including the size and smoothness of the boundary, are used for quality control purposes. The new Inspect Contours step in Vision Builder AI 2010 makes it easier to analyze the smoothness of a contour to detect chips, burrs, indents, and other shape deformities. 

The new functions offer steps for contour creation, fitting an equation to a contour, learning and matching a reference contour in an image, computing the curvature of single contours, and comparing pairs of contours. You can obtain detailed information such as the location of the maximum deviation and maximum curvature along the object’s contour, as well as the area between the desired and obtained contours.

Compared with other shape-analysis algorithms, this method provides more accurate and more detailed information about a part's shape, so you can make better decisions about whether the part meets its specifications.
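Two of the measurements described above, the maximum deviation from a reference contour and the curvature along a contour, can be sketched with standard numerical formulas. This is a simplified illustration of the concepts, not the Vision Builder AI implementation; it uses point-wise nearest-neighbor distance and a finite-difference curvature estimate.

```python
import numpy as np

def max_deviation(contour, reference):
    """Distance from each contour point to the nearest reference point;
    returns (max distance, index of the worst point)."""
    d = np.linalg.norm(contour[:, None, :] - reference[None, :, :], axis=2)
    nearest = d.min(axis=1)
    i = int(nearest.argmax())
    return float(nearest[i]), i

def curvature(contour):
    """Discrete curvature of a sampled contour via finite differences:
    k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)."""
    x, y = contour[:, 0], contour[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
```

A chip or burr shows up as a spike in the deviation profile (and its location is the argmax), while the curvature profile of a smooth circular contour stays flat at 1/radius.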

Figure 3. Example using the Inspect Contours Step in Vision Builder AI 2010.


Color Segmentation Step

Color segmentation is used to detect regions of specific colors within an image and separate these from regions containing unknown colors. Using this method, it is possible to:

  • Separate objects of different colors that appear in an inspection image to determine whether all expected colors are present or which proportions of each color are present
  • Separate colored objects or regions from background colors
  • Check whether an undesired object or feature characterized by a different color is present

For some of these use cases, color segmentation may be required to obtain regions of interest (ROIs) representing particular colors prior to running further analysis (for example, the region location, size, color, and other shape characteristics).

Color segmentation techniques apply to a range of applications involving quality and process control. Examples include checking that all crayon colors are different and present in a package and verifying that a fried or baked good does not contain under- or over-cooked regions.
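The core of color segmentation, labeling each pixel with the nearest known color or marking it unknown, can be sketched as follows. This is a minimal illustration; Vision Builder AI's actual classifier, color space handling, and distance threshold may differ, and the threshold below is an assumption.

```python
import numpy as np

def segment_colors(img, classes, max_dist=60.0):
    """Label each pixel of an RGB image with the index of the nearest
    reference color, or -1 when no reference color is close enough
    (an 'unknown' region)."""
    pixels = img.reshape(-1, 3).astype(float)
    refs = np.array(list(classes.values()), dtype=float)
    # Euclidean distance from every pixel to every reference color
    d = np.linalg.norm(pixels[:, None, :] - refs[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    labels[d.min(axis=1) > max_dist] = -1
    return labels.reshape(img.shape[:2])
```

The resulting label image can then be fed into particle analysis to obtain region locations, sizes, and shape characteristics per color, as described above.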

Figure 4. Detecting the presence of ingredients and color of baked goods with the Color Segmentation Step for process control.


Usability Improvements and LabVIEW Integration

Vision Builder AI 2010 includes the new Custom Inspection Interface Editor for creating custom UIs. In addition, the new version introduces templates that make it easier to start creating an inspection from a reference template.

NI Vision Builder for Automated Inspection (AI) 2010 introduces several new methods to integrate directly with NI LabVIEW software. By combining the scalability of LabVIEW, featuring supported software add-ons and hardware, with Vision Builder AI, which simplifies the machine vision development process, you can benefit from new features for your more elaborate system architectures and accommodate more types of measurements.

For users of the Vision Builder AI Development Kit who create custom steps, note that Vision Builder AI 2010 is compatible with steps created using LabVIEW versions up to and including 2010.


Custom Inspection Interface Editor

Watch how you can create and benefit from your own custom user interfaces (UIs) in Vision Builder AI: Creating Custom User Interfaces in Vision Builder AI

You can now create custom user interfaces (UIs) within Vision Builder AI without leaving the configuration environment.  With your own custom UI, you can include multiple image displays, monitor the important results and measurements that are specific to your application, and even interact with and control the execution of the inspection at hand.  Creating custom inspection interfaces in Vision Builder AI is similar to creating a Front Panel in LabVIEW, as you have the ability to add a multitude of controls and indicators, and even paste in company logos to give your own look and feel to your interface. 


Inspection Templates

Vision Builder AI Inspection Templates allow you to develop new applications from a starting reference template similar to your application. The inspection templates offered in Vision Builder AI 2010 relate to different acquisition and triggering architectures.

Figure 5. Inspection Templates in Vision Builder AI 2010


New Image Shared Variable

See this in action in the following webcast: LabVIEW Project Integration and Image Shared Variable

The network shared variable is a popular tool for transferring measurements and results between deployment hardware in LabVIEW applications. This type of data sharing was previously unavailable for transferring entire images over a network. The latest versions of LabVIEW and Vision Builder AI introduce the new image shared variable, a network shared variable that significantly improves integration between hardware targets by sending timestamped image data over a network. This new feature is particularly useful for viewing inspection images and results remotely. For example, a line operator can monitor multiple real-time targets simultaneously from a single user interface or log images remotely for archiving or further analysis.

Figure 6. The two real-time vision targets shown in this LabVIEW project are running Vision Builder AI and communicate with the host using image shared variables.


Managing Targets Running Vision Builder AI from the LabVIEW Project

See this in action in the following webcast: LabVIEW Project Integration and Image Shared Variable

With the LabVIEW Project Explorer, you can now add, access, and configure real-time hardware targets running Vision Builder AI inspections, meaning that you can manage all real-time system hardware from a single window. With these capabilities, you can now use an NI CompactRIO or PXI system, for example, to run LabVIEW Real-Time applications in the same LabVIEW project as an NI Smart Camera or other real-time NI vision hardware running a Vision Builder AI application. In addition to viewing these hardware targets in the same project, you can access and use the network shared variables, including the new image shared variables, which are hosted on the vision target for more seamless integration.


New Vision Builder AI API for Calling Inspections from External Applications

Watch a demonstration of the Vision Builder AI API capabilities in the following webcast: Controlling Vision Builder AI Inspections from LabVIEW & NI TestStand

The latest release of Vision Builder AI also contains an API for programmatically calling and executing complete Vision Builder AI inspections from within LabVIEW. This API, which installs a functions palette in LabVIEW that you can use to directly control the Vision Builder AI engine, is effective for applications that involve synchronizing one or more vision inspections as part of a larger system. You also can use this API to run inspections directly from test executive software such as NI TestStand.

Figure 7. Use the new functions palette in LabVIEW to programmatically call and execute Vision Builder AI applications.

Figure 8. Using TestStand to programmatically call and execute a Vision Builder AI inspection.


Back to Top

8. Vision Development Module 2009

Improved Geometric Pattern-Matching Function

The new enhanced edge-based geometric pattern-matching function greatly improves the process of detecting hard-to-find contours and occluded features to ensure that your program matches patterns more accurately and misses fewer patterns, even when the object under inspection is rotated or scaled. 

The Edge-Based Geometric Pattern Matching VI uses an improved pattern-matching algorithm that is based on the curves, edges, and contours detected in the template and image. The algorithm uses a generalized Hough transform for matching curves found in the template image to the curves found in the target image.

The advantages of using the new algorithm include the following:

  • All curves in the template image are directly used. Because no assumption is made about the underlying geometric structure of the object, this method improves the process of matching patterns in objects without well-defined geometric features.
  • Because all curves are used, there is no need for you to specify which curves to use. This facilitates faster, lower-maintenance implementation.
  • You have the option to remove curves for performance benefits, which makes the new algorithm flexible and customizable.
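The voting idea behind a generalized Hough transform can be sketched as follows. This is a minimal, translation-only illustration of the technique, not NI's implementation; edge points are given as (x, y, gradient direction), and the bin count and R-table layout are simplifications.

```python
import numpy as np
from collections import defaultdict

def build_r_table(edges, center, bins=36):
    """R-table: for each quantized gradient direction, store the offsets
    from the template's edge points to its reference point."""
    table = defaultdict(list)
    for (x, y, theta) in edges:
        b = int(theta / (2 * np.pi) * bins) % bins
        table[b].append((center[0] - x, center[1] - y))
    return table

def ght_vote(edges, table, shape, bins=36):
    """Each image edge point votes for candidate reference-point
    locations; the accumulator peak marks the best match."""
    acc = np.zeros(shape, dtype=int)
    for (x, y, theta) in edges:
        b = int(theta / (2 * np.pi) * bins) % bins
        for dx, dy in table.get(b, ()):
            cx, cy = int(round(x + dx)), int(round(y + dy))
            if 0 <= cx < shape[1] and 0 <= cy < shape[0]:
                acc[cy, cx] += 1
    return acc
```

Because every edge point contributes a vote, matching degrades gracefully when parts of the contour are occluded: the accumulator peak is lower but usually still dominant, which is why this approach handles overlapping objects well.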

Figure 1. New edge-based pattern matching is more powerful on contoured and overlapping objects

New Color Classification Functionality

With the new color classification function in the Vision Development Module, you can create color templates and perform inspections based on color. You can use color classification with other inspections to increase the reliability of your machine vision application or use it independently for inspections that were previously not possible.

Color classification, the process of labeling a color based on previously learned colors, is used as a machine vision tool to inspect products in industries such as automotive, food, textiles, wood, and consumer personal products. It identifies an unknown color sample of a region in the image by comparing the region’s color feature to a set of features that conceptually represent classes of known samples.

With the new Color Classification Training Interface, you can easily create color classifications and import them into NI LabVIEW software. Simply drag an ROI over a part of an image and save your newly classified color. 
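The compare-a-region's-feature-to-learned-features idea can be sketched as a nearest-neighbor classifier over normalized color histograms. This is an illustrative stand-in, not the NI implementation; the histogram feature, bin count, and L1 distance are assumptions made for the example.

```python
import numpy as np

def color_feature(roi, bins=8):
    """Feature vector: normalized 3D RGB histogram of the region."""
    hist, _ = np.histogramdd(roi.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def train_classes(labeled_rois, bins=8):
    """Average the features of the example ROIs for each class label."""
    return {name: np.mean([color_feature(r, bins) for r in rois], axis=0)
            for name, rois in labeled_rois.items()}

def classify(roi, classes, bins=8):
    """Assign the class whose trained feature is nearest (L1 distance)."""
    f = color_feature(roi, bins)
    return min(classes, key=lambda n: np.abs(classes[n] - f).sum())
```

Training amounts to dragging an ROI over a known-good sample and saving its feature; classification then reduces to a distance comparison, which is why it runs fast enough for in-line inspection.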

Figure 2. NI Color Classification Training Interface

Listed below are a few inspection applications that use color classification:

  • Determining if cookies have been baked properly based on their color (process control)
  • Analyzing fruits and classifying the quality of the fruit based on color (quality control)
  • Determining the quality of cotton fabrics based on color
  • Reading the color codes on electronic components, mechanical parts, or medical devices

Color Image Support for More Processing Functions

With the new Vision Development Module, you can use vision algorithms on color images without first converting them to a supported noncolor format. The software works with RGB 32-bit, HSL 32-bit, and RGB 64-bit color formats. The algorithms featuring added support for color images with this release include:

  • Edge detection
       o   Edge tool
       o   Find straight edge
       o   Rake, spoke, concentric rake
       o   Machine vision straight edge
  • 1D bar code
  • Data matrix
  • Optical character recognition (OCR)

New Multicore Optimizations

In addition to multicore support improvements at the task level for LabVIEW 2009, the latest version of the Vision Development Module optimizes individual image processing algorithms to achieve improved performance on multicore processors. The following new optimizations were added for the 2009 release: 

  • Particle analysis (includes 80 particle measurements)
  • Particle analysis report (includes 11 of the most common particle measurements)
  • Label
  • Convex hull
  • Particle filter

Many algorithms were previously optimized for multicore processing. You can find these algorithms in the New Features section of the NI Vision Development Module 8.6 Readme.

Additional DSP Optimizations for the NI 176x Smart Cameras

The Vision Development Module 2009 implements digital signal processing (DSP) optimizations for more image processing algorithms. These optimizations result in a two to three times increase in performance on the NI 1762 and NI 1764 Smart Cameras with 720 MHz Texas Instruments DSP coprocessors.


Figure 3. NI Smart Cameras with TI DSP Coprocessors


The newly optimized algorithms include:

  • Operators
    o  Add
    o  Divide
    o  Modulo
    o  Subtract
    o  Multiply
    o  And
    o  Or
    o  Xor
    o  Absolute Difference
    o  Logical Difference
  • Morphology
    o  Convolute
  • Filters
    o  Nth Order
  • Thresholding
    o  Threshold
    o  Multithreshold
    o  Auto Binary Threshold

Previous DSP Optimizations

  • Optical character recognition (OCR)
  • Data matrix
  • Pattern matching

New Debug License for Deployment Systems

Starting with the Vision Development Module 2009, a debug deployment license option is now available alongside the run-time and full development options.  

With the debug deployment license, you can run your applications, step through your code and identify issues, and correct your vision application locally on your deployment system.

64-Bit OS Support

The Vision Development Module is the first LabVIEW add-on to natively support 64-bit LabVIEW 2009 for the Windows Vista and Windows 7 64-bit OSs.   

The 64-bit edition of the Vision Development Module is intended for applications that require large images (100 to 200 MB). Displaying and processing large images requires several large additional image buffers to be available simultaneously, and, on 32-bit OSs, this can cause out-of-memory errors because the address space cannot accommodate the required buffers, even on systems with 2 to 3 GB of RAM.

The new 64-bit edition of the Vision Development Module circumvents these memory issues with its native support for the 64-bit versions of the Windows Vista and Windows 7 OSs. The 64-bit edition of the Vision Development Module 2009 allows applications to operate on images of up to 2 GB in size.

Obtaining the 64-Bit Software

  1. The 64-bit edition of the Vision Development Module is available in the same installer media as the 32-bit version. It is on the physical media that you received with your purchase or you can download it here: Vision Development Module 2009
  2. The 64-bit edition of LabVIEW 2009 is shipped separately from the LabVIEW Platform DVD upon request only. You can request the physical media by going to ni.com/info and typing in lv64bit. Standard Service Program (SSP) subscribers can download it from the Services Resource Center.

Native .NET API to Support Visual Basic .NET and C#

Creating machine vision applications in Microsoft Visual Studio is now easier than ever.  

The Vision Development Module 2009 features a native .NET API for programming vision applications using the .NET languages Visual Basic .NET and C#. This native support reduces development time and results in more maintainable applications because it provides a more integrated interface and major improvements over the original .NET support, which used an ActiveX interoperability layer.

The Vision Development Module 2009 also features support for Microsoft Visual Studio 2008, so that you can now use Microsoft Visual Studio 2005 and 2008 with NI Measurement Studio for your vision applications. In addition, you can generate Visual Basic .NET and C# code using the NI Vision Assistant, a tool for prototyping vision applications that is included with the full module.

USB Camera Support

The Vision Acquisition Software driver package included with NI Vision Development Module 2009 now natively supports any USB device with a DirectShow interface. Supported devices include cameras, webcams, microscopes, scopes, scanners, and many other consumer-grade imaging products that expose functionality through a DirectShow interface. These devices now appear in NI software just as IEEE 1394 and GigE Vision cameras do, and the native driver supports acquisition from multiple USB devices using either one-shot (snap) or continuous (grab) acquisition modes.

Figure 4. Interaction between LabVIEW, NI-IMAQdx, the DirectShow API, and a USB device's driver.

Support for CompactRIO and Single-Board RIO Platforms

NI Vision Development Module 2009 can now be deployed to CompactRIO and Single-Board RIO hardware platforms for embedded medical, industrial monitoring, and autonomous robotics applications. CompactRIO is now among the first programmable automation controllers (PACs) to perform vision tasks and offers a fully integrated platform for advanced measurements and control.

With the NI-IMAQdx driver in Vision Acquisition Software, you can acquire compressed images from Internet Protocol (IP) cameras. In addition, the AF-1501 frame grabber from National Instruments Alliance Partner MoviMED can acquire monochrome images from analog cameras.  IP and analog camera connectivity provide low-cost options for introducing vision into NI CompactRIO and Single-Board RIO systems.

Learn more at ni.com/crio.

Figure 5. NI Vision Development Module 2009 acquires and processes images from IP and analog cameras on CompactRIO and Single-Board RIO systems.

Back to Top

9. Summary and Next Steps

The rise in the adoption of vision to increase quality, efficiency, and flexibility has created more demand on the performance of vision software. The new features in the NI Vision Development Module bring more performance and capabilities to engineers to help meet those demands in a large variety of industries and application areas.


Next Steps:

Download and evaluate NI Machine Vision Software

Learn more about what's new with NI Vision


Reference Material:

Vision Development Module Concepts Help


Back to Top
