Displaying images is an important component of a vision application because it lets you visualize your data. Image processing and image visualization are distinct elements. Image processing refers to the creation, acquisition, and analysis of images. Image visualization refers to how image data is presented and how you can interact with the visualized images. A typical imaging application holds many images in memory that it never displays.

When to Use

Use display functions to visualize your image data, retrieve generated events and the associated data from an image display environment, select ROIs from an image interactively, and annotate the image with additional information.

Concepts

Display functions display images, set attributes of the image display environment, assign color palettes to image display environments, close image display environments, and set up and use an image browser in image display environments. Some ROI functions—a subset of the display functions—interactively define ROIs in image display environments. These ROI functions configure and display different drawing tools, detect draw events, retrieve information about the region drawn on the image display environment, and move and rotate ROIs. Nondestructive overlays display important information on top of an image without changing the values of the image pixels.

In-Depth Discussion

The following sections describe the display modes available in Vision and the mapping methods used to display 16-bit grayscale images.

Display Modes

One of the key components of displaying images is the display mode in which the video adapter operates. The display mode indicates how many bits specify the color of a pixel on the display screen. Generally, the display modes available from a video adapter range from 8 bits to 32 bits per pixel, depending on the amount of video memory available on the video adapter and the screen resolution you choose.

In an 8-bit display mode, a pixel can be one of 256 different colors. In a 16-bit display mode, a pixel can be one of 65,536 colors. In 24-bit or 32-bit display mode, the color of a pixel on the screen is encoded using 3 or 4 bytes, respectively. In these modes, information is stored using 8 bits each for the red, green, and blue components of the pixel, so they can display about 16.7 million colors.
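As a quick check on these numbers, the color count for each mode is 2 raised to the number of bits per pixel. The short Python snippet below (illustrative only, not part of Vision) prints the counts:

    # Number of displayable colors per display mode: 2 ** bits_per_pixel.
    for bits in (8, 16, 24):
        print(f"{bits}-bit mode: {2 ** bits:,} colors")
    # 8-bit mode: 256 colors
    # 16-bit mode: 65,536 colors
    # 24-bit mode: 16,777,216 colors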

Understanding your display mode is important to understanding how Vision displays the different image types on a screen. Image processing functions often use grayscale images. Because display screen pixels are made of red, green, and blue components, the pixels of a grayscale image cannot be rendered directly.

In 24-bit or 32-bit display mode, the display adapter uses 8 bits to encode a grayscale value, offering 256 gray shades. This color resolution is sufficient to display 8-bit grayscale images. However, higher bit depth images, such as 16-bit grayscale images, are not accurately represented in 24-bit or 32-bit display mode. To display a 16-bit grayscale image, either ignore the least significant bits or use a mapping function to convert 16 bits to 8 bits.
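For example, the simplest way to fit a 16-bit image into the 8-bit display range is to drop the low byte of each pixel. The following NumPy sketch (illustrative only; the image buffer is an assumption, and this is not the Vision API) shows the idea:

    import numpy as np

    # Assumed 16-bit grayscale buffer; in practice this would come from an
    # acquisition device or an image file.
    img16 = np.random.randint(0, 65536, size=(480, 640), dtype=np.uint16)

    # Ignore the 8 least significant bits and keep only the top byte.
    img8 = (img16 >> 8).astype(np.uint8)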

Mapping Methods for 16-Bit Image Display

Vision uses the following mapping methods to convert 16-bit images to 8-bit images for display. Each mapping function linearly distributes a range of the 16-bit image's intensity values across the 8-bit range of the display. A minimal code sketch of these methods appears after the list.

  • Full Dynamic—The minimum intensity value of the 16-bit image is mapped to 0, and the maximum intensity value is mapped to 255. All other values in the image are mapped between 0 and 255 using the following equation:

    z = (x - y) / (v - y) × 255

    where:

    • z is the 8-bit pixel value,
    • x is the 16-bit pixel value,
    • y is the minimum intensity value,
    • v is the maximum intensity value.

    This is a general-purpose mapping method because it displays the complete dynamic range of the image, and Vision uses it by default. However, because the minimum and maximum pixel values in the image determine the mapping, noisy or defective pixels (for non-Class A sensors) with minimum or maximum values can affect the appearance of the displayed image.
  • 90% Dynamic—The intensity value at 5% of the cumulative histogram is mapped to 0, and the intensity value at 95% of the cumulative histogram is mapped to 255. Values in the 0 to 5% range are mapped to 0, while values in the 95 to 100% range are mapped to 255. This mapping method is more robust than the Full Dynamic method because it is not sensitive to small aberrations in the image, but it requires computing the cumulative histogram, or an estimate of it. Refer to image analysis for more information on histograms.
  • Given Percent Range—This method is similar to the 90% Dynamic method, except that the minimum and maximum percentages of the cumulative histogram that are mapped to 0 and 255 are user defined.
  • Given Range—This method is similar to the Full Dynamic method, except that the minimum and maximum values mapped to 0 and 255 are user defined. You can use this method to enhance the contrast of a region of the image: compute the histogram of that region to find its minimum and maximum intensities, and then use those values to stretch the dynamic range of the entire image.
  • Downshifts—This method applies a given number of right shifts to each 16-bit pixel value and displays the eight least significant bits of the result, truncating the lowest bits of the original value. This method is very fast, but it reduces the real dynamic range of the sensor to 8 bits, and it requires knowing the bit depth of the imaging sensor used. For example, an image acquired with a 12-bit camera should be displayed using four right shifts so that the eight most significant bits acquired by the camera are shown. If you are using a National Instruments image acquisition device, this is the default method used by Measurement & Automation Explorer (MAX).
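The following NumPy sketch (illustrative only; the function names and image buffers are assumptions, and none of this is the Vision API) implements each of the mapping methods described above:

    import numpy as np

    def full_dynamic(img16):
        # Full Dynamic: map [min, max] of the image linearly to [0, 255].
        y, v = int(img16.min()), int(img16.max())
        if v == y:  # flat image: avoid division by zero
            return np.zeros(img16.shape, dtype=np.uint8)
        return ((img16.astype(np.float64) - y) / (v - y) * 255).astype(np.uint8)

    def percent_range(img16, lo_pct=5.0, hi_pct=95.0):
        # With the default 5%/95% bounds this is the 90% Dynamic method;
        # user-defined bounds give the Given Percent Range method.
        y, v = np.percentile(img16, [lo_pct, hi_pct])
        if v == y:
            return np.zeros(img16.shape, dtype=np.uint8)
        clipped = np.clip(img16.astype(np.float64), y, v)
        return ((clipped - y) / (v - y) * 255).astype(np.uint8)

    def given_range(img16, y, v):
        # Given Range: map a user-defined [y, v] to [0, 255]; assumes v > y.
        clipped = np.clip(img16.astype(np.float64), y, v)
        return ((clipped - y) / (v - y) * 255).astype(np.uint8)

    def downshift(img16, shifts):
        # Downshifts: right-shift each pixel and keep the low 8 bits.
        return ((img16 >> shifts) & 0xFF).astype(np.uint8)

    # Example: a 12-bit camera image stored in a 16-bit buffer needs four
    # right shifts to display its eight most significant bits.
    img12 = np.random.randint(0, 4096, size=(480, 640), dtype=np.uint16)
    disp = downshift(img12, 4)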