Vision Development Module supports TensorFlow and OpenVINO™ Inference Engines.

User Workflow

Developers create a new deep learning model or topology. The model is then trained on data acquired from a dataset. The resulting trained model may be further deployed on targets using the LabVIEW APIs provided by the Vision Development Module. For more details about the inference engine workflow, see the Development Workflow section.

Development Workflow

The following workflow applies when using Vision functions in LabVIEW.

Supplying Input Data

Deep learning libraries usually accept data as a tensor (a representation of a multi-dimensional array). Once a model is loaded using Vision functions, the input and output tensor configurations for that model are known. Data supplied to the Vision function is converted to input node tensors and fed to the model while running inference. The following table depicts input data compatibility:

| NI Data Type | Tensor Data Type | Default Expected Tensor Dimension | Comment |
| --- | --- | --- | --- |
| NI Vision Image (U8, U16, I16, SGL) | Unsigned Integer 8 / 16 / Float | 4 [1*X*Y*1] | Error displayed for dimension mismatch. Data is converted if the tensor expects the Float data type. If the tensor is not Float, it must be the same as the image type. |
| NI Vision Image (RGB32) | Unsigned Integer 8 / 16 / Float | 4 [1*X*Y*3] | Error displayed for dimension mismatch. Data is converted if the tensor expects the Float data type. If the tensor is not Float, it must be the same as the image type. |
| Array (U8, I8, I16, U16, I32, I64, Float, Double) | U8, I8, I16, U16, I32, I64, Float, Double | Same as supplied | Error is displayed if there is a mismatch in dimensions or data type. |
| NI Vision Image (RGB64, Complex, HSL) | - | - | Unsupported |
| Array (Complex, U32, U64) | - | - | Unsupported |
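The RGB32 conversion above can be sketched in NumPy. This is a minimal illustration of the documented behavior, not the Vision functions' actual implementation; the image sizes and variable names are made up, and the exact axis ordering of the [1*X*Y*3] tensor may differ in practice.

```python
import numpy as np

# Hypothetical 4x3 RGB32 image as U8 pixel data (sizes are illustrative).
height, width = 3, 4
image_u8 = np.random.randint(0, 256, size=(height, width, 3), dtype=np.uint8)

# The input node expects a 4-D tensor [1*X*Y*3]; since this tensor is Float,
# the U8 data is converted, per the table above.
tensor = image_u8.astype(np.float32)[np.newaxis, ...]

print(tensor.shape)  # (1, 3, 4, 3)
print(tensor.dtype)  # float32
```

If the input node instead expected a U8 tensor, no conversion would take place, and a non-U8 image would produce a data type mismatch error.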

Interpreting Output Data

The output data from the model is converted into a one-dimensional Float array in LabVIEW. The dimensional information of the original data from the graph is also returned. The user needs to reconstruct the data from this LabVIEW output.

| Tensor Data Type | NI Data Type | Comment |
| --- | --- | --- |
| Array (U8, I8, I16, U16, U32, I32, U64, I64, Float, Double) | Float | Data is converted. Data loss may result from Double-to-Float conversion. |
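Reconstructing the original shape from the flat Float array and the reported dimensions is a single reshape. The sketch below uses NumPy as a stand-in for the LabVIEW output; the array contents and dimensions are invented for illustration.

```python
import numpy as np

# Stand-ins for the 1-D Float array and the dimensional information
# returned alongside it by the Vision functions.
flat_output = np.arange(12, dtype=np.float32)
dims = (1, 3, 4)

# Rebuild the original multi-dimensional data from the flat array.
output = flat_output.reshape(dims)

print(output.shape)  # (1, 3, 4)
```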

Frozen Model (*.pb)

The supported model file format for a frozen model is Protocol Buffer (.pb). This format is created and maintained by Google™. This format is supported only for TensorFlow.

Saved Models

These are folders that must contain Protocol Buffer files and other intermediate files. This format is supported only for TensorFlow.

Intermediate Representation (*.xml)

These are primarily XML files with graph information about the model. They must be accompanied by a ".bin" file of the same name that contains the weights and biases for the defined model. This format is supported only by the OpenVINO™ Deep Learning Deployment Toolkit and is maintained by Intel.

Reference Link: https://en.wikipedia.org/wiki/Protocol_Buffers

The supported LabVIEW datatypes are:

  1. NI Vision Image
    • U8, U16, RGB32, SGL
  2. LabVIEW Arrays
    • U8, I8, I16, U16, I32, I64, Float, Double

Model Optimizer (OpenVINO™ only)

Model Optimizer, part of the OpenVINO™ toolkit, is a cross-platform, Python-based command line tool that facilitates the transition between the training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices. The following diagram summarizes the workflow.

Note: For more details about converting a model to an OpenVINO model using Model Optimizer, and to access the Readme, use the command cd %NI_MO_INSTALL_PATH% or go to C:\Users\Public\Documents\National Instruments\model_optimizer\

A Python script, convert2ir.py, is available as part of the installation. It helps users convert models of different topologies easily, without requiring them to work through the complex set of parameters that Model Optimizer needs to convert, for example, a TensorFlow model to an IR model. Model Optimizer can convert TensorFlow and Caffe models.
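For reference, a direct Model Optimizer invocation typically looks like the following. The flags shown are from OpenVINO's own mo.py tool; the model file name, input shape, and output directory are illustrative, and convert2ir.py accepts its own, simpler arguments, which are described in its Readme.

```shell
cd %NI_MO_INSTALL_PATH%
python mo.py --input_model frozen_model.pb --input_shape [1,224,224,3] --output_dir ir_model
```

On success, Model Optimizer writes a same-named pair of ".xml" and ".bin" files (the Intermediate Representation) into the output directory.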