Deep Learning
- Updated 2025-11-25
Introduction
The Vision Development Module supports loading and executing third-party deep learning models. Models from the following frameworks are supported:
- TensorFlow Inference Engine
- OpenVINO™ Inference Engine
The Deep Learning Inference Engines enable users to:
- Load pre-trained deep learning models into the software and hardware ecosystem
- Run loaded models on Windows and Real-Time targets
- Supply Vision images and LabVIEW data to loaded models
Supported Platforms
The following platforms are supported:
Development Environments
- LabVIEW 64-bit
TensorFlow Runtime and Real-Time Target Support:
- Windows 64-bit
- NI Linux RT 64-bit
OpenVINO™ Runtime and Real-Time Target Support:
- Windows 64-bit (Windows 7 Embedded Standard is not supported)
- NI Linux RT 64-bit
When to Use
The Deep Learning Inference Engines load and execute third-party framework models. Use these functions when you have pre-trained models and need to combine them with other Vision functions; they can also be deployed to Real-Time targets. The prerequisites for using the Deep Learning Inference Engines are:
- Pre-trained models from supported libraries
- The model must be in the Frozen Model or SavedModel format
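The inference engines themselves are LabVIEW VIs, but a TensorFlow SavedModel has a fixed on-disk layout: a `saved_model.pb` graph definition next to a `variables/` directory. As a minimal sketch using only the Python standard library (the helper name and the `my_model` directory are our own illustration, not part of any NI or TensorFlow API), you can check a directory for that layout before handing it to the engine:

```python
import os
import tempfile

def looks_like_saved_model(path):
    """Return True if `path` has the basic layout of a TensorFlow SavedModel:
    a saved_model.pb graph file alongside a variables/ directory."""
    return (
        os.path.isfile(os.path.join(path, "saved_model.pb"))
        and os.path.isdir(os.path.join(path, "variables"))
    )

# Demonstrate on a scratch directory that mimics an exported SavedModel.
root = tempfile.mkdtemp()
model_dir = os.path.join(root, "my_model")          # hypothetical model name
os.makedirs(os.path.join(model_dir, "variables"))
open(os.path.join(model_dir, "saved_model.pb"), "wb").close()

print(looks_like_saved_model(model_dir))   # True: well-formed layout
print(looks_like_saved_model(root))        # False: missing saved_model.pb
```

A check like this catches the common mistake of pointing the load function at the parent folder of an export rather than at the SavedModel directory itself.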