OpenVINO™ Toolkit
The OpenVINO™ toolkit enables quick deployment of applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNNs), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The OpenVINO™ toolkit includes the Deep Learning Deployment Toolkit (DLDT).
The OpenVINO toolkit:
Enables CNN-based deep learning inference on the edge
Supports heterogeneous execution across an Intel® CPU, Intel® Integrated Graphics, Intel® FPGA, Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2 and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels
Includes optimized calls for computer vision standards, including OpenCV*, OpenCL™, and OpenVX*
The OpenVINO toolkit includes the following components:
Deep Learning Deployment Toolkit (DLDT)
Deep Learning Model Optimizer - A cross-platform command-line tool for importing models and preparing them for optimal execution with the Inference Engine. The Model Optimizer imports, converts, and optimizes models trained in popular frameworks such as Caffe*, TensorFlow*, MXNet*, and Kaldi*, as well as models in the ONNX* format.
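A typical Model Optimizer invocation looks like the following. This is a hedged sketch: the install path reflects a default Linux installation, and the model file name, input shape, and output directory are placeholders for your own model.

```shell
# Convert a TensorFlow frozen graph into OpenVINO IR (.xml + .bin).
# "frozen_inference_graph.pb" and the shape below are placeholder values.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model frozen_inference_graph.pb \
    --input_shape "[1,224,224,3]" \
    --data_type FP16 \
    --output_dir ./ir
```

The resulting IR pair (`.xml` topology and `.bin` weights) is what the Inference Engine consumes at runtime.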
Deep Learning Inference Engine - A unified API to allow high performance inference on many hardware types including Intel® CPU, Intel® Integrated Graphics, Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2 and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
Demos and samples - A set of simple console applications demonstrating how to use the Inference Engine in your applications
Tools - A set of simple console tools to calibrate and measure the accuracy of your models
Pre-trained models - A set of pre-trained models for learning and demo purposes or to develop deep learning software
OpenCV - OpenCV* community version compiled for Intel® hardware. Includes PVL libraries for computer vision.
Drivers and runtimes for OpenCL™ version 2.1
OpenVX* - Intel's implementation of OpenVX* optimized for running on Intel® hardware (CPU, GPU, IPU).