Installation

The Intel® Distribution of OpenVINO™ toolkit for Linux*:

  • Enables CNN-based deep learning inference on the edge.

  • Supports heterogeneous execution across Intel® CPU, Intel® Integrated Graphics, Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.

  • Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels.

  • Includes optimized calls for computer vision standards including OpenCV*, OpenCL™, and OpenVX*.
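Heterogeneous execution, mentioned above, is selected simply through the device name passed when loading a network. The sketch below uses the pre-2022 Python Inference Engine API (`openvino.inference_engine`); the model filenames are hypothetical placeholders, and method names vary slightly between toolkit releases.

```python
# Minimal sketch, assuming the openvino Python package is installed and an
# IR model pair (squeezenet1.1.xml / .bin) exists -- both are placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="squeezenet1.1.xml", weights="squeezenet1.1.bin")

# "HETERO:GPU,CPU" places each layer on the GPU when the GPU plugin
# supports it, and falls back to the CPU otherwise. A plain "CPU",
# "GPU", or "MYRIAD" string targets a single device instead.
exec_net = ie.load_network(network=net, device_name="HETERO:GPU,CPU")

input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape  # e.g. [1, 3, 227, 227]

# Run one inference on a dummy batch and report the output shapes.
result = exec_net.infer({input_name: np.zeros(shape, dtype=np.float32)})
print({name: out.shape for name, out in result.items()})
```

Swapping the `device_name` string is the only change needed to retarget the same application at a different accelerator, which is the point of the plugin architecture.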

The following components are included with the installation and installed by default:

  • Model Optimizer: This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. Supported frameworks include Caffe*, TensorFlow*, MXNet*, and ONNX*.

  • Inference Engine: The engine that runs deep learning models. It includes a set of libraries for easy integration of inference into your applications.

  • Drivers and runtimes for OpenCL™ version 2.1: Enable OpenCL™ on the GPU/CPU for Intel® processors.

  • Intel® Media SDK: Offers access to hardware-accelerated video codecs and frame processing.

  • OpenCV*: OpenCV* community version compiled for Intel® hardware. Includes PVL libraries for computer vision.

  • OpenVX* version 1.1: Intel's implementation of OpenVX* 1.1, optimized for running on Intel® hardware (CPU, GPU, IPU).

  • Sample Applications: A set of simple console applications demonstrating how to use the Inference Engine in your applications.
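A typical first step with the components above is converting a trained model with the Model Optimizer. The command below is a hedged illustration: the install path assumes the default location (`/opt/intel/openvino`), and the model filenames are placeholders for your own trained model.

```shell
# Set up the toolkit environment (default install location assumed).
source /opt/intel/openvino/bin/setupvars.sh

# Convert a Caffe* model to the Inference Engine's IR format (.xml + .bin).
# squeezenet1.1.caffemodel / .prototxt are placeholder filenames.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model squeezenet1.1.caffemodel \
    --input_proto squeezenet1.1.prototxt \
    --data_type FP16 \
    --output_dir ./ir
```

The resulting `.xml` (topology) and `.bin` (weights) pair in `./ir` is what the Inference Engine and the bundled sample applications consume; `--data_type FP16` is the usual choice when targeting GPU or VPU devices.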
