Steps
Download the Intel® Distribution of OpenVINO™ toolkit package file from Intel® Distribution of OpenVINO™ toolkit for Linux*. Select the Intel® Distribution of OpenVINO™ toolkit for Linux package from the dropdown menu.
Open a command prompt terminal window.
Change directories to where you downloaded the Intel Distribution of OpenVINO toolkit for Linux* package file.
If you downloaded the package file to the current user's Downloads directory: cd ~/Downloads/
By default, the file is saved as l_openvino_toolkit_p_<version>.tgz.
Unpack the .tgz file: tar -xvzf l_openvino_toolkit_p_<version>.tgz
The files are unpacked to the l_openvino_toolkit_p_<version> directory.
Go to the l_openvino_toolkit_p_<version> directory: cd l_openvino_toolkit_p_<version>
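For example, assuming a hypothetical package version of 2020.1.023 (substitute the actual file name you downloaded), the full sequence looks like this:
# the version number below is hypothetical; use your downloaded file name
cd ~/Downloads/
tar -xvzf l_openvino_toolkit_p_2020.1.023.tgz
cd l_openvino_toolkit_p_2020.1.023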
If you have a previous version of the Intel Distribution of OpenVINO toolkit installed, rename or delete these two directories:
/home/<user>/inference_engine_samples
/home/<user>/openvino_models
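If you prefer to keep the old directories rather than delete them, a minimal sketch of renaming them is:
# back up directories left by a previous OpenVINO installation
mv /home/<user>/inference_engine_samples /home/<user>/inference_engine_samples.bak
mv /home/<user>/openvino_models /home/<user>/openvino_models.bak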
Installation Notes:
Choose an installation option and run the related script as root.
You can use either a GUI installation wizard or command-line instructions (CLI).
Screenshots are provided for the GUI only, but the information below also applies to the CLI installation, which presents the same choices and tasks.
Choose your installation option:
Option 1: GUI Installation Wizard:
sudo ./install_GUI.sh
Option 2: Command-Line Instructions:
sudo ./install.sh
Follow the instructions on your screen. Watch for informational messages in case you must complete additional steps.
If you select the default options, the GUI displays an Installation summary screen.
Optional: You can choose Customize to change the installation directory or the components you want to install.
When installed as root, the default installation directory for the Intel Distribution of OpenVINO toolkit is /opt/intel/openvino_<version>/. For simplicity, a symbolic link to the latest installation is also created: /opt/intel/openvino/.
NOTE: The Intel® Media SDK component is always installed in the /opt/intel/mediasdk directory regardless of the OpenVINO installation path chosen.
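To confirm the layout described above, a quick check (assuming the default installation path) is:
# the openvino symlink should point to the versioned openvino_<version> directory
ls -l /opt/intel/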
A Complete screen indicates that the core components have been installed.
The first core components are installed. Continue to the next section to install additional dependencies.
These dependencies are required for:
Intel-optimized OpenCV
Deep Learning Inference Engine
Deep Learning Model Optimizer tools
Change to the install_dependencies directory: cd /opt/intel/openvino/install_dependencies
Run a script to download and install the external software dependencies: sudo -E ./install_openvino_dependencies.sh
The dependencies are installed. Continue to the next section to set your environment variables.
You must update several environment variables before you can compile and run OpenVINO™ applications. Run the following script to temporarily set your environment variables: source /opt/intel/openvino/bin/setupvars.sh
Optional: The OpenVINO environment variables are removed when you close the shell. You can permanently set them as follows:
Open the .bashrc file in <user_directory>: vi <user_directory>/.bashrc
Add this line to the end of the file: source /opt/intel/openvino/bin/setupvars.sh
Save and close the file: press the Esc key and type :wq.
To test your change, open a new terminal. You will see [setupvars.sh] OpenVINO environment initialized.
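As an alternative to editing the file in vi, you can append the same line from the shell (a minimal sketch, assuming the default installation path):
# append the setup script to the current user's .bashrc
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc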
The environment variables are set. Continue to the next section to configure the Model Optimizer.
The Model Optimizer is a Python*-based command line tool for importing trained models from popular deep learning frameworks such as Caffe*, TensorFlow*, Apache MXNet*, ONNX* and Kaldi*.
The Model Optimizer is a key component of the Intel Distribution of OpenVINO toolkit. You cannot perform inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The Intermediate Representation is a pair of files that describe the whole model:
.xml: Describes the network topology
.bin: Contains the weights and biases binary data
For more information about the Model Optimizer, refer to the Model Optimizer Developer Guide.
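As a preview of how the Model Optimizer is used once configured, a minimal sketch of converting a model to IR follows; the model file name and output directory here are hypothetical:
# convert a (hypothetical) ONNX model to IR; run this only after configuring the prerequisites below
cd /opt/intel/openvino/deployment_tools/model_optimizer
python3 mo.py --input_model ~/models/my_model.onnx --output_dir ~/models/ir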
You can choose to either configure all supported frameworks at once OR configure one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.
NOTE: Since the TensorFlow framework is not officially supported on CentOS*, the Model Optimizer for TensorFlow cannot be configured and run on those systems.
IMPORTANT: Internet access is required to execute the following steps successfully. If you access the Internet only through a proxy server, make sure it is configured in your OS environment.
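If you are behind a proxy, a minimal sketch of configuring it for the current shell (replace the hypothetical host and port with your own) is:
# hypothetical proxy host and port; export so child processes inherit them
# (sudo -E, used earlier, preserves exported variables when running scripts as root)
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080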
NOTE: If you installed the Intel® Distribution of OpenVINO™ toolkit to a non-default install directory, replace /opt/intel with the directory in which you installed the software.
Option 1: Configure all supported frameworks at the same time
Go to the Model Optimizer prerequisites directory: cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
Run the script to configure the Model Optimizer for Caffe, TensorFlow, MXNet, Kaldi*, and ONNX: sudo ./install_prerequisites.sh
Option 2: Configure each framework separately
Configure individual frameworks separately ONLY if you did not select Option 1 above.
Go to the Model Optimizer prerequisites directory: cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
Run the script for your model framework. You can run more than one script:
For Caffe: sudo ./install_prerequisites_caffe.sh
For TensorFlow: sudo ./install_prerequisites_tf.sh
For MXNet: sudo ./install_prerequisites_mxnet.sh
For ONNX: sudo ./install_prerequisites_onnx.sh
For Kaldi: sudo ./install_prerequisites_kaldi.sh
The Model Optimizer is configured for one or more frameworks.
You are ready to compile the samples by running the verification scripts.
IMPORTANT: This section is required. In addition to confirming your installation was successful, demo scripts perform other steps, such as setting up your computer to use the Inference Engine samples.
To verify the installation and compile two samples, run the verification applications provided with the product on the CPU:
Go to the Inference Engine demo directory: cd /opt/intel/openvino/deployment_tools/demo
Run the Inference Pipeline verification script: ./demo_security_barrier_camera.sh
This script downloads three pre-trained model IRs, builds the Security Barrier Camera Demo application, and runs it with the downloaded models and the car_1.bmp image from the demo directory to show an inference pipeline. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.
First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.
When the script completes, you will see an image that displays the resulting frame with detections rendered as bounding boxes and text. Close the image viewer window to complete the verification script.
To learn about the verification scripts, see the README.txt file in /opt/intel/openvino/deployment_tools/demo.
Run the Image Classification verification script: ./demo_squeezenet_download_convert_run.sh
This verification script downloads a SqueezeNet model and uses the Model Optimizer to convert it to the .bin and .xml Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware.
This verification script builds the Image Classification Sample application and runs it with the car.png image located in the demo directory. When the verification script completes, you will see the label and confidence for the top-10 categories printed to the screen.
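If you want to rerun the compiled sample by hand afterward, a sketch follows; the binary name, build directory, and model path are assumptions based on typical demo-script defaults, so substitute the locations the script actually reports on your system:
# all paths below are assumptions; use the locations printed by the demo script
source /opt/intel/openvino/bin/setupvars.sh
cd ~/inference_engine_samples_build/intel64/Release
./classification_sample_async -i /opt/intel/openvino/deployment_tools/demo/car.png -m ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -d CPU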