OpenVX

Intel's OpenVX* API is delivered as part of the Open Visual Inference & Neural network Optimization (OpenVINO™) toolkit, a software development package for building and optimizing computer vision and image processing pipelines on Intel System-on-Chips (SoCs).

Intel's OpenVX* Implementation: Key Features

Performance:

  • Intel's OpenVX* implementation offers CPU kernels that are multi-threaded (with Intel® Threading Building Blocks) and vectorized (with Intel® Integrated Performance Primitives). GPU support is backed by an optimized OpenCL™ implementation.

  • The implementation supports automatic data tiling for input, intermediate, and output data, so that most read/write operations act on small, local blocks rather than whole buffers.
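The idea behind data tiling can be sketched in plain C: instead of streaming over an entire image, the work is split into small blocks so each read/write hits data that is already cache-resident. This is only an illustration of the general technique, assuming a hypothetical `brighten_tiled` operation and a fixed 64x64 tile size; it is not Intel's actual implementation, which tiles automatically and transparently.

```c
#include <stddef.h>

/* Illustrative tile size; real implementations pick it from cache geometry. */
#define TILE 64

/* Hypothetical per-pixel operation (add `delta` with clamping), processed
 * tile by tile so each inner loop touches one small block of the image. */
void brighten_tiled(unsigned char *img, size_t width, size_t height, int delta)
{
    for (size_t ty = 0; ty < height; ty += TILE) {
        for (size_t tx = 0; tx < width; tx += TILE) {
            /* Clip the tile against the image border. */
            size_t ymax = (ty + TILE < height) ? ty + TILE : height;
            size_t xmax = (tx + TILE < width)  ? tx + TILE : width;
            for (size_t y = ty; y < ymax; ++y) {
                for (size_t x = tx; x < xmax; ++x) {
                    int v = img[y * width + x] + delta;
                    img[y * width + x] =
                        (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v));
                }
            }
        }
    }
}
```

The tiled traversal produces exactly the same result as a naive row-by-row loop; the benefit is purely in data locality, which matters most when several kernels are fused over the same tile.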

Extensibility:

Heterogeneity:

  • Support for both task and data parallelism to maximize utilization of compute resources such as the CPU, GPU, and IPU.

  • General system-level device affinities, as well as a fine-grain API for orchestrating individual nodes via the notion of targets. Refer to the Heterogeneous Computing with the OpenVINO™ toolkit chapter for details.

NOTE: IPU support is at an experimental stage. Refer to the General Note on the IPU Support section for details.
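The per-node orchestration described above can be sketched with the standard OpenVX 1.1 `vxSetNodeTarget` API, which pins an individual graph node to a named target. This is a sketch, not a verified sample: the target strings `"intel.cpu"` and `"intel.gpu"` are assumptions for illustration, and the actual target names accepted by Intel's implementation should be taken from its documentation.

```c
#include <VX/vx.h>

/* Sketch: build a two-node pipeline and assign each node to a different
 * target. Target strings below are assumed names, not verified ones. */
vx_status run_two_target_pipeline(vx_context context, vx_image in, vx_image out)
{
    vx_graph graph = vxCreateGraph(context);
    /* Virtual image: an intermediate the runtime is free to tile/optimize. */
    vx_image tmp = vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT);

    vx_node blur   = vxGaussian3x3Node(graph, in, tmp);
    vx_node smooth = vxBox3x3Node(graph, tmp, out);

    /* Fine-grain orchestration: one node on the CPU, the other on the GPU. */
    vxSetNodeTarget(blur,   VX_TARGET_STRING, "intel.cpu"); /* assumed name */
    vxSetNodeTarget(smooth, VX_TARGET_STRING, "intel.gpu"); /* assumed name */

    vx_status status = vxVerifyGraph(graph);
    if (status == VX_SUCCESS)
        status = vxProcessGraph(graph);

    vxReleaseNode(&blur);
    vxReleaseNode(&smooth);
    vxReleaseImage(&tmp);
    vxReleaseGraph(&graph);
    return status;
}
```

Because the two nodes are connected only through a virtual image, the runtime can still exploit task parallelism across invocations while honoring the per-node affinities.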

Intel's implementation of OpenVX uses a modular approach with a common runtime and individual device-specific plugins.

OpenVINO™ toolkit OpenVX* Architecture
