OpenVX
Intel's OpenVX API is delivered as part of the Open Visual Inference & Neural network Optimization (OpenVINO™) toolkit, a software development package for building and optimizing computer vision and image processing pipelines for Intel System-on-Chips (SoCs).
Performance:
Intel's OpenVX* implementation offers CPU kernels that are multi-threaded (with Intel® Threading Building Blocks) and vectorized (with Intel® Integrated Performance Primitives). GPU support is backed by an optimized OpenCL™ implementation.
The implementation supports automatic data tiling for input, intermediate, and output data, so that most read/write operations are performed on local values.
Extensibility:
The SDK also extends the original OpenVX standard with specific APIs and numerous kernel extensions. Refer to the Intel's Extensions to the OpenVX* Primitives chapter for details.
The implementation also allows you to add performance-efficient (for example, tiled) versions of your own algorithms to processing pipelines. Refer to the Intel's Extensions to the OpenVX* API: Advanced Tiling chapter for the CPU-efficient way, and to the Intel's Extensions to the OpenVX* API: OpenCL™ Custom Kernels chapter for GPU (OpenCL) specific information.
Heterogeneity:
Support for both task and data parallelism to maximize utilization of compute resources such as the CPU, GPU, and IPU.
General system-level device affinities, as well as a fine-grained API for orchestrating individual nodes via the notion of targets. Refer to the Heterogeneous Computing with the OpenVINO™ toolkit chapter for details.
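As an illustration of per-node targeting, the sketch below uses the standard OpenVX `vxSetNodeTarget` call to request GPU execution for a single node. The target string `"intel.gpu"` is an assumption for illustration; consult the toolkit documentation for the exact target names your runtime exposes.

```c
#include <VX/vx.h>

/* Hedged sketch: build one Gaussian blur node on an existing graph and
 * request that it run on the GPU. `graph`, `in`, and `out` are assumed
 * to have been created elsewhere on the same context. */
vx_node make_gpu_blur_node(vx_graph graph, vx_image in, vx_image out)
{
    vx_node node = vxGaussian3x3Node(graph, in, out);

    /* Ask the runtime to schedule this node on the named target.
     * "intel.gpu" is an illustrative target string, not confirmed
     * by this document. */
    if (vxSetNodeTarget(node, VX_TARGET_STRING, "intel.gpu") != VX_SUCCESS) {
        /* If the request fails, the node keeps its default target;
         * the graph remains valid and runnable. */
    }
    return node;
}
```

If no target is set, the runtime chooses a device according to the system-level affinities mentioned above.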
NOTE: IPU support is at an experimental stage. Refer to the General Note on the IPU Support section for details.
Intel's implementation of OpenVX uses a modular approach with a common runtime and individual device-specific plugins.
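Since plugins sit behind the common runtime, application code stays portable standard OpenVX. A minimal sketch of the usual lifecycle (context, data objects, graph, verify, process) using only standard OpenVX 1.x calls:

```c
#include <VX/vx.h>

int main(void)
{
    /* Create a context; the vendor runtime (here, Intel's) and its
     * device plugins are selected by the library you link against. */
    vx_context context = vxCreateContext();
    if (vxGetStatus((vx_reference)context) != VX_SUCCESS)
        return 1;

    /* Trivial pipeline: 640x480 grayscale input -> Gaussian 3x3 -> output. */
    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_graph graph  = vxCreateGraph(context);
    vx_node  node   = vxGaussian3x3Node(graph, input, output);

    /* Verification lets the runtime validate the graph and pick
     * optimized kernel implementations before execution. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);

    vxReleaseNode(&node);
    vxReleaseGraph(&graph);
    vxReleaseImage(&input);
    vxReleaseImage(&output);
    vxReleaseContext(&context);
    return 0;
}
```

The same source builds against any conformant OpenVX implementation; only the linked runtime determines which device plugins execute the nodes.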