Support, Resources, and Frequently Asked Questions

You can report OpenVINO toolkit issues to the GitHub repository. For Edge AI Reference Kits issues, join the discussion on the OpenVINO notebooks repository or the Intel Support Forum.

Intel offers Edge AI Reference Kits for specific AI use cases, such as Smart Meter Scanning, Real-time Anomaly Detection, and Intelligent Queue Management, with more added frequently. Take advantage of real-time computer vision to create AI inference solutions using object detection, object segmentation, and, in the future, generative AI.

These kits use pretrained, optimized models that accelerate AI on popular platforms and include detailed documentation, how-to videos, code samples, and GitHub repositories, all of which help you speed up deployment and adoption.

The OpenVINO toolkit accelerates the process of compressing models for AI use cases and deploying them on a variety of hardware platforms. This speeds up the development of AI inference solutions and makes it more efficient for you to turn your ideas into real-world AI applications.
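
As a minimal sketch of that flow in Python, assuming a trained network exported to a hypothetical ONNX file named model.onnx:

```python
import openvino as ov

core = ov.Core()

# Convert the trained model (a hypothetical ONNX file) into
# OpenVINO's in-memory representation.
model = ov.convert_model("model.onnx")

# Optionally save it as OpenVINO IR (.xml/.bin) for later reuse.
ov.save_model(model, "model.xml")

# Compile for a target device; swap "CPU" for "GPU" or another device.
compiled = core.compile_model(model, "CPU")
```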

Yes. The OpenVINO toolkit compiles models to run on many different devices to give you the flexibility to write code once and deploy your model across CPUs, GPUs, VPUs, and other accelerators.
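
As a rough illustration, only the device string changes between targets; this sketch assumes a converted model saved as a hypothetical model.xml IR file:

```python
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # hypothetical IR file

# Compile the same model for every device the runtime can see, plus
# "AUTO", which lets OpenVINO pick the best device on its own.
for device in core.available_devices + ["AUTO"]:
    compiled = core.compile_model(model, device)
    print(f"Compiled for {device}")
```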

To get the best possible performance, it’s important to install and properly set up the latest GPU drivers on your system. Use the guide How to Install Intel GPU Drivers on Windows* and Ubuntu*.

Note: Use the guide to install drivers and set up your system before using the OpenVINO toolkit for GPU-based AI inference solutions.

This guide was tested on Intel® Arc™ graphics and Intel® Data Center GPU Flex Series on systems with Ubuntu 22.04 LTS and Windows 11. To use the OpenVINO toolkit GPU plug-in and offload inference to Intel GPUs, the Intel® Graphics Driver must be properly configured on your system.
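
A quick way to verify the driver and GPU plug-in is to ask the runtime which devices it can see, for example:

```python
import openvino as ov

core = ov.Core()

# "GPU" appears in this list only when the Intel Graphics Driver is
# correctly installed and the GPU plug-in can reach the device.
print(core.available_devices)  # e.g. ['CPU', 'GPU']

if "GPU" in core.available_devices:
    # Print the full device name the plug-in detected.
    print(core.get_property("GPU", "FULL_DEVICE_NAME"))
```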

The new family of discrete GPUs from Intel is not just for gaming; these GPUs can also run AI at the edge or on servers.

The plug-in architecture of the OpenVINO toolkit supports the optimization of AI inference on third-party hardware as well as Intel platforms. See the full list of supported devices here.

It’s optimized for performance. The OpenVINO toolkit runs computationally intensive, deep learning models with minimal impact on accuracy. It includes features that maximize efficiency, such as the AUTO device plug-in and thread scheduling on 12th generation Intel® Core™ processors and later.
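
As a sketch of how those features are requested, the AUTO device and a throughput-oriented hint can be passed at compile time (model.xml is a hypothetical IR file):

```python
import openvino as ov
import openvino.properties.hint as hints

core = ov.Core()
model = core.read_model("model.xml")  # hypothetical IR file

# Let the AUTO plug-in choose the best available device, and ask the
# runtime to tune thread scheduling for throughput.
compiled = core.compile_model(
    model,
    "AUTO",
    {hints.performance_mode(): hints.PerformanceMode.THROUGHPUT},
)
```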

The OpenVINO toolkit is highly compatible with multiple frameworks and standardized protocols. OpenVINO™ model server uses the same architecture and API as TensorFlow* Serving and KServe to make deployment more scalable for modern workloads.

The OpenVINO toolkit minimizes the time it takes to process input data and produce a prediction as output, so decision-making is faster and your system interactions are more efficient.

With the Model Conversion API and the Neural Network Compression Framework (NNCF), OpenVINO offers several optimization techniques to enhance performance and reduce latency.

Read about the various model compression options, such as quantization-aware training and post-training quantization, in this model optimization guide.
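
To make the post-training option concrete, here is a minimal NNCF sketch; the IR file name, input shape, and random calibration tensors are assumptions standing in for a real model and dataset:

```python
import numpy as np
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # hypothetical IR file

# Post-training quantization needs a few hundred representative inputs.
# Random tensors stand in for a real calibration set here (assumed
# input shape: [1, 3, 224, 224]).
data_items = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(300)]
calibration_dataset = nncf.Dataset(data_items)

# Quantize weights and activations to INT8 with minimal accuracy impact.
quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "model_int8.xml")
```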

The OpenVINO model server, part of the OpenVINO toolkit, lets you host models and efficiently deploy applications on a wide range of hardware. You can drop in AI inference acceleration without rewriting code. It offers the following benefits (see the client sketch after the list):

  • Your model is made accessible over standard network protocols through a common API, which is also used by KServe.
  • Remote AI inference lets you create lightweight clients that focus on API calls and require fewer updates.
  • Applications can be developed and maintained independently of the model framework, hardware, and infrastructure.
  • Access control to the model is easier because the topology and weights are not exposed through client applications.
  • Horizontal and vertical scaling of AI inference is more efficient with this deployment structure.
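
To illustrate how small such a client can be, this sketch sends one request to the KServe v2 REST endpoint; the host, port, model name, input name, and shape are all assumptions to adjust for your deployment:

```python
import requests

# KServe v2 inference endpoint assumed to be exposed by an OpenVINO
# model server on localhost:8000, serving a model named "my_model".
url = "http://localhost:8000/v2/models/my_model/infer"

payload = {
    "inputs": [
        {
            "name": "input",            # assumed input tensor name
            "shape": [1, 3, 224, 224],  # assumed input shape
            "datatype": "FP32",
            "data": [0.0] * (1 * 3 * 224 * 224),
        }
    ]
}

response = requests.post(url, json=payload)
print(response.json())
```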

By providing a comprehensive set of tools and resources, OpenVINO helps you streamline workflows, optimize AI inference, and improve the real-world performance of your AI models.