Intel Introduces "Throughput Mode" in the Intel Distribution of OpenVINO Toolkit
The latest release of the Intel Distribution of OpenVINO Toolkit includes a CPU “throughput” mode, which is said to accelerate deep learning inference.
One of the biggest challenges in AI is delivering high-performance deep learning inference that runs at real-world scale while leveraging existing infrastructure. The Intel Distribution of OpenVINO toolkit (a developer tool suite whose name stands for Open Visual Inference and Neural Network Optimization) accelerates the deployment of high-performance deep learning inference.
Latency, or execution time of an inference, is critical for real-time services. Typically, approaches to minimize latency focus on the performance of single inference requests, limiting parallelism to the individual input instance. This often means that real-time inference applications cannot take advantage of the computational efficiencies that batching (combining many input images to achieve optimal throughput) provides, as high batch sizes come with a latency penalty. To address this gap, the latest release of the Intel Distribution of OpenVINO Toolkit includes a CPU “throughput” mode.
According to Intel, this new mode enables efficient parallel execution of multiple inference requests over the same CNN, greatly improving throughput. In addition to the reuse of filter weights across convolutional operations (also available with batching), the finer execution granularity of the new mode further improves cache utilization. In this “throughput” mode, CPU cores are evenly distributed among parallel inference requests, following the general “parallelize the outermost loop first” rule of thumb. It also reduces the amount of scheduling and synchronization compared to a latency-oriented approach, in which every CNN operation is parallelized internally across the full number of CPU cores.
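To illustrate the idea of running several inference requests in parallel on the CPU, here is a minimal sketch using the OpenVINO Inference Engine Python API. The class and option names shown (IECore, read_network, load_network, CPU_THROUGHPUT_STREAMS) come from later toolkit releases and may differ from the R5 API; the model files are placeholders.

```python
# Minimal sketch: parallel inference requests on the CPU plugin.
# API names are assumptions based on later OpenVINO releases and may differ in R5.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder model files

# Let the CPU plugin choose a suitable number of throughput streams automatically,
# and create several inference requests to keep those streams busy.
exec_net = ie.load_network(
    network=net,
    device_name="CPU",
    config={"CPU_THROUGHPUT_STREAMS": "CPU_THROUGHPUT_AUTO"},
    num_requests=4,
)

input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape

# Start all requests asynchronously; each processes its own input instance.
for request in exec_net.requests:
    request.async_infer({input_name: np.random.rand(*shape).astype(np.float32)})

# Wait for completion and collect per-request outputs.
for request in exec_net.requests:
    request.wait()
    outputs = request.output_blobs
```

In this sketch, throughput comes from keeping several independent requests in flight rather than from batching, so each individual request retains a batch size of one.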
Intel says that the speedup from the new mode is particularly strong on high-end servers, but also significant on other Intel architecture-based systems.
| Topology \ Machine | Dual-Socket Intel Xeon Platinum 8180 Processor | Intel Core i7-8700K Processor |
| --- | --- | --- |
| mobilenet-v2 | 2.0x | 1.2x |
| densenet-121 | 2.6x | 1.2x |
| yolo-v3 | 3.0x | 1.3x |
| se-resnext50 | 6.7x | 1.6x |
Together with the general threading refactoring also introduced in the R5 release, the toolkit no longer requires tuning OMP_NUM_THREADS, KMP_AFFINITY, or other machine-specific settings to achieve these performance improvements; they can be realized with the “out of the box” Intel Distribution of OpenVINO toolkit configuration.
To simplify benchmarking, the Intel Distribution of OpenVINO toolkit features a dedicated Benchmark App that can be used to experiment with the number of inference requests running in parallel from the command line. The rule of thumb is to test up to the number of CPU cores in your machine. In addition to the number of inference requests, the batch size can also be varied from the command line to find the throughput sweet spot, as shown in the example below.
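For illustration, an invocation of the Benchmark App might look like the following; the flag names (-nireq for the number of parallel inference requests, -b for batch size, -api for the execution mode) are taken from later toolkit releases and may differ in R5, and the model path is a placeholder.

```sh
# Hypothetical example: sweep the number of parallel inference requests and the
# batch size on the CPU device; flag names may vary between toolkit releases.
./benchmark_app -m model.xml -d CPU -api async -nireq 8 -b 1
```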