Intel Introduces "Throughput Mode" in the Intel Distribution of OpenVINO Toolkit

Enterprise & IT | Mar 7, 2019

The latest release of the Intel Distribution of OpenVINO Toolkit includes a CPU “throughput” mode, which is said to accelerate deep learning inference.

One of the biggest challenges in AI is achieving high-performance deep learning inference at real-world scale while leveraging existing infrastructure. The Intel Distribution of OpenVINO toolkit (OpenVINO stands for Open Visual Inference and Neural Network Optimization) is a developer tool suite that accelerates high-performance deep learning inference deployments.

Latency, the execution time of a single inference, is critical for real-time services. Approaches to minimizing latency typically focus on the performance of individual inference requests, limiting parallelism to a single input instance. This often means that real-time inference applications cannot take advantage of the computational efficiencies of batching (combining many input images to achieve optimal throughput), because large batch sizes come with a latency penalty. To address this gap, the latest release of the Intel Distribution of OpenVINO toolkit includes a CPU “throughput” mode.

According to Intel, the new mode enables efficient parallel execution of multiple inference requests through the same CNN, greatly improving throughput. In addition to the reuse of filter weights in convolutional operations (also available with batching), the finer execution granularity of the new mode further improves cache utilization. In “throughput” mode, CPU cores are divided evenly between parallel inference requests, following the general “parallelize the outermost loop first” rule of thumb. This also reduces scheduling and synchronization overhead compared to a latency-oriented approach, in which every CNN operation is parallelized internally across the full number of CPU cores.
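
As a rough illustration, the following Python sketch shows how several inference requests could be launched in parallel on the CPU with the Inference Engine Python API. The model files, input shape, and request count are placeholders, and the exact API calls vary between OpenVINO releases, so treat this as a sketch rather than a canonical recipe.

    # Sketch: parallel inference requests with CPU streams (hypothetical
    # model paths, input shape, and request count).
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")

    # Ask the CPU plugin to pick a suitable number of execution streams,
    # then create several inference requests that can run concurrently.
    exec_net = ie.load_network(
        network=net,
        device_name="CPU",
        config={"CPU_THROUGHPUT_STREAMS": "CPU_THROUGHPUT_AUTO"},
        num_requests=4,
    )

    input_name = next(iter(net.input_info))
    frames = [np.random.rand(1, 3, 224, 224).astype(np.float32)
              for _ in range(len(exec_net.requests))]

    # Start all requests asynchronously, then wait for each to complete.
    for request, frame in zip(exec_net.requests, frames):
        request.async_infer(inputs={input_name: frame})
    for request in exec_net.requests:
        request.wait()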

Intel says that the speedup from the new mode is particularly strong on high-end servers, but also significant on other Intel architecture-based systems.

Topology        Dual-Socket Intel Xeon Platinum 8180 Processor   Intel Core i7-8700K Processor
mobilenet-v2    2.0x                                              1.2x
densenet-121    2.6x                                              1.2x
yolo-v3         3.0x                                              1.3x
se-resnext50    6.7x                                              1.6x

Together with the general threading refactoring also introduced in the R5 release, the toolkit no longer requires tuning OMP_NUM_THREADS, KMP_AFFINITY, or other machine-specific settings to achieve these performance improvements; they can be realized with the out-of-the-box Intel Distribution of OpenVINO toolkit configuration.

To simplify benchmarking, the Intel Distribution of OpenVINO toolkit features a dedicated Benchmark App that lets you vary the number of inference requests running in parallel from the command line. The rule of thumb is to test up to the number of CPU cores in your machine. In addition to the number of inference requests, you can also vary the batch size from the command line to find the throughput sweet spot, as in the example below.
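
For instance, a run with six parallel inference requests and a batch size of 1 might look like the following (the model path is a placeholder, and flag names should be checked against the Benchmark App shipped with your toolkit version):

    benchmark_app -m model.xml -d CPU -api async -nireq 6 -b 1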

Tags: deep learning, Artificial Intelligence, Intel