Intel Open Sources the nGraph Compiler for Deep Learning Systems

Enterprise & IT | Apr 19, 2018

Intel's nGraph Compiler, a framework-neutral deep neural network (DNN) model compiler, is now open source. It accepts models from multiple deep learning frameworks and optimizes them for multiple hardware targets.

"Finding the right technology for AI solutions can be daunting for companies, and it's our goal to make it as easy as possible. With the nGraph Compiler, data scientists can create deep learning models without having to think about how that model needs to be adjusted across different frameworks, and its open source nature means getting access to the tools they need, quickly and easily," said Arjun Bansal, VP, AI Software, Intel.

With nGraph, data scientists can focus on data science rather than worrying about how to adapt their DNN models to train and run efficiently on different devices.

Currently, the nGraph Compiler supports six third-party deep learning frameworks: TensorFlow, MXNet, neon, PyTorch, CNTK, and Caffe2. Models from these frameworks can be run on three classes of devices: Intel architecture (x86, including Intel Xeon and Xeon Phi), NVIDIA GPUs (via cuDNN), and the Intel Nervana Neural Network Processor (NNP).
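
The announcement itself does not include usage details, but the bridge-style workflow it describes can be sketched roughly as follows. The ngraph_bridge module name refers to Intel's separate TensorFlow bridge project and the exact import is an assumption, not something stated in this article; the point is that the model code stays plain TensorFlow while execution is routed to an nGraph backend.

    # Rough sketch of the bridge-style workflow (TensorFlow 1.x syntax).
    # ASSUMPTION: the "ngraph_bridge" module comes from Intel's separate
    # TensorFlow bridge project; it is not documented in this announcement.
    import numpy as np
    import tensorflow as tf
    import ngraph_bridge  # assumed: importing registers the nGraph backend with TensorFlow

    # The model definition contains nothing nGraph- or device-specific.
    x = tf.placeholder(tf.float32, shape=(None, 4))
    w = tf.Variable(tf.random_normal((4, 2)))
    y = tf.nn.softmax(tf.matmul(x, w))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y, feed_dict={x: np.ones((3, 4), dtype=np.float32)}))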

When Deep Learning (DL) frameworks first emerged as the vehicle for running training and inference models, they were designed around kernels optimized for a particular device. As a result, many device details were being exposed in the model definitions, complicating the adaptability and portability of DL models to other, or more advanced, devices.
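
As a concrete illustration of the kind of device detail that ends up in a model definition, here is a minimal TensorFlow 1.x-style sketch (the article does not single out any particular framework); the explicit device placement ties the model to one hardware configuration.

    # Illustration of hardware assumptions baked into a model definition
    # via explicit device placement (TensorFlow 1.x syntax; example only).
    import tensorflow as tf

    with tf.device('/gpu:0'):        # the model now assumes a CUDA GPU is present
        a = tf.random_normal((1024, 1024))
        b = tf.random_normal((1024, 1024))
        c = tf.matmul(a, b)          # kernel selection is tied to that device

    # With soft placement disabled, this graph cannot even run on a CPU-only machine.
    with tf.Session(config=tf.ConfigProto(allow_soft_placement=False)) as sess:
        print(sess.run(c).shape)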

Under the traditional approach, moving a model to an upgraded device is a tedious process for the algorithm developer. Enabling a model to run on a different framework is also problematic: the developer must separate the essence of the model from the performance adjustments made for the original device, translate it into similar ops in the new framework, and finally make the necessary changes for the preferred device configuration on the new framework.

Intel designed the nGraph library to reduce these kinds of engineering complexities. While optimized kernels for DL primitives are provided through the project and via libraries like Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN), there are also several compiler-inspired ways in which performance can be further optimized.
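
The article does not spell out which compiler passes are involved; as a generic illustration of a compiler-inspired graph optimization, the toy pass below fuses a multiply followed by an add into a single fused multiply-add node over a made-up graph representation (this is not nGraph's actual intermediate representation or API).

    # Toy illustration of a compiler-style graph optimization (operator fusion).
    # The Node/graph structures here are made up for illustration and are NOT nGraph's IR.
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Node:
        op: str                  # e.g. "mul", "add", "fma"
        inputs: List[str]        # names of input tensors
        output: str              # name of the produced tensor


    def fuse_mul_add(graph: List[Node]) -> List[Node]:
        """Rewrite a mul immediately followed by an add into one fused multiply-add."""
        fused: List[Node] = []
        i = 0
        while i < len(graph):
            node = graph[i]
            nxt = graph[i + 1] if i + 1 < len(graph) else None
            if (node.op == "mul" and nxt is not None and nxt.op == "add"
                    and node.output in nxt.inputs):
                other = [t for t in nxt.inputs if t != node.output]
                fused.append(Node("fma", node.inputs + other, nxt.output))
                i += 2           # skip the add that was absorbed into the fma
            else:
                fused.append(node)
                i += 1
        return fused


    # y = a*b + c becomes a single "fma" node after the pass.
    print(fuse_mul_add([Node("mul", ["a", "b"], "t0"),
                        Node("add", ["t0", "c"], "y")]))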

Tags: deep learning, Intel