
Intel Prepares its AI Strategy, Announces New Xeon Chips And An FPGA Card

PC components | Nov 16, 2016

At the Supercomputing 2016 show this week, Intel is showing two new versions of its Xeon processor and a new FPGA card for deep learning. AI is all around us, from the commonplace (talk-to-text, photo tagging, fraud detection) to the cutting edge (precision medicine, injury prediction, autonomous cars). The growth of data, better algorithms, and faster compute capabilities are driving this revolution in artificial intelligence.

Machine learning and its subset, deep learning, are key methods for the expanding field of AI. Deep learning is a set of machine learning algorithms that use deep neural networks to power advanced applications such as image recognition and computer vision, with wide-ranging use cases across a variety of industries.
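A deep neural network, at its simplest, is a stack of layers, each multiplying its input by a weight matrix and applying a nonlinearity. The following is a minimal illustrative sketch (not Intel's software stack; the layer sizes and random weights are arbitrary placeholders):

```python
import numpy as np

def relu(x):
    # Rectified linear unit: the common nonlinearity between layers.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Pass the input through each (weights, bias) pair in turn.
    for w, b in layers:
        x = relu(x @ w + b)
    return x

rng = np.random.default_rng(0)
# A tiny "deep" network: 8 inputs -> 16 hidden -> 16 hidden -> 4 outputs.
sizes = [8, 16, 16, 4]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

out = forward(rng.standard_normal(8), layers)
print(out.shape)  # (4,)
```

Training such a network means tuning the weights from data; inference (or "scoring") means running this forward pass on new inputs, which is the workload the accelerators discussed below target.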

Intel continues to position Xeon Phi, a massively multicore x86 processor, as its key weapon against graphics processors from Nvidia and AMD. At IDF in August it said the Knights Mill version of Phi, the first to act as both host and accelerator, will ship in 2017.

Intel dominates the high margin market for server processors, but machine learning demands more performance on highly parallel tasks than those chips offer.

Google is already using its own ASIC to accelerate machine learning tasks; Intel is targeting the same workloads with a new PCI Express card built around an Altera Arria FPGA. Facebook designed its own GPU server using Nvidia chips for the computationally intensive job of training neural networks.

Meanwhile, Nvidia launched its own GPU server earlier this year, and IBM and Nvidia collaborated on another one using Power processors. For its part, AMD rolled out an open software initiative for its GPUs earlier this year.

To deliver optimal solutions for each customer's machine learning requirements, Intel offers a flexible, performance-optimized portfolio of AI solutions powered by Intel Xeon processors, Intel Xeon Phi processors, or systems using FPGAs.

One of the biggest challenges of implementing FPGAs is the work needed to lay out the specific circuitry for each workload and algorithm, and to develop custom software interfaces for each application. To make this easier, the Intel Deep Learning Inference Accelerator (Intel DLIA) was designed to deliver the latest deep learning capabilities via FPGA technology as a turnkey solution. Intel DLIA is a complete solution, combining hardware, software, and IP into an end-to-end package that provides superior power efficiency for deep learning inference workloads.

The Intel DLIA brings together an Intel Xeon processor and an Intel Arria 10 FPGA with Intel's software ecosystem for AI and machine learning, including frameworks such as Intel-optimized Caffe and Intel's Math Kernel Library for Deep Neural Networks (Intel MKL-DNN).
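The core operation a CNN inference accelerator like the DLIA spends most of its time on is 2-D convolution: sliding a small filter over an image and computing a dot product at each position. A naive NumPy sketch of that operation (for illustration only; real frameworks and the FPGA IP use heavily optimized implementations):

```python
import numpy as np

def conv2d(image, kernel):
    # Naive "valid" 2-D convolution (really cross-correlation, as in
    # most deep learning frameworks): slide the kernel over the image
    # and take a dot product at each position.
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1.0, -1.0]])  # simple horizontal gradient filter
feature_map = conv2d(image, edge_kernel)
print(feature_map.shape)  # (6, 5)
```

A CNN stacks many such convolutions; because the same small kernel is reused at every position, the arithmetic is highly parallel and regular, which is exactly what FPGAs and GPUs exploit.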

Intel says its Arria 10 FPGA PCIe card will quadruple performance per watt on so-called scoring, or inference, jobs when it ships next year.

The Intel Deep Learning Inference Accelerator will come with intellectual property (IP) for convolutional neural networks (CNNs), supporting targeted CNN-based topologies and variations, all reconfigurable through software.

The Intel Deep Learning Inference Accelerator will be available in early 2017.

Intel is working on support for seven machine learning frameworks, including the Neon software it acquired with Nervana. The Nervana offering will include the Neon framework as part of an end-to-end, enterprise-focused solution with blueprints and reference platforms.

The new Knights Mill version of Xeon Phi coming next year will be optimized for the toughest AI jobs, such as training neural nets. It will support mixed-precision modes, likely including the 16-bit precision becoming widely adopted to speed up the job of getting results when combing through large data sets.
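The appeal of 16-bit precision is easy to demonstrate: half-precision values take half the memory (and memory bandwidth) of single precision, at the cost of roughly three decimal digits of accuracy. A small NumPy illustration of the trade-off (an assumption-free demonstration of the format, not Knights Mill's actual hardware path):

```python
import numpy as np

# A million values in single precision, then cast down to half precision.
x32 = np.linspace(0.0, 1.0, 1_000_000, dtype=np.float32)
x16 = x32.astype(np.float16)

# Half precision halves the memory footprint.
print(x32.nbytes, x16.nbytes)  # 4000000 2000000

# The cost is precision: float16 keeps only ~3 decimal digits,
# so the round-trip error stays below about 1e-3 on this range.
err = np.max(np.abs(x32 - x16.astype(np.float32)))
print(err < 1e-3)  # True
```

Neural network training tolerates this reduced precision well, which is why 16-bit arithmetic roughly doubles effective throughput on memory-bound training workloads.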

Intel's latest Xeon chip on display at the supercomputing show is the new 14nm Broadwell-class Xeon E5-2699A, with a 55-Mbyte L3 cache and 22 cores running at 2.4 GHz. It sports a mere 4.8% gain over the prior chip on the Linpack benchmark popular in high-performance computing.

Intel currently has no plans to support open accelerator interconnects such as CCIX, or memory interconnects such as Gen-Z, recently launched by companies including Dell and Hewlett Packard Enterprise.

So far, Intel has 50 large deployments of the discrete version of its Xeon Phi accelerator. This week it starts shipping a version with Omni-Path, an Intel interconnect that is an alternative to InfiniBand and Ethernet.

Intel is also integrating Omni-Path on Skylake, its next 14nm Xeon processor, which the company is demoing for the first time at the supercomputing event. The processors will ship next year and will also be available in versions without Omni-Path. They will offer HPC optimizations such as Intel Advanced Vector Extensions-512 (AVX-512), which boost floating-point calculations and encryption algorithms.

Omni-Path is currently used in more than half of all servers supporting 100 Gbit/second links.

Tags: Intel, Artificial Intelligence