Wednesday, November 16, 2016
Intel Prepares its AI Strategy, Announces New Xeon Chips And An FPGA Card

At the Supercomputing 2016 show this week, Intel is showing two new versions of its Xeon processor line and a new FPGA card for deep learning.

AI is all around us, from the commonplace (talk-to-text, photo tagging, fraud detection) to the cutting edge (precision medicine, injury prediction, autonomous cars). The growth of data, better algorithms, and faster compute capabilities are driving this revolution in artificial intelligence.

Machine learning, and its subset deep learning, are key methods in the expanding field of AI. Deep learning is a set of machine learning algorithms that use deep neural networks to power advanced applications such as image recognition and computer vision, with wide-ranging use cases across a variety of industries.
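To make the term concrete (a generic illustration, not Intel-specific code): a "deep" neural network is just several stacked layers, each a weighted sum followed by a nonlinearity. A minimal forward pass in plain Python:

```python
def relu(v):
    # Rectified linear unit: the standard nonlinearity between layers
    return [max(0.0, x) for x in v]

def dense(x, weights, biases):
    # Fully connected layer: out[j] = sum_i x[i] * weights[j][i] + biases[j]
    return [sum(xi * wi for xi, wi in zip(x, row)) + b
            for row, b in zip(weights, biases)]

# Toy "image": three pixel intensities -> 2 hidden units -> 1 score.
# Real networks learn the weights; these are arbitrary illustrative values.
x = [0.5, -1.0, 2.0]
h = relu(dense(x, [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], [0.0, 0.1]))
y = dense(h, [[1.0, -1.0]], [0.0])   # y[0] is approximately -0.55
```

Stacking more such layers, with learned weights, is what makes a network "deep"; image recognition and computer vision models are built from many of them.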

Intel continues to position Xeon Phi, a massively multicore x86 processor, as its key weapon against graphics processors from Nvidia and AMD. At IDF in August, it said the Knights Mill version of Phi, the first to act as both host and accelerator, would ship in 2017.

Intel dominates the high margin market for server processors, but machine learning demands more performance on highly parallel tasks than those chips offer.

Google is already using its own ASIC to accelerate these kinds of machine learning tasks, the same workloads Intel now targets with a new PCI Express card based on an Altera Arria FPGA. Facebook, meanwhile, designed its own GPU server using Nvidia chips for the computationally intensive job of training neural networks.

Meanwhile, Nvidia launched its own GPU server earlier this year, and IBM and Nvidia collaborated on another one using Power processors. For its part, AMD rolled out an open software initiative for its GPUs earlier this year.

To deliver optimal solutions for each customer's machine learning requirements, Intel offers a flexible, performance-optimized portfolio of AI solutions powered by Intel Xeon processors, Intel Xeon Phi processors, or systems using FPGAs.

One of the biggest challenges of implementing FPGAs is the work needed to lay out the specific circuitry for each workload and algorithm, and to develop custom software interfaces for each application. To make this easier, the Intel Deep Learning Inference Accelerator (Intel DLIA) was designed to deliver the latest deep learning capabilities via FPGA technology as a turnkey solution. Intel DLIA is a complete solution, combining hardware, software, and IP into an end-to-end package that provides superior power efficiency for inference on deep learning workloads.

The Intel DLIA brings together an Intel Xeon processor and an Intel Arria 10 FPGA with Intel's software ecosystem for AI and machine learning, including frameworks such as Intel-optimized Caffe and Intel's Math Kernel Library for Deep Neural Networks (Intel MKL-DNN).
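Frameworks such as Caffe and libraries such as MKL-DNN exist largely to accelerate one primitive: the 2-D convolution at the heart of a CNN. As a plain-Python sketch of what that primitive computes (illustration only, not the MKL-DNN API):

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most deep
    learning frameworks): slide the kernel over the image, sum the products."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(iw - kw + 1)]
            for r in range(ih - kh + 1)]

# A 3x3 horizontal-edge kernel applied to a tiny 4x4 "image"
img = [[1, 1, 1, 1],
       [1, 1, 1, 1],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
k = [[ 1,  1,  1],
     [ 0,  0,  0],
     [-1, -1, -1]]
out = conv2d(img, k)   # 2x2 feature map; every position sees the edge
```

A production library computes exactly this, but vectorized and tiled for the target hardware; on the DLIA, the equivalent circuitry is laid out on the FPGA fabric.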

When it ships next year, Intel's Arria 10 FPGA PCIe card will quadruple performance per watt on so-called scoring, or inference, jobs.
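Taken at face value, "quadrupling performance per watt" means either four times the throughput in the same power envelope or the same throughput at a quarter of the power. A sketch with made-up baseline numbers, purely for illustration:

```python
def perf_per_watt(inferences_per_second, watts):
    return inferences_per_second / watts

# Hypothetical baseline: 400 inferences/s at 200 W -> 2.0 inferences/s/W
baseline = perf_per_watt(400, 200)

# A 4x perf/watt part could hold 400 inferences/s at only 50 W ...
assert perf_per_watt(400, 50) == 4 * baseline
# ... or deliver 1600 inferences/s inside the same 200 W budget.
assert perf_per_watt(1600, 200) == 4 * baseline
```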

The Intel Deep Learning Inference Accelerator will come with intellectual property (IP) for convolutional neural networks (CNNs), supporting targeted CNN-based topologies and variations, all reconfigurable through software.

The Intel Deep Learning Inference Accelerator will be available in early 2017.

Intel is working on support for seven machine learning frameworks, including the Neon software it acquired with Nervana.
The Nervana offering alone will include the Neon framework as part of an end-to-end, enterprise-focused solution with solution blueprints and reference platforms.

The new Knights Mill version of Xeon Phi coming next year will be optimized for the toughest AI jobs, such as training neural nets. It will support mixed-precision modes, likely including the 16-bit precision formats becoming widely adopted to speed up results when combing through large data sets.
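A quick illustration of the 16-bit trade-off, using Python's `struct` module to round values to IEEE 754 half precision (generic numerics, not Knights Mill-specific behavior):

```python
import struct

def to_fp16(x):
    """Round a Python float to IEEE 754 half precision and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# fp16 keeps only a 10-bit mantissa, so resolution is coarse:
assert to_fp16(2049.0) == 2048.0   # consecutive integers collide above 2048
assert to_fp16(1.0001) == 1.0      # tiny updates to a weight near 1.0 vanish

# This loss of resolution is why "mixed precision" matters: compute in
# fp16 for speed, but keep critical accumulations in higher precision.
```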

Intel's latest Xeon chip on display at the supercomputing show is the new 14nm Broadwell-class Xeon E5-2699A, with a 55 Mbyte L3 cache and 22 cores running at 2.4 GHz. It sports a mere 4.8 percent gain over the prior chip on the Linpack benchmark popular in high-performance computing.
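A back-of-the-envelope check on those figures: theoretical peak is cores times clock times FLOPs per cycle. The 16 double-precision FLOPs/cycle/core below is an assumption (Broadwell-class AVX2 with two FMA units: 2 units x 4 doubles x 2 ops); measured Linpack scores always land below this peak:

```python
# Rough theoretical peak for the 22-core, 2.4 GHz part quoted above
cores = 22
clock_mhz = 2400
flops_per_cycle = 16   # assumption: AVX2, dual FMA, double precision

peak_mflops = cores * clock_mhz * flops_per_cycle
print(peak_mflops / 1000)   # 844.8 GFLOPS theoretical DP peak
```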

Intel currently has no plans to support open interconnects for accelerators, such as CCIX, or for memory, such as Gen-Z, recently launched by companies including Dell and Hewlett Packard Enterprise.

So far, Intel has 50 large deployments of the discrete version of its Xeon Phi accelerator. This week it starts shipping a version with Omni-Path, an Intel interconnect that is an alternative to InfiniBand and Ethernet.

Intel is also integrating Omni-Path into Skylake, its next 14nm Xeon processor. The company is demoing the chip for the first time at the supercomputing event. The processors will ship next year and will also be available in versions without Omni-Path. The new processors will offer HPC optimizations such as Intel Advanced Vector Extensions-512 (AVX-512), which boost floating-point calculations and encryption algorithms.

Omni-Path is currently used in more than half of all servers supporting 100 Gbit/second links.

CDRINFO.COM 1998-2018 - All rights reserved.