Thursday, May 24, 2018
Intel Nervana NNP-L1000 Neural Network Processor Coming in 2019


Intel provided updates on its newest family of Intel Nervana Neural Network Processors (NNPs) at Intel AI DevCon, the company's inaugural AI developer conference.

The Intel Nervana NNP was designed with the explicit goals of achieving high compute utilization and supporting true model parallelism through multichip interconnects.

Intel is building toward the first commercial NNP product offering, the Intel Nervana NNP-L1000 (Spring Crest), in 2019. The company expects the Intel Nervana NNP-L1000 to achieve 3 to 4 times the training performance of its first-generation Lake Crest product. Intel will also support bfloat16, a numerical format being adopted industry-wide for neural networks, in the Intel Nervana NNP-L1000. Over time, Intel will extend bfloat16 support across its AI product lines, including Intel Xeon processors and Intel FPGAs.
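
For readers unfamiliar with the format, the short Python sketch below (illustrative only, not Intel code) shows the basic idea behind bfloat16: keep float32's sign bit and 8-bit exponent, and truncate the 23-bit mantissa to 7 bits, so dynamic range is preserved while precision is reduced - a trade-off that neural network training tolerates well. The simple truncation here ignores rounding.

    import struct

    def float32_to_bfloat16_bits(x: float) -> int:
        # Pack as big-endian float32, reinterpret as an unsigned 32-bit integer,
        # then keep the top 16 bits: sign, 8 exponent bits, 7 mantissa bits.
        f32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
        return f32_bits >> 16

    def bfloat16_bits_to_float32(b: int) -> float:
        # Shift the 16-bit pattern back into the high half of a float32.
        return struct.unpack(">f", struct.pack(">I", b << 16))[0]

    x = 3.14159265
    b = float32_to_bfloat16_bits(x)
    print(f"{x} -> 0x{b:04x} -> {bfloat16_bits_to_float32(b)}")  # 3.14159265 -> 0x4049 -> 3.140625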

The company showed initial performance benchmarks for Intel's NNP family. These results come from the Intel Nervana NNP (Lake Crest) prototype, on which the company is gathering feedback from its early partners:

  • General Matrix to Matrix Multiplication (GEMM) operations using A(1536, 2048) and B(2048, 1536) matrix sizes have achieved more than 96.4 percent compute utilization on a single chip, representing around 38 TOP/s of actual (not theoretical) performance on a single chip (see the sketch after this list). Multichip distributed GEMM operations that support model-parallel training are achieving nearly linear scaling, with 96.2 percent scaling efficiency for A(6144, 2048) and B(2048, 1536) matrix sizes - enabling multiple NNPs to be connected together and freeing them from the memory constraints of other architectures.
  • Intel measured unidirectional chip-to-chip bandwidth efficiency of 89.4 percent of theoretical bandwidth, at less than 790ns (nanoseconds) of latency, and plans to apply this to its 2.4Tb/s (terabits per second) high-bandwidth, low-latency interconnects.
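
To put those GEMM figures in context, the back-of-the-envelope Python sketch below (illustrative only; the throughput and utilization values are simply the figures quoted above) relates the stated matrix sizes to per-operation FLOP counts and to the implied theoretical peak of the chip.

    M, K, N = 1536, 2048, 1536             # A is M x K, B is K x N
    flops_per_gemm = 2 * M * K * N          # one multiply and one add per multiply-accumulate

    achieved_tops = 38e12                   # ~38 TOP/s actual, as quoted above
    utilization = 0.964                     # 96.4 percent compute utilization
    implied_peak_tops = achieved_tops / utilization

    print(f"FLOPs per GEMM:   {flops_per_gemm / 1e9:.2f} GFLOP")       # ~9.66 GFLOP
    print(f"Implied peak:     {implied_peak_tops / 1e12:.1f} TOP/s")   # ~39.4 TOP/s
    print(f"GEMMs per second: {achieved_tops / flops_per_gemm:,.0f}")  # ~3,932 per second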

All of this happens within a single-chip total power envelope of under 210 watts.


