Tuesday, October 10, 2017
AMD, Intel, ARM, IBM and Others Support the Open Neural Network Exchange Format for AI


AMD, ARM, Huawei, IBM and Intel have announced their support for the Open Neural Network Exchange (ONNX) format, which was co-developed by Microsoft and Facebook in order to reduce friction for developing and deploying AI.

Introduced last month, the Open Neural Network Exchange (ONNX) format is a standard for representing deep learning models that enables models to be transferred between frameworks such as PyTorch, Caffe2, and Microsoft Cognitive Toolkit. ONNX is the first step toward an open ecosystem in which AI developers can easily move between tools and choose the combination that works best for them.
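To make the idea concrete, here is a toy sketch (plain Python, not the real ONNX API; all names such as `export_model` and the `"toy-nnx"` format tag are hypothetical) of what a framework-neutral exchange format accomplishes: a model defined in one framework's representation is serialized to a shared description that any other framework can load.

```python
# Toy illustration of an exchange format's role (NOT the real ONNX API).
# Two hypothetical frameworks with different in-memory model
# representations round-trip a model through one neutral document.
import json

def export_model(layers):
    """Serialize a model (a list of (op, params) pairs) into a neutral,
    JSON-based interchange document -- the role ONNX plays for real
    frameworks such as PyTorch and Caffe2."""
    graph = {"format": "toy-nnx", "version": 1,
             "nodes": [{"op": op, "params": params} for op, params in layers]}
    return json.dumps(graph)

def import_model(document):
    """Reconstruct the model from the interchange document; any tool
    that understands the format can load it."""
    graph = json.loads(document)
    return [(node["op"], node["params"]) for node in graph["nodes"]]

# A small model defined in "framework A" ...
model = [("conv2d", {"filters": 32, "kernel": 3}), ("relu", {})]
# ... exported once, then imported unchanged by "framework B".
assert import_model(export_model(model)) == model
```

In practice ONNX defines a protobuf-based graph of standardized operators rather than JSON, but the round-trip shape shown here is the core idea.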

Standardization is good for both the compute industry and for developers because it enables a level of interoperability between various products and frameworks, while streamlining the path from development to production.

By joining the project, Intel plans to further expand the choices available to developers, supporting frameworks built on the Intel Nervana Graph library and deployment through the company's Deep Learning Deployment Toolkit.

Intel plans to enable users to convert ONNX models to and from Intel Nervana Graph models, giving users an even broader selection of choice in their deep learning toolkits.

Arm is already working to accelerate Caffe2 on its Arm Cortex-A CPUs, as well as on Arm Mali GPU-based devices that currently run the Facebook application.

CDRINFO.COM 1998-2018 - All rights reserved