Toshiba Memory Develops Faster, Energy-efficient Algorithm and Hardware Architecture for Deep Learning Processing

Enterprise & IT | Nov 6, 2018

Toshiba Memory Corp. (TMC) has developed a high-speed, energy-efficient algorithm and hardware architecture for deep learning processing with less degradation of recognition accuracy.

The new deep learning processor, implemented on an FPGA (Field Programmable Gate Array), achieves four times the energy efficiency of conventional processors, according to TMC. The announcement was made at the IEEE Asian Solid-State Circuits Conference 2018 (A-SSCC 2018) in Taiwan on November 6.

Deep learning calculations generally require a large number of multiply-accumulate (MAC) operations, which results in long calculation times and high energy consumption. Techniques that reduce the number of bits used to represent parameters (the bit precision) have been proposed to cut the total amount of computation (one proposed algorithm reduces the bit precision down to one or two bits), but those techniques degrade recognition accuracy. Toshiba claims that its newly developed algorithm reduces the MAC workload by optimizing the bit precision of the MAC operations for individual filters in each layer of a neural network; a single layer generally contains many filters, sometimes several thousand. With the new algorithm, Toshiba says, the MAC operations can be reduced with less degradation of recognition accuracy.
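
The per-filter idea can be sketched in a few lines of Python. The snippet below is only an illustration of the general principle, not Toshiba's actual algorithm: it assumes a plain uniform symmetric quantizer and a made-up selection rule that gives each filter the smallest bit width keeping its quantization error under a threshold (the helpers quantize and choose_bits_per_filter, the error threshold, and the candidate bit widths are all illustrative assumptions).

```python
import numpy as np

def quantize(weights, bits):
    """Uniform symmetric quantization of a filter's weights to the given bit width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return q * scale  # dequantized values, used only to measure the error

def choose_bits_per_filter(layer_weights, max_mse=1e-3, candidate_bits=(2, 3, 4, 6, 8)):
    """Assign each filter the smallest bit width whose quantization MSE stays below max_mse.

    layer_weights: array of shape (num_filters, ...), one row per convolution filter.
    Returns the chosen bit width for every filter.
    """
    chosen = []
    for w in layer_weights:
        for bits in candidate_bits:
            if np.mean((w - quantize(w, bits)) ** 2) <= max_mse:
                chosen.append(bits)
                break
        else:
            chosen.append(candidate_bits[-1])  # fall back to the widest precision
    return chosen

# Example: a layer with 64 filters of shape 3x3x16; filters with small weights
# tend to need fewer bits, reducing the MAC work spent on that filter.
rng = np.random.default_rng(0)
layer = rng.normal(scale=rng.uniform(0.05, 1.0, size=(64, 1, 1, 1)), size=(64, 3, 3, 16))
bits_per_filter = choose_bits_per_filter(layer)
print({b: bits_per_filter.count(b) for b in sorted(set(bits_per_filter))})
```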

Furthermore, Toshiba developed a new hardware architecture, called the bit-parallel method, which is suited to MAC operations with different bit precisions. The method decomposes each operation, whatever its bit precision, into 1-bit operations that can be executed across numerous MAC units in parallel. This significantly improves the utilization of the MAC units in the processor compared with conventional MAC architectures, which process the bits serially.
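
The bit-parallel principle can also be modelled in software. The following Python snippet is an illustrative sketch, not Toshiba Memory's circuit: it decomposes a multi-bit integer dot product into per-bit-plane 1-bit operations that are recombined with shifts, which is the kind of decomposition that lets many 1-bit MAC units work in parallel regardless of each filter's precision.

```python
import numpy as np

def mac_bit_parallel(activations, weights, bits):
    """Dot product computed by decomposing each signed integer weight into bit planes.

    Every plane contributes a 1-bit "multiply" (a masked add) that a bit-parallel
    design could map onto its own 1-bit MAC unit; the shifted partial sums are
    then combined. Illustrative model only.
    """
    # Two's-complement representation of the weights in `bits` bits.
    w = np.asarray(weights, dtype=np.int64) & ((1 << bits) - 1)
    acc = 0
    for k in range(bits):
        plane = (w >> k) & 1                    # 1-bit weight plane
        partial = int(np.dot(activations, plane))
        if k == bits - 1:
            acc -= partial << k                 # sign bit carries negative weight
        else:
            acc += partial << k
    return acc

x = np.array([3, -1, 4, 2], dtype=np.int64)
w = np.array([2, -3, 1, 0], dtype=np.int64)
assert mac_bit_parallel(x, w, bits=4) == int(np.dot(x, w))  # both give 13
```

Because every bit plane reduces to a masked add, a filter quantized to fewer bits simply occupies fewer 1-bit operations, which is presumably what keeps the MAC array well utilized when filters with different precisions are mixed.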

Toshiba Memory implemented ResNet50, a deep neural network commonly used to benchmark deep-learning image recognition, on an FPGA using the per-filter bit precision and the bit-parallel MAC architecture. For image recognition on the ImageNet dataset (more than 14,000,000 images), both operation time and energy consumption were reduced to 25 percent of those of conventional methods, with less degradation of recognition accuracy.

Artificial intelligence (AI) is forecast to be implemented in a wide range of devices. The high-speed, low-energy-consumption techniques developed for deep-learning processors are expected to be used in edge devices such as smartphones and head-mounted displays, as well as in data centers. High-performance processors such as GPUs are important for running AI at high speed, and memory and storage are also among the most important components for AI, which inevitably relies on big data.

Tags: Artificial Intelligence, deep learning, Toshiba Memory Corp.