Google Makes Its Scalable Supercomputers for Machine Learning Publicly Available

Enterprise & IT | May 9, 2019

Google has made its Cloud TPU v2 Pods and Cloud TPU v3 Pods publicly available to help machine learning (ML) researchers, engineers, and data scientists iterate faster and train more capable models.

To accelerate the largest-scale machine learning applications, Google created custom silicon chips called Tensor Processing Units (TPUs). When assembled into multi-rack ML supercomputers called Cloud TPU Pods, these TPUs can complete ML workloads in minutes or hours that previously took days or weeks on other systems.

Today, for the first time, Google Cloud TPU v2 Pods and Cloud TPU v3 Pods are available in beta to ML researchers.

Google Cloud is providing a full spectrum of ML accelerators, including both Cloud GPUs and Cloud TPUs. Cloud TPUs offer competitive performance per dollar, often training cutting-edge deep learning models faster than alternatives while delivering significant cost savings.

While some custom silicon chips can only perform a single function, TPUs are fully programmable, which means that Cloud TPU Pods can accelerate a wide range of ML workloads, including many of the most popular deep learning models. For example, a Cloud TPU v3 Pod can train ResNet-50 (image classification) from scratch on the ImageNet dataset in just two minutes or BERT (NLP) in just 76 minutes.

A single Cloud TPU Pod can include more than 1,000 individual TPU chips, connected by an ultra-fast, two-dimensional toroidal mesh network. The TPU software stack uses this mesh network to enable many racks of machines to be programmed as a single, giant ML supercomputer via a variety of flexible, high-level APIs.
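To make that concrete, the kind of high-level API the article refers to looks roughly like TensorFlow's TPU distribution strategy. The sketch below is illustrative only: the TPU name "my-tpu-pod" is a placeholder for whatever name a Pod or Pod slice was provisioned under, and the tiny Keras model stands in for a real workload.

```python
import tensorflow as tf

# Placeholder name: a real job would pass the name (or gRPC address) of the
# Cloud TPU Pod or Pod slice that was provisioned in Google Cloud.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu-pod")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy presents every core behind the resolver as one logical device:
# variables are replicated and each training step is sharded across all cores.
strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
    # Toy model standing in for a real workload such as ResNet-50 or BERT.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
```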

The latest-generation Cloud TPU v3 Pods are liquid-cooled and each one delivers more than 100 petaFLOPs of computing power. In terms of raw mathematical operations per second, a Cloud TPU v3 Pod is comparable with a top 5 supercomputer worldwide (though it operates at lower numerical precision).

It’s also possible to use smaller sections of Cloud TPU Pods called “slices.” ML teams often develop their initial models on individual Cloud TPU devices (which are generally available) and then expand to progressively larger Cloud TPU Pod slices via both data parallelism and model parallelism to achieve greater training speed and model scale.
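As a hedged sketch of the data-parallel half of that workflow (not Google's published example): under a distribution strategy, the per-core batch size stays fixed while the global batch scales with the number of replicas, so moving from a single 8-core Cloud TPU device to a larger Pod slice mostly means reprovisioning and rescaling the batch. The slice name "my-v3-32-slice" is a placeholder, and the synthetic data stands in for a real input pipeline.

```python
import tensorflow as tf

# Placeholder: name of a provisioned Pod slice (e.g. a 32-core v3 slice).
# Only this string and the batch size change when the same training script
# is moved to a larger slice.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-v3-32-slice")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

# Data parallelism: keep the per-core batch fixed and let the global batch
# grow with the number of cores the strategy sees.
per_core_batch = 128
global_batch = per_core_batch * strategy.num_replicas_in_sync

# Synthetic stand-in data; a real job would stream ImageNet from Cloud Storage.
images = tf.random.uniform([1024, 224, 224, 3])
labels = tf.random.uniform([1024], maxval=1000, dtype=tf.int32)
dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .repeat()
           .batch(global_batch, drop_remainder=True))

with strategy.scope():
    model = tf.keras.applications.ResNet50(weights=None, classes=1000)
    model.compile(
        optimizer="sgd",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

model.fit(dataset, epochs=1, steps_per_epoch=10)
```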

Tags: Machine learning, Tensor processing unit