
Google Makes Its Scalable Supercomputers for Machine Learning Publicly Available

Enterprise & IT — May 9, 2019

Google has made its Cloud TPU v2 Pods and Cloud TPU v3 Pods publicly available to help machine learning (ML) researchers, engineers, and data scientists iterate faster and train more capable models.

To accelerate the largest-scale machine learning applications, Google created custom silicon chips called Tensor Processing Units (TPUs). When assembled into multi-rack ML supercomputers called Cloud TPU Pods, these TPUs can complete ML workloads in minutes or hours that previously took days or weeks on other systems.

Today, for the first time, Google Cloud TPU v2 Pods and Cloud TPU v3 Pods are available in beta to ML researchers.

Google Cloud is providing a full spectrum of ML accelerators, including both Cloud GPUs and Cloud TPUs. Cloud TPUs offer competitive performance and cost, often training cutting-edge deep learning models faster while delivering significant savings.

While some custom silicon chips can only perform a single function, TPUs are fully programmable, which means that Cloud TPU Pods can accelerate a wide range of ML workloads, including many of the most popular deep learning models. For example, a Cloud TPU v3 Pod can train ResNet-50 (image classification) from scratch on the ImageNet dataset in just two minutes, or BERT (natural language processing) in 76 minutes.

A single Cloud TPU Pod can include more than 1,000 individual TPU chips which are connected by an ultra-fast, two-dimensional toroidal mesh network. The TPU software stack uses this mesh network to enable many racks of machines to be programmed as a single, giant ML supercomputer via a variety of flexible, high-level APIs.
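The defining property of a two-dimensional toroidal mesh is that the edges wrap around, so every chip has the same number of direct neighbors regardless of its position, and the worst-case hop count between chips is roughly halved compared with a flat grid. A minimal sketch of that neighbor topology (illustrative only; it does not model the actual TPU interconnect):

```python
def torus_neighbors(row, col, rows, cols):
    """Return the four direct neighbors of chip (row, col) in a
    rows x cols 2D toroidal mesh. The modulo arithmetic makes the
    edges wrap around, so even a 'corner' chip has four neighbors."""
    return [
        ((row - 1) % rows, col),  # up (wraps from row 0 to the last row)
        ((row + 1) % rows, col),  # down
        (row, (col - 1) % cols),  # left (wraps from column 0 to the last column)
        (row, (col + 1) % cols),  # right
    ]

# The corner chip (0, 0) of a 4x4 torus still has four distinct neighbors:
print(torus_neighbors(0, 0, 4, 4))  # [(3, 0), (1, 0), (0, 3), (0, 1)]
```

The uniform topology is what lets the software stack treat many racks of chips as one homogeneous machine: no chip is a special case at a boundary.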

The latest-generation Cloud TPU v3 Pods are liquid-cooled, and each delivers more than 100 petaFLOPS of computing power. In terms of raw mathematical operations per second, a Cloud TPU v3 Pod is comparable to a top-five supercomputer worldwide (though it operates at lower numerical precision).

It’s also possible to use smaller sections of Cloud TPU Pods called “slices.” ML teams often develop their initial models on individual Cloud TPU devices (which are generally available) and then expand to progressively larger Cloud TPU Pod slices via both data parallelism and model parallelism to achieve greater training speed and model scale.
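The data-parallelism idea mentioned above is simple at its core: the global batch is split across replicas, each replica computes the gradient on its own shard, and the per-replica gradients are averaged, which (for equal-sized shards) is mathematically equivalent to one gradient step on the full batch. A toy sketch with a one-parameter squared-error model (an illustration of the concept, not Google's TPU software stack):

```python
# Data parallelism in miniature: shard the batch, compute per-shard
# gradients independently (this is the part replicas do concurrently),
# then average the results.

def grad_on_shard(w, shard):
    """Mean gradient of the loss 0.5*(w*x - y)**2 over one shard of (x, y) pairs."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_grad(w, batch, num_replicas):
    """Split the batch into equal-sized shards, one per replica,
    and average the per-replica gradients."""
    shards = [batch[i::num_replicas] for i in range(num_replicas)]
    return sum(grad_on_shard(w, s) for s in shards) / num_replicas

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
# Averaging two half-batch gradients matches the full-batch gradient:
print(data_parallel_grad(1.0, batch, 2))   # -7.5
print(grad_on_shard(1.0, batch))           # -7.5
```

Model parallelism, by contrast, splits the model itself (its layers or weight matrices) across chips; large-scale training on Pod slices typically combines both.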

Tags: Machine learning, Tensor processing unit