Microsoft Announces New AI Supercomputer
Microsoft says it has built one of the top five publicly disclosed supercomputers in the world, making new infrastructure available in Azure to train extremely large artificial intelligence models.
Built in collaboration with and exclusively for OpenAI, the supercomputer hosted in Azure was designed specifically to train that company’s AI models.
It’s a first step toward making the next generation of very large AI models and the infrastructure needed to train them available as a platform for other organizations and developers to build upon.
“The exciting thing about these models is the breadth of things they’re going to enable,” said Microsoft Chief Technical Officer Kevin Scott, who noted that the potential benefits extend far beyond narrow advances in one type of AI model.
Machine learning experts have historically built separate, smaller AI models that use many labeled examples to learn a single task such as translating between languages, recognizing objects, reading text to identify key points in an email or recognizing speech well enough to deliver today’s weather report when asked.
A new class of models developed by the AI research community has proven that some of those tasks can be performed better by a single massive model — one that learns from examining billions of pages of publicly available text, for example. This type of model can so deeply absorb the nuances of language, grammar, knowledge, concepts and context that it can excel at multiple tasks: summarizing a lengthy speech, moderating content in live gaming chats, finding relevant passages across thousands of legal files or even generating code by scouring GitHub.
As part of a companywide AI at Scale initiative, Microsoft has developed its own family of large AI models, the Microsoft Turing models, which it has used to improve many different language understanding tasks across Bing, Office, Dynamics and other productivity products. Earlier this year, it also released to researchers the largest publicly available AI language model in the world, the Microsoft Turing model for natural language generation.
The goal, Microsoft says, is to make its large AI models, training optimization tools and supercomputing resources available through Azure AI services and GitHub so developers, data scientists and business customers can easily leverage the power of AI at Scale.
Training massive AI models requires advanced supercomputing infrastructure, or clusters of state-of-the-art hardware connected by high-bandwidth networks. It also needs tools to train the models across these interconnected computers.
The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server. Compared with other machines listed on the TOP500 supercomputers in the world, it ranks in the top five, Microsoft says. Hosted in Azure, the supercomputer also benefits from all the capabilities of a robust modern cloud infrastructure, including rapid deployment, sustainable datacenters and access to Azure services.
“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now,” Scott said.
OpenAI’s goal is not just to pursue research breakthroughs but also to engineer and develop powerful AI technologies that other people can use, said OpenAI CEO Sam Altman. The supercomputer developed in partnership with Microsoft was designed to accelerate that cycle.
For customers who want to pursue their AI ambitions but don’t require a dedicated supercomputer, Azure AI provides access to powerful compute built on the same set of AI accelerators and networks that power the supercomputer. Microsoft is also making available the tools to train large AI models on these clusters in a distributed and optimized way.
At its Build conference, Microsoft announced that it would soon begin open sourcing its Microsoft Turing models, as well as recipes for training them in Azure Machine Learning. This will give developers access to the same family of powerful language models that the company has used to improve language understanding across its products.
It also unveiled a new version of DeepSpeed, an open source deep learning library for PyTorch that reduces the amount of computing power needed for large distributed model training. The update is significantly more efficient than the version released just three months ago and now allows people to train models more than 15 times larger and 10 times faster than they could without DeepSpeed on the same infrastructure.
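To make the workflow concrete, here is a minimal sketch of how an ordinary PyTorch training loop might be handed to DeepSpeed. The model, synthetic dataset and configuration values are illustrative assumptions, not the settings Microsoft uses for its Turing models.

```python
# Minimal sketch (illustrative, not Microsoft's actual setup): wrapping a
# plain PyTorch model and dataset with DeepSpeed.
import torch
import deepspeed

# Stand-in model and synthetic data; a real run would use a large transformer.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)
dataset = torch.utils.data.TensorDataset(
    torch.randn(256, 1024), torch.randn(256, 1024)
)

ds_config = {
    "train_batch_size": 32,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},          # mixed precision to cut memory use
    "zero_optimization": {"stage": 1},  # ZeRO partitions optimizer state
}

# deepspeed.initialize returns an engine that handles data parallelism,
# mixed precision and gradient accumulation behind a familiar API.
model_engine, optimizer, train_loader, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    training_data=dataset,
    config=ds_config,
)

for inputs, targets in train_loader:
    inputs = inputs.to(model_engine.device).half()    # match fp16 weights
    targets = targets.to(model_engine.device).half()
    loss = torch.nn.functional.mse_loss(model_engine(inputs), targets)
    model_engine.backward(loss)   # engine scales and reduces gradients
    model_engine.step()           # optimizer step plus any LR schedule
```

A script like this would normally be started with the `deepspeed` command-line launcher so the engine can discover the available GPUs and spread work across them.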
Along with the DeepSpeed news, Microsoft announced it has added support for distributed training to the ONNX Runtime, an open source library designed to make models portable across hardware and operating systems. To date, the ONNX Runtime has focused on high-performance inferencing; today’s update adds support for model training, along with the optimizations from the DeepSpeed library, enabling performance improvements of up to 17 times over the current ONNX Runtime.
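For context, the established inferencing path that the ONNX Runtime has focused on so far looks roughly like the sketch below. The model file name and input shape are placeholders for a real exported model; the newly announced training support is not shown here.

```python
# Sketch of ONNX Runtime's established inferencing path.
# "model.onnx" and the input shape are placeholders for a real exported model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")        # load an exported model
input_name = session.get_inputs()[0].name           # discover the input name
batch = np.random.randn(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: batch})    # run inference
print(outputs[0].shape)
```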
Designing AI models that might one day understand the world more like people do starts with language, a critical component to understanding human intent, making sense of the vast amount of written knowledge in the world and communicating more effortlessly.
Neural network models that can process language, which are roughly inspired by our understanding of the human brain, aren’t new. But these deep learning models are now far more sophisticated than earlier versions and are rapidly escalating in size.
A year ago, the largest models had 1 billion parameters, each loosely equivalent to a synaptic connection in the brain. The Microsoft Turing model for natural language generation now stands as the world’s largest publicly available language AI model with 17 billion parameters.
This new class of models learns differently than supervised learning models that rely on meticulously labeled human-generated data to teach an AI system to recognize a cat or determine whether the answer to a question makes sense.
In what’s known as “self-supervised” learning, these AI models can learn about language by examining billions of pages of publicly available documents on the internet — Wikipedia entries, self-published books, instruction manuals, history lessons, human resources guidelines. In something like a giant game of Mad Libs, words or sentences are removed, and the model has to predict the missing pieces based on the words around it.
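The sketch below is a deliberately tiny illustration of that masking-and-predicting loop. A toy vocabulary, a single sentence and a two-layer "model" stand in for the billions of documents and parameters involved in real self-supervised training.

```python
# Toy illustration of the "Mad Libs" idea behind self-supervised learning:
# hide a token and train a model to predict it from the surrounding words.
import torch

vocab = ["the", "cloud", "hosts", "large", "models", "[MASK]"]
word_to_id = {w: i for i, w in enumerate(vocab)}

sentence = ["the", "cloud", "hosts", "large", "models"]
token_ids = torch.tensor([word_to_id[w] for w in sentence])

# Hide one token; the training target is the word that was removed.
masked = token_ids.clone()
target_pos, target_id = 2, token_ids[2].item()
masked[target_pos] = word_to_id["[MASK]"]

# A tiny embedding + linear head that scores every vocabulary word for the
# masked position, using a crude average of the visible context.
embed = torch.nn.Embedding(len(vocab), 16)
head = torch.nn.Linear(16, len(vocab))
optimizer = torch.optim.Adam(
    list(embed.parameters()) + list(head.parameters()), lr=0.1
)

for _ in range(100):
    context = embed(masked).mean(dim=0)          # summarize the context
    logits = head(context)
    loss = torch.nn.functional.cross_entropy(
        logits.unsqueeze(0), torch.tensor([target_id])
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the model should recover the hidden word ("hosts").
print(vocab[head(embed(masked).mean(dim=0)).argmax().item()])
```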
As the model does this billions of times, it gets very good at perceiving how words relate to each other. This results in a rich understanding of grammar, concepts, contextual relationships and other building blocks of language. It also allows the same model to transfer lessons learned across many different language tasks, from document understanding to answering questions to creating conversational bots.
One advantage of the next generation of large AI models is that they only need to be trained once with massive amounts of data and supercomputing resources. A company can take a “pre-trained” model and simply fine-tune it for different tasks with much smaller datasets and resources.
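As a rough sketch of that pre-train-once, fine-tune-cheaply pattern, the example below adapts a publicly available pre-trained language model to a small labeled task. The Hugging Face transformers library and the tiny sentiment dataset are illustrative assumptions, not how the Microsoft Turing models are consumed.

```python
# Sketch of fine-tuning a pre-trained language model on a small labeled task.
# The library choice, model name and two-example dataset are illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Start from a model that has already done its expensive self-supervised
# pre-training; only the small classification head is new.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

texts = ["great product, would buy again", "arrived broken and late"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                    # a few passes over a tiny dataset
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()           # fine-tuning adjusts existing weights,
    optimizer.step()                  # needing far less data and compute
    optimizer.zero_grad()             # than pre-training from scratch
```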
The Microsoft Turing model for natural language understanding, for instance, has been used across the company to improve a wide range of productivity offerings over the last year. It has advanced caption generation and question answering in Bing, improving answers to search questions in some markets by up to 125 percent.
In Office, the same model has fueled advances in the smart find feature that enables easier searches in Word, the Key Insights feature that extracts important sentences to quickly locate key points in Word, and Outlook’s Suggested replies feature that automatically generates possible responses to an email. Dynamics 365 Sales Insights also uses it to suggest actions to a seller based on interactions with customers.
Microsoft is also exploring large-scale AI models that can learn in a generalized way across text, images and video. That could help with automatic captioning of images for accessibility in Office, for instance, or improve the ways people search Bing by understanding what’s inside images and videos.
To train its own models, Microsoft had to develop its own suite of techniques and optimization tools, many of which are now available in the DeepSpeed PyTorch library and ONNX Runtime. These allow people to train very large AI models across many computing clusters and also to squeeze more computing power from the hardware.
That requires partitioning a large AI model into its many layers and distributing those layers across different machines, a process called model parallelism. In a process called data parallelism, Microsoft’s optimization tools also split the huge amount of training data into batches that are used to train multiple instances of the model across the cluster, which are then periodically averaged to produce a single model.
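A minimal sketch of those two ideas in plain PyTorch follows. The two-GPU split, layer sizes and device names are illustrative assumptions rather than how Microsoft’s tools actually partition its models.

```python
# Sketch of model parallelism: a network split into two stages that live on
# different GPUs, with activations handed from one device to the next.
# Assumes a machine with at least two CUDA devices.
import torch

class TwoStageModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # First part of the layers on one GPU, the rest on another.
        self.stage1 = torch.nn.Linear(1024, 4096).to("cuda:0")
        self.stage2 = torch.nn.Linear(4096, 1024).to("cuda:1")

    def forward(self, x):
        x = torch.relu(self.stage1(x.to("cuda:0")))
        return self.stage2(x.to("cuda:1"))   # move activations between stages

model = TwoStageModel()
out = model(torch.randn(8, 1024))

# Data parallelism is the complementary idea: replicate the (possibly
# model-parallel) network, feed each replica a different slice of the batch,
# and average gradients after each step, for example with
# torch.nn.parallel.DistributedDataParallel.
```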
The efficiencies that Microsoft researchers and engineers have achieved in this kind of distributed training will make using large-scale AI models much more resource efficient and cost-effective for everyone, Microsoft says.