Intel Invests In Exa-scale Supercomputers
Intel has invested in collaborations with institutions that specialize in high performance computing with Exa-scale performance levels. Three Intel labs, all members of the Intel Labs Europe network, now exclusively focus on Exa-scale computing research, Intel says.
In the past year, Intel has launched three new research centers focused on different aspects of the same challenge: developing supercomputers with Exa-scale performance levels. That means a billion billion computations per second. To put that in context, if you had all ~6.9 billion people on earth scribbling out math problems at a rate of one per second, it would still take over four and a half years to calculate what an Exa-scale supercomputer could do in a single second. Exa-scale was the hot topic this week at the Intel European Research and Innovation Conference (EPIC), which was held in Braunschweig, Germany, September 21 & 22.
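For readers who want to check that comparison, here is a minimal back-of-envelope sketch (assuming "billion billion" means 10**18 operations per second, a world population of roughly 6.9 billion, and one hand-worked problem per person per second):

```python
# Back-of-envelope check of the people-vs-Exa-scale comparison above.
# Assumptions: an Exa-scale machine does ~10**18 operations per second,
# and ~6.9 billion people each work one problem per second.
exa_ops = 10**18                   # operations an Exa-scale system does in one second
people_rate = 6.9e9                # combined human rate, problems per second
seconds = exa_ops / people_rate    # time for humanity to match one machine-second
years = seconds / (365 * 24 * 3600)
print(f"{years:.1f} years")        # prints roughly 4.6 years
```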
According to Prof. Thomas Lippert, director of the Jülich Supercomputing Center in Germany, these massive systems could arrive by the end of this decade.
Intel Sr. Fellow Steve Pawlowski, head of Central Architecture and Planning, predicted that demand for high performance computing will continue to rise, driven by computationally intensive tasks such as analyzing the human genome and creating climate models that can accurately predict weather patterns. But he emphasized that Exa-scale levels of performance can't be achieved with today's techniques, so new technologies must be developed. Pawlowski identified several major challenges facing Exa-scale researchers: energy efficiency, parallelization, reliability, memory, storage capacity, and bandwidth. Moreover, he said it is important that hardware and software be woven together with a unified programming model.
Meeting these challenges will require a modular, cluster-based design that is both scalable and resilient, according to Prof. Lippert. He noted that the JUROPA supercomputer at his center in Jülich, currently the 14th fastest computer in the world, is a cluster of about 15,000 processor cores. He predicted that a future Exa-scale system could comprise as many as 10 million cores, a major challenge in terms of power consumption and data communication among all of those cores.
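To get a feel for what those numbers imply, here is a rough sketch (assuming the "billion billion operations per second" target above is spread perfectly evenly across 10 million cores, which real workloads never achieve):

```python
# Rough per-core throughput implied by Prof. Lippert's estimate.
# Assumption: 10**18 operations/second divided evenly over 10 million cores.
target_ops_per_second = 10**18
core_count = 10_000_000
per_core = target_ops_per_second / core_count
print(f"{per_core / 1e9:.0f} billion operations per second per core")  # ~100
```

Sustaining on the order of 100 billion operations per second on each of 10 million cores, while also feeding them data and keeping the overall power budget in check, is exactly the combination of problems Pawlowski and Lippert describe.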
To achieve all of this, Intel has invested in collaborations with institutions that specialize in high performance computing. Three Intel labs, all members of the Intel Labs Europe network, now exclusively focus on Exa-scale computing research. These include the EXACluster Laboratory in Jülich, Germany (which collaborates closely with Prof. Lippert's center), the Exascale Computing Research Center in Paris, France, and the ExaScience Lab in Leuven, Belgium.
At the same time, researchers are developing technologies for the future many-core microprocessors that will one day be at the heart of these clusters.
Last December, Intel Labs demonstrated the latest concept vehicle to emerge from this program, the 48-core Single-chip Cloud Computer. At the time, our CTO Justin Rattner also announced that we would make this experimental chip available to dozens of researchers worldwide, and he highlighted an early example of such collaboration with a demo presented at Microsoft Research.
Since then Intel has been working to make good on this commitment, soliciting and reviewing over 200 research proposals from academic and industry researchers around the globe, engineering a development platform suitable for external distribution, and even building a small 'datacenter' of a few dozen systems that can be accessed remotely - a cloud-based option for research on an architecture that itself was designed as a microcosm of a cloud datacenter.
To this end, at the celebration of the 10th anniversary of the Intel R&D site in Braunschweig, Germany (whose researchers co-developed the SCC), Intel officially unveiled the Many-core Applications Research Community, or MARC for short. Under the new MARC program, the academic and industry researchers whose proposals were accepted will be able to use the SCC as a platform for next-generation software research. MARC will provide them with a new tool to solve challenges in parallel programming and application development that, hopefully, will in turn lead to dramatic new computing experiences for people and businesses in the future.
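As a flavor of the kind of parallel-programming work MARC projects take on, here is a minimal message-passing sketch. It uses standard MPI via the mpi4py package purely for illustration; it is not the SCC's own programming interface, just an example of splitting one computation across many cooperating cores and combining the results:

```python
# Illustrative message-passing example (mpi4py), not SCC-specific code.
# Each rank (core) computes a slice of a large sum; rank 0 combines them.
# Run with, e.g.: mpiexec -n 4 python partial_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()            # this process's id
size = comm.Get_size()            # total number of cooperating processes

local = sum(range(rank, 10_000_000, size))        # this rank's share of the work
total = comm.reduce(local, op=MPI.SUM, root=0)    # combine partial sums on rank 0

if rank == 0:
    print("sum of 0..9,999,999 =", total)
```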
As of today, MARC consists of 51 research projects from 38 institutions worldwide. Aside from Microsoft Research, a few examples are the Karlsruhe Institute of Technology (KIT), the Technical University of Braunschweig, the University of Oxford, ETH Zurich, the Barcelona Supercomputing Center, the University of Edinburgh, the University of Texas, Purdue University, and the University of California San Diego.
Although MARC has been launched with an initial focus on the SCC (Single-chip Cloud Computer) concept vehicle, Intel hopes that the community itself proves to be as valuable as the chip. As such, Intel will explore sharing other hardware and software research platforms over time.
This research is part of an overarching effort to continue scaling processor capabilities while keeping power consumption low. With a wealth of data quickly accumulating across the internet, from tiny tweets to high-res video feeds and from customer data warehouses to medical imaging repositories, Intel will need these powerful parallel processors to sort and analyze this data flood in real time.