Friday, June 21, 2013
 Improving Power and Programming Are Key Factors For Advancing To Exascale Computing
Power and programming are the key challenges the industry must overcome to reach exascale computing by the end of this decade, according to NVIDIA.

Theoretically, an exascale system - with 100 times the computing capability of today's fastest systems - could be built from x86 processors alone, but it would require as much as 2 gigawatts of power - the entire output of the Hoover Dam.

By contrast, the GPUs in an exascale system built with NVIDIA Kepler K20 processors would consume about 150 megawatts. So a hybrid system that pairs CPUs with higher-performance GPU accelerators is the best bet for tackling the power problem.
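As a rough sanity check, the power figures above imply very different energy efficiencies for the two approaches. The sketch below simply derives those efficiencies from the numbers quoted in the article (1 exaflop, roughly 2 gigawatts for an x86-only machine, roughly 150 megawatts for the GPUs); the derived figures are back-of-envelope arithmetic, not additional data from NVIDIA.

# Back-of-envelope check of the power figures quoted above.
# The exaflop target and the 2 GW / 150 MW numbers come from the article;
# the implied efficiencies are simply derived from them.

EXAFLOP = 1e18            # floating-point operations per second at exascale

x86_only_power_w = 2e9    # ~2 gigawatts quoted for an x86-only system
gpu_power_w      = 150e6  # ~150 megawatts quoted for the Kepler K20 GPUs

print(f"x86-only efficiency: {EXAFLOP / x86_only_power_w / 1e9:.1f} gigaflops/watt")
print(f"GPU efficiency:      {EXAFLOP / gpu_power_w / 1e9:.1f} gigaflops/watt")
print(f"Power ratio:         {x86_only_power_w / gpu_power_w:.0f}x")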

However, the industry also needs to look for power efficiencies in other areas.

Speaking at this week's 2013 International Supercomputing Conference (ISC) in Leipzig, Germany, NVIDIA's Chief Scientist, Bill Dally, said that reaching exascale would require a 25x improvement in energy efficiency: 50 gigaflops per watt, versus the roughly 2 gigaflops per watt of today's most efficient systems.

And manufacturing process advances alone will not achieve this goal. At best, they will deliver only about a 2.2x improvement in performance per watt, leaving an energy efficiency gap of roughly 12x that must be closed by other means.

Dally believes that a combination of more efficient circuit design and better processor architectures can help close that gap, delivering 3x and 4x improvements in performance per watt, respectively.
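The arithmetic behind that claim is straightforward and worth making explicit. The short calculation below uses only the factors quoted in the article (25x needed, 2.2x from process scaling, 3x from circuits, 4x from architecture) to show how the pieces multiply up; the "remaining gap" value is derived, and the article rounds it to 12x.

# Rough arithmetic behind the efficiency gap described above.
# All the individual factors (25x, 2.2x, 3x, 4x) are the ones quoted in the article.

target_gain   = 50 / 2      # 50 GF/W needed vs ~2 GF/W today -> 25x
process_gain  = 2.2         # expected from manufacturing process advances alone
remaining_gap = target_gain / process_gain

circuit_gain      = 3       # from more efficient circuit design
architecture_gain = 4       # from better processor architectures

print(f"Remaining gap after process scaling: ~{remaining_gap:.1f}x (article rounds to 12x)")
print(f"Circuits x architecture: {circuit_gain * architecture_gain}x")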

Dally's engineering team at NVIDIA is exploring a number of new approaches, including hierarchical register files, two-level scheduling, and temporal-SIMT optimizations, among other advanced techniques - all designed to maximize energy efficiency in every way possible.

Dally says the second big challenge is making it easier for developers to program these large-scale systems.

Although parallel computing is not hard - at least in Dally's view - programmers, programming tools and the architecture each need to 'play their positions.' For example, programmers should focus on designing better algorithms - and not worry about optimization or mapping. Leave that to the programming tools, which are much more effective at these types of tasks than humans.

And the architecture - well, it just needs to provide the underlying compute power, and otherwise "stay out of the way."

On top of this, Dally notes that tools and programming models need to continue improving, making it even easier for programmers to maximize both performance and energy efficiency.

Potential improvements Dally is investigating in this area include collection-oriented programming methods, which would make programming large-scale machines quicker and easier.
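To make the idea concrete, the tiny sketch below illustrates the collection-oriented style in general terms: the programmer describes the computation as whole-collection operations (map, reduce) and leaves the decision of how to spread the work across cores or GPU threads to the tools and runtime. This is a generic illustration of the style, not NVIDIA's specific proposal or toolchain.

# A minimal illustration of the collection-oriented style: express *what* to
# compute as operations over whole collections, and let a parallelizing
# runtime decide *how* and *where* to run it.
# Generic sketch only - not NVIDIA's specific approach.

from functools import reduce

values = range(1_000_000)

# "Square everything, then sum" - no explicit loop, no explicit mapping of
# work to processors; a suitable runtime could execute this in parallel.
total = reduce(lambda a, b: a + b, map(lambda x: x * x, values))

print(total)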

By focusing on these areas, Dally believes that exascale computing is within our reach by the end of the decade.