Nvidia Unveils CUDA
NVIDIA Computing Architecture Enables Data Processing on the GPU for Next-Generation Commercial Applications, Technical Computing, and Advanced Gaming
NVIDIA today unveiled NVIDIA CUDA technology, a fundamentally new architecture for computing on NVIDIA graphics processing units (GPUs), and the industry's first C-compiler development environment for the GPU.
GPU computing with CUDA is a new approach to computing where hundreds of on-chip processor cores simultaneously communicate and cooperate to solve complex computing problems up to 100 times faster than traditional approaches. This architecture is complemented by another first - the NVIDIA C-compiler for the GPU. This complete development environment gives developers the tools they need to solve new problems in computation-intensive applications such as product design, data analysis, technical computing, and game physics.
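As a rough illustration of what this C-based environment looks like (a sketch for this article, not code from the release; the kernel and variable names are illustrative), a GPU function in CUDA C is written as ordinary C marked for parallel execution and then launched across many lightweight threads:

    // Minimal sketch of a CUDA C kernel: each GPU thread computes one
    // element of the result. Names (vec_add, n, dev_a/b/c) are illustrative.
    __global__ void vec_add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique index per thread
        if (i < n)
            c[i] = a[i] + b[i];
    }

    // Launched from ordinary C code running on the CPU:
    // 256 threads per block, enough blocks to cover n elements.
    // vec_add<<<(n + 255) / 256, 256>>>(dev_a, dev_b, dev_c, n);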
Available today on the new GeForce 8800 graphics card and future NVIDIA Quadro Professional Graphics solutions, computing with CUDA transcends the limitations of traditional GPU stream computing by enabling GPU processor cores to communicate, synchronize, and share data.
CUDA-enabled GPUs offer dedicated features for computing, including the Parallel Data Cache, which allows the 128 processor cores in the newest-generation NVIDIA GPUs, running at 1.35GHz, to cooperate with each other while performing intricate computations. Developers access these new features through a separate computing driver that communicates with DirectX and OpenGL, and through the new NVIDIA C compiler for the GPU, which makes streaming languages obsolete for GPU computing.
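In CUDA C this on-chip Parallel Data Cache is exposed to the programmer as shared memory. A minimal sketch (illustrative names, assuming 256 threads per block) of threads in one block cooperating through it to sum a stretch of input values:

    // Sketch of cooperation through the Parallel Data Cache (shared memory):
    // each block of 256 threads produces one partial sum. __syncthreads()
    // makes the threads of a block wait for one another between steps.
    __global__ void block_sum(const float *in, float *out)
    {
        __shared__ float cache[256];      // held in the on-chip Parallel Data Cache
        int tid = threadIdx.x;

        cache[tid] = in[blockIdx.x * blockDim.x + tid];
        __syncthreads();                  // all loads visible to the whole block

        // Tree reduction: half the threads add, then half of those, and so on.
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (tid < stride)
                cache[tid] += cache[tid + stride];
            __syncthreads();
        }

        if (tid == 0)
            out[blockIdx.x] = cache[0];   // one partial sum per block
    }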
A CUDA-enabled GPU operates as either a flexible thread processor, where thousands of computing programs called threads work together to solve complex problems, or as a streaming processor in specific applications such as imaging where threads do not communicate. CUDA-enabled applications use the GPU for fine-grained, data-intensive processing and multi-core CPUs for complicated coarse-grained tasks such as control and data management.
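A hedged sketch of that division of labor, reusing the vec_add kernel sketched earlier: the CPU side handles allocation, data staging, and control, while the GPU kernel does the fine-grained data-parallel work. The function and variable names here are illustrative; only the cuda* calls are CUDA runtime routines.

    #include <cuda_runtime.h>

    // Coarse-grained control on the CPU: allocate, stage data, launch, collect.
    void run_vec_add(const float *host_a, const float *host_b, float *host_c, int n)
    {
        size_t bytes = n * sizeof(float);
        float *dev_a, *dev_b, *dev_c;

        cudaMalloc((void **)&dev_a, bytes);            // GPU memory management
        cudaMalloc((void **)&dev_b, bytes);
        cudaMalloc((void **)&dev_c, bytes);

        cudaMemcpy(dev_a, host_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dev_b, host_b, bytes, cudaMemcpyHostToDevice);

        vec_add<<<(n + 255) / 256, 256>>>(dev_a, dev_b, dev_c, n);  // fine-grained work on the GPU

        cudaMemcpy(host_c, dev_c, bytes, cudaMemcpyDeviceToHost);   // results back to the CPU

        cudaFree(dev_a);
        cudaFree(dev_b);
        cudaFree(dev_c);
    }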