AMD made its debut in the virtual reality market with the announcement of LiquidVR, a software development kit aimed at making VR easier for users and developers on AMD-powered PCs. During a presentation at GDC 2015 in San Francisco, AMD described the LiquidVR initiative as a set of technologies focused on enabling VR content development for AMD hardware, improving comfort in VR applications through better performance, and delivering plug-and-play compatibility with VR headsets.
Essentially, LiquidVR allows developers and users to plug an Oculus Rift headset into a computer and begin rendering 3D content directly to the headset, even without Oculus' SDK.
In virtual reality, the concept of 'presence' is described as the perception of being physically present in a simulated, nonphysical world in a way that immerses the user. A key obstacle to achieving presence is motion-to-photon latency, the time between when a user moves their head and when their eyes see an updated image reflecting the new position. Minimizing motion-to-photon latency is critical to achieving both presence and comfort, two key elements of great VR.
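To make the latency problem concrete, here is a rough back-of-the-envelope budget for a 90 Hz headset. All stage names and timings are illustrative assumptions, not figures from AMD; the commonly cited comfort target of roughly 20 ms is the benchmark being checked.

```python
# Illustrative motion-to-photon latency budget for a 90 Hz headset.
# Stage timings below are hypothetical, for illustration only.
REFRESH_HZ = 90
frame_time_ms = 1000 / REFRESH_HZ  # ~11.1 ms per refresh

stages_ms = {
    "sensor sampling + fusion": 2.0,
    "application + render submission": 5.0,
    "GPU rendering": 8.0,
    "scanout + display response": 6.0,
}
total_ms = sum(stages_ms.values())

print(f"frame time: {frame_time_ms:.1f} ms")
print(f"motion-to-photon: {total_ms:.1f} ms "
      f"({'within' if total_ms <= 20 else 'over'} the ~20 ms comfort target)")
```

Even with each stage individually fast, the stages add up past the comfort target, which is why techniques that shortcut the pipeline (time warp, late latching) matter.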
Reducing latency involves the entire processing pipeline, from the GPU to the application to the display technology in the headset. AMD's GPU hardware and software subsystems are a major part of that latency equation, and with LiquidVR, AMD says it is helping to solve the challenge by bringing smooth, liquid-like motion and responsiveness to developers and content creators for life-like presence in VR environments powered by AMD hardware.
The first version of the SDK is available to "select" developers starting today.
Features of version 1.0 of the LiquidVR SDK include:
- Async Shaders for smooth head-tracking, enabling hardware-accelerated Time Warp, a technology that uses updated information on a user's head position after a frame has been rendered and then warps the image to reflect the new viewpoint just before sending it to a VR headset, effectively minimizing latency between when a user turns their head and what appears on screen.
- Affinity Multi-GPU for scalable rendering, a technology that lets multiple GPUs work together to improve frame rates in VR applications by allowing developers to assign work to specific GPUs. Each GPU renders the viewpoint from one eye, and the outputs are then composited into a single stereo 3D image. With this technology, multi-GPU configurations become ideal for high-performance VR rendering, delivering the high frame rates needed for a smoother experience.
- Latest Data Latch for smooth head-tracking, a programming mechanism that gets head-tracking data from the head-mounted display to the GPU as quickly as possible by binding the data as close to real time as possible, practically eliminating API overhead and reducing latency.
- Direct-to-Display for attaching VR headsets, delivering a plug-and-play virtual reality experience from an AMD Radeon graphics card to a connected VR headset, while enabling features such as booting directly to the headset display or using extended display features within Windows.
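The time-warp idea above can be sketched in miniature. This is a hypothetical illustration, not the LiquidVR API: the "image" is a one-dimensional row of pixels, and warping is reduced to a horizontal shift proportional to how much the head's yaw changed between render time and scanout. Real time warp re-projects the full frame in 3D on the GPU.

```python
# Hypothetical sketch of time warp: after a frame is rendered, re-project
# it using the newest head pose just before scanout. Names and the 1-D
# "image" model are illustrative assumptions.

def render(yaw_at_render):
    # Stand-in renderer: pixel i is labeled with the yaw it was drawn at.
    return [yaw_at_render + i for i in range(8)]

def time_warp(image, yaw_at_render, yaw_at_scanout, degrees_per_pixel=1.0):
    # Shift the image to compensate for head rotation since render time.
    shift = round((yaw_at_scanout - yaw_at_render) / degrees_per_pixel)
    if shift >= 0:
        return image[shift:] + [image[-1]] * shift   # pad trailing edge
    return [image[0]] * (-shift) + image[:shift]     # pad leading edge

frame = render(yaw_at_render=10.0)
# Head turned 2 degrees while the frame was being rendered:
warped = time_warp(frame, yaw_at_render=10.0, yaw_at_scanout=12.0)
print(warped)
```

The warped frame leads with the content that matches the head's new orientation, even though the expensive rendering work used the older pose.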
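The latest-data-latch concept can likewise be sketched as a small concurrency pattern, assuming nothing about AMD's actual implementation: a tracking thread continuously overwrites a shared pose, and the render loop reads ("latches") the freshest value at the last possible moment before submitting a frame. All names here are hypothetical.

```python
import threading
import time

# Hypothetical sketch of a "latest data latch": the renderer always
# latches the most recent head pose, rather than the pose that was
# current when the frame started.

class PoseLatch:
    def __init__(self):
        self._lock = threading.Lock()
        self._pose = (0.0, 0.0, 0.0)  # yaw, pitch, roll in degrees

    def publish(self, pose):
        with self._lock:
            self._pose = pose

    def latch(self):
        with self._lock:
            return self._pose

latch = PoseLatch()
stop = threading.Event()

def tracking_thread():
    t = 0
    while not stop.is_set():
        latch.publish((t * 0.1, 0.0, 0.0))  # simulated head motion
        t += 1
        time.sleep(0.001)                   # ~1 kHz sensor updates

threading.Thread(target=tracking_thread, daemon=True).start()

# Render loop: do the expensive work first, then latch the pose last,
# so the submitted frame reflects the most recent head position.
for frame in range(3):
    time.sleep(0.011)      # stand-in for ~11 ms of rendering work
    pose = latch.latch()   # latched as late as possible
    print(f"frame {frame}: submitted with yaw={pose[0]:.1f}")
stop.set()
```

The point of the pattern is ordering: because the pose read happens after the rendering work rather than before it, roughly a frame's worth of tracking staleness is removed from the pipeline.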