Let's Talk About NVMe
We take a look at what Non-Volatile Memory Express (NVMe) is, what drove the need for it, and how it will eventually find its way into the data center. Today's applications require faster server processors, more compute cores, more memory and even more storage resources.
Fast applications rely on having frequently accessed data as close as possible to the processor itself. Servers are leveraging larger amounts of faster local storage to augment more traditional external shared storage, thereby enabling fast server storage hardware and software.
PCIe, the interconnect bus closest to the server CPU, has evolved to provide more bandwidth and lower latency. Improvements to PCIe Generation 3 (Gen3) have made it capable of supporting faster processors with more cores and more traffic, satisfying the needs of faster applications.
Enterprise applications rely on high-performance servers that access NVM flash Solid State Drive (SSD) storage via fast, low-latency I/O roadways (e.g. PCIe) as efficiently as possible. But legacy server storage I/O software protocols and interfaces such as AHCI (SATA) and Serial Attached SCSI (SAS) are not capable of unlocking that full potential.
Fast hardware requires fast software and vice versa. PCIe is currently the fastest I/O data highway available. However, the software protocols defining the traffic flow (the rules of the road) need improvement. Due to these historical protocol limitations, applications are not able to fully utilize available hardware resources, which leads us to NVMe.
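To put those protocol limitations in perspective, consider the command queue models each protocol defines: the AHCI specification allows a single command queue 32 commands deep, while the NVMe specification allows up to roughly 64K I/O queues with up to 64K commands each (real devices implement far fewer queues; the SAS figure below is a commonly cited per-port depth, not a spec maximum). A minimal Python sketch of the theoretical outstanding-command capacity:

```python
# Rough comparison of how many commands each protocol can keep in flight.
# Queue counts/depths for AHCI and NVMe come from their specifications;
# actual devices typically implement far fewer NVMe queues than the maximum.
protocols = {
    "AHCI (SATA)": {"queues": 1, "depth": 32},
    "SAS":         {"queues": 1, "depth": 256},
    "NVMe":        {"queues": 65_535, "depth": 65_536},
}

for name, p in protocols.items():
    outstanding = p["queues"] * p["depth"]
    print(f"{name:12} -> up to {outstanding:,} commands in flight")
```

Deep, parallel queues are what let NVMe keep every core of a modern multi-core processor issuing I/O concurrently instead of serializing on one shallow queue.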
Leveraging PCIe, NVMe enables modern applications to reach their potential using high-performance servers with local flash storage via fast I/O data highways. While modern I/O highways (PCIe) and devices (flash SSDs) have improved, a new, optimized and efficient protocol (NVMe) is needed to control I/O data traffic flow at breakneck speed.
Note that NVMe does not replace SAS or SATA; they all can and will co-exist for years to come. Inside the same platform, they enable different tiers of server storage I/O performance, aligning the applicable technology (SATA, SAS or NVMe) to different performance and cost parameters.
The storage I/O capabilities of flash can now be fed across PCIe faster, enabling modern multi-core processors to complete more useful work in less time and resulting in greater application productivity. In addition to enabling more IOPS at lower latency, NVMe also unlocks the bandwidth of PCIe and the associated NVM flash SSD storage to move more data, more quickly.
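The bandwidth gap is easy to quantify from the link rates and encodings alone (a back-of-the-envelope ceiling that ignores protocol overhead): SATA III runs at 6 Gb/s with 8b/10b encoding, while each PCIe Gen3 lane runs at 8 GT/s with the more efficient 128b/130b encoding, so a typical x4 NVMe device has roughly six times the raw headroom of a SATA SSD.

```python
# Back-of-the-envelope throughput ceilings from line rate and encoding.
# Protocol and command overheads are ignored, so real devices land lower.
def pcie_gen3_gb_per_s(lanes):
    # 8 GT/s per lane, 128b/130b encoding, 8 bits per byte
    return 8.0 * 128 / 130 * lanes / 8

sata3 = 6.0 * 8 / 10 / 8  # 6 Gb/s line rate, 8b/10b encoding -> GB/s
x4 = pcie_gen3_gb_per_s(4)

print(f"SATA III:     ~{sata3:.2f} GB/s")   # ~0.60 GB/s
print(f"PCIe Gen3 x4: ~{x4:.2f} GB/s")      # ~3.94 GB/s
```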
Another benefit of NVMe is that all of these I/O improvements (more work done, more data moved, less wait time) are accomplished using less CPU time.
Similar to the way modernized vehicle traffic flow protocols on a highway reduce congestion (wait time and latency), NVMe unlocks the potential of NAND flash SSDs via the most effective use of the PCIe I/O data highway. Those on the cutting edge will embrace NVMe rapidly. Others may prefer a phased approach.
NVMe is available in various form factors, including PCIe Add-in-Cards (AiC); U.2 (2.5" drive form factor) PCIe x4 devices that use SFF-8639 connectors and co-exist with SAS and SATA devices; and M.2 mini-cards for inside servers, workstations or other devices.
For some, this means initial NVMe deployment will be M.2, U.2 or AiC devices installed inside servers, workstations or appliances. For others, it means those devices being deployed inside software-defined storage systems and appliances accessed via traditional SAS, SATA, iSCSI, Fibre Channel, NAS or object interfaces and protocols. Also, watch for additional storage systems and shared NVM, Storage Class Memory (SCM) and SSD-based direct-attached JBOSS (Just a Bunch of SSD Storage) offerings that also use NVMe as the server-to-storage interface.
NVMe provides both flexibility and compatibility, enabling it to sit at the 'top tier' of storage access and take full advantage of the inherent speed and low latency of flash and multi-core processor servers for fast applications. NVMe removes complexity, overhead and latency while allowing far more concurrent I/O work to be accomplished.
The NVMe benefit is that applications can process more in a given amount of time, whether measured in transactions per second (TPS), files, frames, videos, images, objects or other items, while spending less time waiting and using less CPU overhead.
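The relationship between concurrency, latency and throughput above follows Little's Law: achievable IOPS is roughly outstanding commands divided by per-command latency. A small sketch with illustrative (not measured) numbers shows why deeper queues and lower latency multiply together:

```python
# Little's Law: throughput = concurrency / latency.
# The latency and queue-depth figures below are illustrative only,
# chosen to show the shape of the relationship, not vendor data.
def iops(queue_depth, latency_seconds):
    return queue_depth / latency_seconds

# Same 100-microsecond device latency, deeper queue:
print(f"QD 32:  {iops(32, 100e-6):,.0f} IOPS")
print(f"QD 256: {iops(256, 100e-6):,.0f} IOPS")

# Same queue depth, a lower-latency path:
print(f"QD 32 at 50 us: {iops(32, 50e-6):,.0f} IOPS")
```

This is why a protocol that both deepens the queues and trims per-command overhead pays off twice.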