Wednesday, June 13, 2018
Researchers Speed Intel's 3D XPoint Computer Memory
Memory modules using Intel's 3D XPoint technology are on their way, and researchers in North Carolina have figured out how to make them even faster by limiting the amount of overhead needed to correct possible errors.

The new 3D XPoint memory technology is expected to eventually replace DRAM. It's non-volatile, like flash memory, so it should allow nearly instant recovery from power losses and software glitches. In addition, it is cheaper and denser than DRAM.

On the other hand, 3D XPoint memory is more expensive than DRAM in terms of both the energy and the time it takes to write data to it.

Last week at the 45th International Symposium on Computer Architecture, in Los Angeles, Yan Solihin, professor of electrical and computer engineering at North Carolina State University, presented a way to cut down on the amount of writing needed, thereby speeding up the memory.

Even when writing to nonvolatile memory, a process is needed to keep records from being corrupted if a crash occurs. Such a crash delays a transaction, since a record containing bad data would force the program to start over.

The method of preventing such a situation is called eager persistency. However, this method requires a lot of overhead, adding about 9 percent to transaction times. That overhead includes a lot of extra writing to nonvolatile memory, about 21 percent more, even when things are going well and nothing has crashed. Extra writing is a problem for nonvolatile memories generally, because they have a finite lifetime that's measured in writes.

The NC State method, called lazy persistency, normally requires very little overhead, but when something does go wrong, it needs a bit more work to set things right. It leans on the processor's ordinary cache behavior: the cache keeps the most recently used data and moves, or evicts, the least recently used data to main memory.
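
To make that cache behavior concrete, here is a minimal sketch of least-recently-used eviction in Python (purely illustrative; the class and names below are not from the researchers' work): recently touched entries stay cached, and the oldest entry is written back to main memory when space runs out.

    from collections import OrderedDict

    class LRUCache:
        # Tiny illustrative cache: recently used entries stay cached, and
        # the least recently used entry is evicted to "main memory".
        def __init__(self, capacity, main_memory):
            self.capacity = capacity
            self.entries = OrderedDict()    # order tracks recency of use
            self.main_memory = main_memory  # dict standing in for DRAM or 3D XPoint

        def write(self, address, value):
            if address in self.entries:
                self.entries.move_to_end(address)  # mark as most recently used
            self.entries[address] = value
            if len(self.entries) > self.capacity:
                # Evict the least recently used entry to main memory.
                old_address, old_value = self.entries.popitem(last=False)
                self.main_memory[old_address] = old_value

Writing more addresses than the cache can hold pushes the oldest entries into main memory; that natural eviction is what lazy persistency relies on.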

Eager persistency also adds artificial eviction at a high rate to guard against losing data during a crash. Rather than use eager persistency's high-overhead scheme, lazy persistency just lets this existing cache system work, counting on the fact that the data in the cache will eventually be evicted to the nonvolatile XPoint memory. The difference is that lazy persistency also stores a number called a checksum, a small bit of data that can be used to determine whether a larger portion of data has changed. When things do go wrong, the processor calculates checksums for the data it still has and compares them to the checksums stored for the same data in the nonvolatile memory. If they don't match up, the processor knows it has to go back and redo its work.
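
As a rough sketch of that checksum check (assuming Python, CRC32 as the checksum, and hypothetical names such as nvm_data, nvm_checksums, and redo_region; the researchers' actual scheme may differ), recovery walks the checksums recorded in nonvolatile memory, recomputes a checksum over the data that actually made it there, and redoes any region where the two disagree.

    import zlib

    def crc(data: bytes) -> int:
        # CRC32 standing in for whatever checksum the real scheme uses.
        return zlib.crc32(data)

    def recover(nvm_data, nvm_checksums, redo_region):
        # nvm_data[region]: whatever bytes were evicted to nonvolatile memory,
        # possibly stale if the newest values never left the cache.
        # nvm_checksums[region]: checksum recorded when the region's work finished.
        for region, expected in nvm_checksums.items():
            if crc(nvm_data.get(region, b"")) != expected:
                # Mismatch: the region wasn't fully persisted, so redo its work.
                nvm_data[region] = redo_region(region)
                nvm_checksums[region] = crc(nvm_data[region])

In the common case nothing mismatches, the loop does no extra work, and that is where lazy persistency saves its overhead.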

"The recovery process is more complex, but the common case execution becomes a lot faster," says Solihin. And since problems are uncommon, lazy persistency soundly beats eager persistency. It adds only 1 percent to execution time instead of eager persistency's 9 percent. And it requires only 3 percent more writes to memory compared to eager's 21 percent.

 