The series so far:
- Storage 101: Welcome to the Wonderful World of Storage
- Storage 101: The Language of Storage
- Storage 101: Understanding the Hard-Disk Drive
- Storage 101: Understanding the NAND Flash Solid State Drive
- Storage 101: Data Center Storage Configurations
- Storage 101: Modern Storage Technologies
- Storage 101: Convergence and Composability
- Storage 101: Cloud Storage
- Storage 101: Data Security and Privacy
- Storage 101: The Future of Storage
- Storage 101: Monitoring Storage Metrics
- Storage 101: RAID
An IDC report published in November 2018 predicted that the world’s data would grow to 175 zettabytes by the year 2025. For those unaccustomed to such amounts, a zettabyte is about 1,000 exabytes, which comes to one billion terabytes or one trillion gigabytes. Given our current trajectory, we’ll likely see those predictions come true. Even if we fall short, there will still be a heap load of data.
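To keep those units straight, here's a quick sketch in Python (assuming decimal SI prefixes, where each step up is a factor of 1,000):

```python
# Decimal (SI) storage units, each a factor of 1,000 larger than the last.
GB = 10**9    # gigabyte, in bytes
TB = 10**12   # terabyte
EB = 10**18   # exabyte
ZB = 10**21   # zettabyte

# One zettabyte expressed in smaller units.
print(ZB // EB)   # 1,000 exabytes
print(ZB // TB)   # one billion terabytes
print(ZB // GB)   # one trillion gigabytes

# IDC's 175 ZB forecast for 2025, in terabytes.
print(f"{175 * ZB // TB:,} TB")
```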
Current storage technologies will have a tough time keeping up; in truth, they already are. With the explosion of mobile devices, followed by the influx of the Internet of Things (IoT), more data than ever is being generated—by people, by applications, by machines. The only way to derive meaning from all that data is to develop innovative, high-performance, high-capacity storage solutions.
Scientists are pioneering storage solutions that can support our data loads into the future. To this end, they’re searching for ways to improve NAND flash and storage class memory, while experimenting with new storage technologies. In this article—the last in my series on storage—I provide an overview of many of these efforts to give you a sense of what we might expect in the near and, with any luck, not-too-distant future.
What’s up with NAND flash?
NAND flash has captured a significant share of the data center market, offering substantially better performance and durability than hard-disk drives (HDDs) can physically achieve. As NAND’s popularity has increased, along with its densities, prices have steadily dropped, making it a more viable storage option than ever.
Yet even these improvements can’t meet the demands of many of today’s data volumes and workloads, which is why vendors are working hard to build solid-state drives (SSDs) that deliver better performance and greater densities while minimizing the cost-per-GB.
The primary strategy for doing so is adding more bits per cell, more layers per chip, or a combination of both. Flash SSDs have gone from one bit per cell to two bits and then three. Now we have quad-level cell (QLC) SSDs, which squeeze four bits into each cell. Initially, QLC flash primarily targeted PCs, but that’s starting to change, with some vendors now offering QLC storage for the data center.
More bits per cell increases the need for error correction, slowing program/erase (P/E) cycles. The additional bits also reduce endurance because the cells become less stable. Until significant advances are made in P/E-related processes such as garbage collection, enterprise QLC flash will be limited to read-intensive workloads. In the meantime, vendors are pushing ahead with more bits per cell, even developing penta-level cell (PLC) SSDs that pack five bits into each cell.
At some point, adding more bits per cell will no longer be practical, which is why vendors are also adding more layers to their NAND chips, a technology referred to as 3D NAND. In this type of chip, memory cells are stacked into vertical layers to increase capacity. The first 3D NAND chips had 32 layers. Many vendors now offer SSDs with 96 layers.
In addition, several vendors are ramping up production on 128-layer SSDs, with 256-layer devices on the horizon. Devices featuring 500 or even 800 layers or more have been forecast. But additional layers mean thinner materials, amplifying manufacturing challenges and costs. Without novel technological advances, the cost-per-GB is unlikely to keep declining as quickly as it has.
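As a rough first-order sketch (my own simplification, ignoring die size, cell pitch, and error-correction overhead), a chip's relative density scales with bits per cell times layer count, while the number of voltage states each cell must distinguish doubles with every added bit:

```python
# Bits per cell for each NAND flash cell type.
cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}

def voltage_states(bits_per_cell):
    # n bits per cell require 2**n distinguishable charge levels,
    # which is why error correction gets harder as bits increase.
    return 2 ** bits_per_cell

def relative_density(bits_per_cell, layers):
    # Density relative to a single-layer, single-bit baseline;
    # a first-order estimate only, not a real capacity figure.
    return bits_per_cell * layers

for name, bits in cell_types.items():
    print(f"{name}: {voltage_states(bits)} voltage states per cell")

# A 96-layer QLC chip vs. a 32-layer TLC chip, all else being equal:
print(relative_density(4, 96) / relative_density(3, 32))  # 4.0
```

This also makes the endurance trade-off concrete: QLC's 16 states leave far narrower voltage margins between levels than SLC's 2, so the same amount of charge drift is far more likely to flip a bit.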
Who’s invading the flash space?
While vendors continue to enhance their NAND flash offerings, some are also investing in technologies that could eventually replace flash or be used in conjunction with flash to create a hybrid solution. One of these is Intel’s Optane DC SSD, which is based on the 3D XPoint architecture, a storage-class memory (SCM) technology developed by Intel in partnership with Micron.
The Optane DC SSD provides greater throughput and lower latency than a traditional flash SSD, including Intel’s own line of enterprise flash storage. Intel is now working on its second generation of the Optane DC SSD, offering hints that it might nearly double the speed of its first-gen implementation.
Not to be outdone, Samsung now offers its own alternative to traditional NAND flash—the Z-SSD drive (or Z-NAND). Although the Z-SSD is based on NAND technologies, it offers a unique circuit design and controller that delivers much better performance. In fact, the Z-SSD is often described as an SCM device and is considered Samsung’s answer to Intel’s Optane DC SSD.
Micron has also released an SSD built on the XPoint architecture—the X100 NVMe SSD. Both Micron and Samsung appear to be planning their next generation of flash alternatives. But they’ve released few details about the devices or how they’ll perform.
In the meantime, Kioxia (formerly Toshiba Memory) is working on its own NAND flash alternative, Twin BiCs FLASH, which the company describes as the “world’s first three-dimensional (3D) semicircular split-gate flash memory cell structure.” That’s quite the mouthful and certainly sounds intriguing. However, the project is still in research and development and will likely not see the light of day for some time to come.
It’s uncertain at this point what the future looks like for NAND flash alternatives such as those from Intel, Micron, Samsung, and Kioxia. Much will depend on how traditional NAND flash evolves and the affordability of these new devices over the long-term. With workload and data demands increasing, organizations will continue to look for whatever solutions can effectively balance performance and capacity against endurance and cost.
Where does storage class memory fit in?
In the last couple of years, storage class memory (SCM) has inspired many headlines, especially with Intel’s recent release of the first Optane DC persistent memory modules (PMMs). The modules plug into standard dual in-line memory module (DIMM) slots, allowing the PMMs to connect directly to the server’s memory space. The Optane DC modules represent a big step toward the vision of a new storage tier that sits between traditional dynamic RAM (DRAM) and NAND flash storage to support demanding enterprise workloads.
Intel’s Optane DC modules are typically referred to as a type of phase-change memory (PCM)—“typically” because the company’s messaging has been somewhat mixed on this point, and the modules are sometimes considered a type of resistive RAM. However, the consensus is that the Optane DC modules fit neatly into the PCM category.
Phase-change memory is a type of nonvolatile memory that stores data by rapidly changing a material between amorphous and crystalline states. Phase-change memory offers much faster performance and lower latency than NAND flash and has the potential to deliver greater endurance. On the other hand, PCM is also much more expensive.
But PCM is not the only SCM effort under development. Scientists are actively researching other technologies that they believe can also serve as a bridge between DRAM and flash storage. One of these is resistive RAM (RRAM or ReRAM), another type of nonvolatile memory that promises significantly greater performance than NAND flash, with speeds approaching those of DRAM.
Resistive RAM works by applying different voltage levels to a material in order to switch its resistance from one state to another. Compared to NAND flash, RRAM offers much better performance and higher endurance while consuming less power. In fact, the technology shows so much promise that it has been proposed as a possible replacement for both NAND flash and DRAM.
Another nonvolatile memory technology that shows promise is ferroelectric memory (FRAM or FeRAM), which is built on a ferroelectric capacitor architecture that incorporates a mechanism for controlling polarities. Ferroelectric memory offers high read and write speeds, low power consumption, and high endurance. But in its current form, it has a very low density and its processing costs are high.
Nanotube RAM (NRAM) is another nonvolatile memory technology that’s being actively researched for its DRAM-like performance, low power consumption, and ability to withstand extreme environmental conditions. Nanotube RAM can also retain data far beyond NAND flash capabilities. An NRAM device is made up of tiny carbon nanotubes that are extremely strong and have conductive properties. The nanotubes sit between two electrodes through which voltage is applied to change the resistance, providing the structure for data storage.
Researchers are also focusing on Magnetic RAM (MRAM), which could potentially deliver speeds on par with static RAM (SRAM). Magnetic RAM—also called magnetoresistive RAM—is a nonvolatile memory technology that uses magnetic states to store data bits, rather than using electrical charges like other memory technologies.
Vendors are pursuing different strategies for implementing MRAM. One of the most promising is spin-transfer torque MRAM (STT-MRAM), which leverages electron spin, a quantum-mechanical form of angular momentum, to store data. The biggest challenge with MRAM, however, is its extremely low density.
All of these memory types—along with others being investigated—are in various stages of research and development. Although several vendors already offer products based on some of these technologies, today’s research is what will drive them into the future and make it possible to create a memory-storage stack in which all memory is nonvolatile, profoundly changing the way we deliver applications and store data.
What does the future hold?
The memory technologies I’ve discussed so far are mostly works in progress, with vendors looking for ways to make them more practical and profitable beyond a handful of small niche use cases. But researchers are also looking further into the future, working on technologies that are still in their infancy or have been around for a while but are now being infused with new efforts.
One area of research that’s caught the industry’s imagination is silica glass, which can be used to store data much like the crystals that taught Superman about his Kryptonian roots. The idea of silica glass storage got its boost in 2013 from researchers at the University of Southampton, who demonstrated storing a 300 KB text file in fused glass.
The storage medium, referred to as 5D memory crystal, or 5D storage, relies on superfast femtosecond laser technology, like that used for refractive surgery. The laser etches microscopic nanogratings into the glass to provide the data bit structure. A special technique is then used to retrieve the data, taking advantage of the light’s polarization and intensity.
According to the researchers, a 25-mm silica disk could store as much as 360 TB of data, withstand temperatures up to 190 degrees Celsius, and remain viable for over 13 billion years, making today’s storage media seem like cardboard cutouts. In fact, 5D storage has already received its fair share of attention. A silica disk storing Isaac Asimov’s Foundation series now orbits the sun, sitting inside Elon Musk’s cherry red Tesla Roadster, which itself launched aboard the SpaceX Falcon Heavy rocket.
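Taking the researchers' 360 TB figure at face value, a back-of-the-envelope calculation shows how the technology maps onto the IDC forecast mentioned earlier (decimal units assumed):

```python
ZB = 10**21                    # zettabyte, in bytes (decimal units)
TB = 10**12                    # terabyte
forecast_2025 = 175 * ZB       # IDC's projected global datasphere
disk_capacity = 360 * TB       # claimed capacity of one 25-mm silica disk

# Disks needed to hold the entire projected 2025 datasphere.
print(f"{forecast_2025 // disk_capacity:,} disks")  # about 486 million
```

Nearly half a billion coaster-sized disks is still a lot of glass, but it's a striking figure next to the billions of HDDs and SSDs that data would otherwise require.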
Microsoft was so impressed with the 5D storage technology that it has launched its own initiative, dubbed Project Silica, whose stated goal is to develop the “first-ever storage technology designed and built from the media up, for the cloud.” Project Silica uses femtosecond lasers to write data into quartz glass, the same process used for 5D storage. As its first proof of concept, Microsoft teamed up with Warner Bros. to store and retrieve the entire 1978 Superman movie on a piece of glass about the size of a drink coaster.
Another innovative approach to data storage is racetrack memory, which was first proposed by IBM researchers in 2008. Racetrack memory applies electrical current to nanowires to create domain walls with opposite magnetic regions between them (thus the racetrack concept). The domain walls and their regions provide a structure for efficiently storing data. IBM hopes that racetrack technology might eventually yield a nonvolatile, solid-state storage device that can hold 100 times more data than current technologies at a lower cost-per-GB.
Other researchers are pursuing a different approach to racetrack memory, leveraging the inherent properties in skyrmions, which are microscopic swirls found in certain magnetic materials. Skyrmions work in conjunction with anti-skyrmions to create opposing magnetic swirls that can be used to create a three-dimensional structure for hosting digital data. Skyrmion-based storage requires very little current and has the potential for storing large quantities of data while delivering high-speed performance.
Scientists are also researching the potential of storing data at the molecular level. One of the most publicized approaches is DNA, in which data is encoded directly into the genetic material. Corporate, university, and government researchers are actively pursuing DNA’s potential for persisting data. DNA can store massive amounts of information, is millions of times more efficient than anything we have today, requires almost no maintenance, and can endure for many millennia.
The challenge with DNA storage, however, is that it’s error-prone and expensive to produce. To address these issues, scientists have been experimenting with multiple solutions. For example, researchers at the University of Texas at Austin have come up with error-correcting algorithms that help compensate for the high rate of errors. Using synthetic DNA, they have successfully stored the entire book The Wizard of Oz, translated into Esperanto. But this is nothing compared to DNA’s true potential. As many have claimed, DNA could make it possible to store the entire internet in a shoe box.
Despite the enthusiasm around DNA storage, researchers are also investigating different molecular storage techniques, using molecules that are smaller than DNA and other long-chain polymers. The big advantage here is that smaller molecules can be cheaper and easier to produce, and they have the potential for storing more data. If that’s not small enough, scientists are also researching single-atom data storage, with each bit stored in an individual atom. So far, I’ve come across no discussions about going smaller.
Where do we go from here?
If technologies such as molecular storage and silica glass storage can be manufactured in a way that is both efficient and cheap, we’ll be better prepared to handle all the data that’s expected in the years to come. But we have a long way to go before we get there, and until then, we’ll have to rely on the advancements being made with NAND flash and its alternatives, as well as with SCM. What we’ll do with all that data once we figure out how to store it is another matter altogether. In terms of storage, however, the sky is indeed the limit.