This article is part of the Technology Insight series, made possible with funding from Intel.
If you run software on workstations that just can’t get enough DRAM — think rendering, video editing, machine learning, and 3D modeling — then the idea of a fast, persistent memory technology living closer to your CPU is immensely attractive. Doubly so if you can get it in big capacities at a price-per-gigabyte discount compared to DDR4.
“Consider an engineer who’s building an airplane and has a humongous mechanical model that can’t currently fit in memory,” said Frank Hady, fellow and director of Intel’s storage technology group. “He’s waiting for it to be pulled in from an SSD. What if that whole model could fit in memory?”
At a recent briefing in Seoul, South Korea, Intel reviewed plans to arm workstations with support for Optane DC persistent memory modules (DCPMMs). Not only does the technology make it possible to keep large projects in memory, but it can also retain those projects when the power goes out.
When Intel introduced its Optane DC persistent memory modules back in 2018, the company aimed for enterprise customers with big capacity needs, serious performance requirements, and real cost concerns. But Intel’s plan was always to push into other segments bottlenecked by storage. We’ll walk you through how this is all going to play out.
- Intel will soon make Optane DC persistent memory available on workstations through second-gen Xeon Scalable CPUs.
- The technology will work in Memory Mode with no software changes needed. Workstation apps see DRAM and DCPMMs as one large pool of memory.
- App Direct Mode makes optimized software aware of DRAM and Optane installed on the same bus. Data stored on the DCPMMs is persistent in this mode.
- Demos of both modes show that Optane DC persistent memory improves performance and functionality in storage-bound workstation apps.
Laying the foundation for persistent memory support
As you know from our coverage of Intel’s Optane technology, enabling persistent memory required a substantial hardware and software effort. The datacenter ecosystem is still taking shape as developers think up new ways to leverage the tech’s potential and optimize for this new storage tier.
The DCPMMs themselves are much more complex than the DDR4 DIMMs they look like. In fact, Intel engineers describe the DCPMM as a complete computer on a module. Power, thermals, error-checking, and encryption are all handled by a sophisticated controller nestled between the Optane media chips.
That piece of silicon is also responsible for talking to Intel’s second-generation Xeon Scalable CPUs through a new protocol called DDR-T, which accommodates out-of-order transactions to the module without interrupting DDR4 traffic on the bus. It’s a critical building block for adding new functionality and preserving interoperability. Of course, the Xeons must be able to “speak” DDR-T, too.
Why does the technical background matter? With so many platform components in play, Intel is in a unique position to make Optane viable. No other storage vendor has insight into the CPU’s memory controller, the motherboard BIOS, or the software tools used to optimize for a persistent memory programming model. It also explains why you can’t drop a DCPMM into just any DDR4 slot.
Right now, those second-gen Xeon Scalable processors are the only ones with Optane DC persistent memory support, so expect the first DCPMM-equipped workstations in single- and dual-socket configurations with up to 3TB of persistent memory per CPU and 6TB per PC. Since Cascade Lake-based Core i9 models feature the same memory controller, we suspect they might also get an official blessing from Intel.
My workstation has persistent memory. Now what?
Optane DC persistent memory is accessed the same way in a workstation and server: through Memory Mode, App Direct Mode, or the hybrid Dual Mode.
With DCPMMs configured to operate in Memory Mode, applications see DRAM and Optane DC persistent memory as one big pool of volatile memory. Once your workstation shuts down, all the data in that pool is lost. No software modifications are necessary to use Optane in Memory Mode, though. DCPMMs are installed alongside your workstation’s DDR4 modules to expand capacity at a lower price per gigabyte than DRAM.
Intel’s Xeon CPUs know the difference between the two memory technologies and tier them accordingly. DRAM is treated as a cache for frequently accessed data. The processor checks there first. If the information isn’t found, it’s read from Optane DC persistent memory at slightly higher latency. But if you have a multi-terabyte project that would have previously spilled over onto an SSD, keeping it on the memory bus, closer to your CPU, improves performance dramatically.
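The tiering behavior described above can be sketched in software terms: a small, fast "near" tier fronts a large, slower "far" tier, and lookups fall back to the far tier on a miss. This is only an illustrative analogy written in Python; on real hardware, the Xeon memory controller performs this caching in silicon, completely transparently to software:

```python
from collections import OrderedDict

class TieredStore:
    """Software analogy for Memory Mode: a small DRAM-like cache
    in front of a larger, higher-latency Optane-like tier."""

    def __init__(self, cache_size):
        self.cache_size = cache_size
        self.cache = OrderedDict()  # near tier: fast, small, LRU-managed
        self.far = {}               # far tier: large, always holds the data

    def write(self, key, value):
        self.far[key] = value       # the backing tier keeps everything
        self._cache_put(key, value)

    def read(self, key):
        if key in self.cache:       # hit in the fast tier
            self.cache.move_to_end(key)
            return self.cache[key], "near"
        value = self.far[key]       # miss: fetch from the slower tier...
        self._cache_put(key, value) # ...and promote it for next time
        return value, "far"

    def _cache_put(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict least recently used

store = TieredStore(cache_size=2)
store.write("a", 1)
store.write("b", 2)
store.write("c", 3)          # "a" is evicted from the near tier
print(store.read("a"))       # (1, 'far'): first read misses the cache
print(store.read("a"))       # (1, 'near'): second read hits it
```

The key property the demo numbers hinge on is the last line of `read`: even a "far" access stays on the memory bus, which is why spilling to Optane is far cheaper than spilling to an SSD.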
In fact, we’ve already seen demos that quantify the benefit of going big on Optane DC persistent memory. ANSYS showed off its Mechanical structural analysis solver running on one system with 192GB of DDR4 memory and 1.5TB of DCPMMs in Memory Mode, comparing it to a similar setup with just the 192GB of DRAM. Fitting an entire test model into storage on the memory bus more than doubled performance.
Of course, as developers envision new ways to exploit Intel’s Optane technology, they can optimize for App Direct Mode, which makes the operating system and applications aware of both memory types on the same bus. Files that need the lowest latency can be directed to DRAM, while larger data structures requiring persistence are put into the DCPMMs.
HP’s engineers took it upon themselves to rewrite portions of Blender, an open source 3D content creation tool, for this second operating mode. In one demonstration, HP showed how loading a project stored in persistent memory was four times faster than fetching it from an SSD. Another tweak moved the Undo/Redo buffer to persistent memory, maintaining it even after closing and restarting Blender. As a result, every edit could be committed immediately, without the need for manual or automatic save points.
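The shape of what HP did with Blender’s Undo/Redo buffer can be sketched with an ordinary memory-mapped file. This is a simplified illustration, not Blender’s actual code: on real DCPMMs in App Direct Mode, the file would live on a DAX-mounted persistent-memory filesystem and the flush would typically go through Intel’s PMDK (e.g. `pmem_persist`) rather than a page-cache flush. The file name and record layout here are made up for the example:

```python
import mmap
import os
import struct

PATH = "undo_buffer.bin"  # hypothetical; real pmem code would target a DAX mount
SIZE = 4096

def open_buffer(path=PATH, size=SIZE):
    """Map a file into the process's address space. Committed writes
    survive closing and restarting the program."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, size)
    buf = mmap.mmap(fd, size)
    os.close(fd)  # the mapping stays valid after the fd is closed
    return buf

def push_edit(buf, payload: bytes):
    """Append one length-prefixed edit record and make it durable.
    Header layout (made up for this sketch): record count at offset 0,
    next free offset at offset 4, records from offset 8 onward."""
    count = struct.unpack_from("<I", buf, 0)[0]
    offset = struct.unpack_from("<I", buf, 4)[0] or 8
    struct.pack_into("<I", buf, offset, len(payload))
    buf[offset + 4 : offset + 4 + len(payload)] = payload
    struct.pack_into("<II", buf, 0, count + 1, offset + 4 + len(payload))
    buf.flush()  # analogous to pmem_persist(): commit the edit immediately

buf = open_buffer()
push_edit(buf, b"move vertex 42")
buf.close()

# Reopen, as if the application had been closed and restarted:
buf = open_buffer()
print(struct.unpack_from("<I", buf, 0)[0])  # record count survived the restart
```

Because every `push_edit` is durable the moment it returns, there is no separate save step to wait for, which is exactly the property HP exploited to commit each Blender edit immediately.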
Fortunately, you don’t have to choose between Memory Mode and App Direct Mode. On workstations running a mix of applications, it’s possible to partition Optane DC persistent memory for the best of both worlds.
We’re almost there
According to Intel, Optane DC persistent memory is already supported across multiple Linux distributions, a couple of Windows Server versions, and VMware’s virtualization software. Windows 10 Pro for Workstations, the more relevant operating system, also supports DCPMMs in Memory Mode, App Direct Mode, and the mixed Dual Mode.
It’s no secret that Microsoft is still fine-tuning its implementation. When one of the benchmarks on display at Memory/Storage Day in South Korea finished with an error, Intel acknowledged the ongoing work to polish integration, which is why an official blessing for the pairing is still forthcoming.
If you’d rather not wait, there are plenty of workstation-class use cases native to Linux where first-generation DCPMMs are ready to rock and roll. After all, Optane DC persistent memory debuted alongside Linux in the enterprise space.
Intel says its technology will eventually find its way down to desktop PCs. As a teaser, it showed off a modified version of Doom able to load a previously-saved campaign from persistent memory almost two times faster than legacy storage.
Until then, Optane DC persistent memory is decidedly business-oriented, showing the most promise in situations where performance saves time, and where time is money. The workstation world is full of those types of applications.
Optane DC persistent memory is poised to have a transformative impact on the storage hierarchy, and industry experts are anticipating rapid and significant growth. In a 2019 Flash Memory Summit presentation, Jim Handy of Objective Analysis predicted revenues of 3D XPoint—the non-volatile memory technology that Intel branded as Optane—in excess of $3.5 billion through 2023.
When Intel signals the readiness of its DCPMMs for workstations, you’ll want to be ready. Explore the benefits of Memory Mode first, and then the App Direct Mode-optimized software sure to follow.