One of the most enjoyable aspects of providing trusted technology solutions is meeting with customers for frank and far-ranging conversations, including what’s next in data centers. I also make several trips a year to talk with the venture capital community to help identify emerging memory trends and breakthroughs in shared accelerated storage. My vantage point allows me to offer some trends and developments we might see in 2019.

1. Storage interfaces will consolidate.

For several technology generations, storage interfaces like SATA, SAS and PCIe (AHCI) have co-existed in the data center. That’s changing. In 2019 and beyond, SAS presence will continue to dwindle, and SATA is no longer part of many innovation roadmaps. The Enterprise and Datacenter SSD Form Factor (EDSFF) Working Group of industry promoters and contributors is releasing its shortlist, focused on standardizing high-capacity, hot-pluggable, 1U vertical SSDs with scalability to PCIe Gen5 speeds (spoiler: there’s a 1U short and a 1U long), which will cause some form factor proliferation in the near term.

The improved TCO of quad-level cell (QLC) 3D NAND technology augments TLC for certain read-intensive use cases, enabling flash to compete more directly with 10K and 7200 RPM HDDs, the last bastions of 50-year-old rotating media.

We’re ready for this. Distributed storage and distributed applications are more popular than ever, broadening and accelerating the movement to flash (in local as well as public and private cloud). As flash media adoption grew, flash adapted to fit opportunistic uses. With NVMe™ we have a storage protocol designed for flash from the ground up. NVMe flash, blazing fast and highly scalable, connects the CPU closely to storage subsystems.

What’s next: Flash-storage interfaces will consolidate. Fabric-based storage will become standard. NVMe flash will push legacy interfaces out of broad data center adoption and toward the periphery, though not out of the network endpoints, where thin, fast and low-power innovations will cement their leadership. When legacy interfaces themselves get in the way of the benefits of fast flash storage, even hold-outs will agree they’re not worth keeping in the ecosystem.

2. Workload-based storage device tuning will broaden.

Just as applications have evolved to shared infrastructure, flash storage has evolved from one-size-fits-most devices to factory-designed, factory-tuned devices built to simplify deployment. Data center flash and its uses are very different now from what they were not so long ago. As our industry (application architects and deployment designers, along with SSD and NAND manufacturers) learned more about how different applications and workloads interact with storage, we adopted new flash technologies and integrated them into new SSD designs matched to those workloads.

SSD suppliers now build workload-tuned SSDs right from the factory. Write-centric, mixed-use and read-centric SSDs match how historical and emerging applications use storage. That’s going to broaden.

What’s next: Flash will unlock the pent-up value of data that’s locked away on legacy HDDs. New flash types like our recently announced quad-level cell (four bits per cell) NAND are designed to deliver value in read-mostly applications and workloads. Applications like analytics, AI, deep learning and machine learning (and a host of others) excel when we feed them data as fast as possible. Their data changes little; how quickly we feed their processing engines is what matters.

3. 5G adoption will drive new ways of doing edge compute.

High-bandwidth, reliable, broad-geography networks have already helped change how we think of mobile communications. Broad-range (cellular) networks are “just like being there,” accelerating possibilities in mobile communications. With the global 5G roll-out coming in 2020, new technology innovations are required to deliver scalability, capability, security and efficiency, all built on Micron technology. We’ve been working with ecosystem partners and customers to ensure all data is securely connected, from the edge to the cloud.

What’s next: Multiple industries are preparing for the 5G networking revolution: downloads up to 100x faster than 4G/LTE (on the order of 2 gigabits per second), data streams from 20 billion connected devices, and latencies as low as 1 millisecond. Fast, vast storage will be critical. Remote workers will be able to travel with productivity levels just like being in an office. 5G’s bandwidth and low latency will blur the distinction between local and remote.

4. The rise of the AI Server and AI-enabling endpoints.

The past several years have been game-changers in artificial intelligence (AI) and machine learning (ML). Deep learning, where the AI systems use multi-layered neural networks instead of traditional statistical machine learning algorithms, has enabled a huge jump to next-generation performance. So, while AI itself isn’t new for 2019, enterprise-ready AI is claiming tremendous resources: memory, storage and compute. And everybody is getting into the AI business, it seems.

AI Model Generation: A complete AI model (for example, a model of the English language) can easily run on phones or tablets. Creating those models takes much more. To build them, development (processing) engines need to house immense data sets in memory and access them very quickly.

Endpoints: Endpoint devices gather data to feed AI models. For proof, see how self-driving cars have radically increased their memory, storage and compute capability as they’ve progressed in layers of AI.

What’s next: AI servers will be their own category. They require 6x the DRAM and 2.6x the SSD capacity of a standard cloud/data center server, memory and storage that enable fast data access and fast data processing. AI servers will make up about 10 percent of cloud infrastructure by 2021, growing to 50 percent by 2025.

5. 3D XPoint™ hits its stride.

3D XPoint memory – the breakthrough non-volatile, extremely fast storage technology that sits between volatile DRAM and non-volatile NAND – fills an emerging cost/benefit gap between these two building blocks. 3D XPoint is persistent memory, not as fast as DRAM but substantially faster than NAND. Unlike DRAM, it retains its data without power. This creates a new tier of memory and storage, enabling our customers to reimagine their entire memory and storage stack.

The initial product offerings are NVMe, since legacy hard drive interfaces such as SATA are too slow and would inhibit the performance benefits. 3D XPoint memory doesn’t depend on transistors or electron flow, so memory based on 3D XPoint will last far longer than flash memory.

What’s next: 3D XPoint will hit its stride in 2019, moving into broader deployments and expanding addressable memory to deliver a bigger payoff than simply moving to the next, faster processor. Expect to see 3D XPoint storage in specialized data center use cases, such as high-speed stock trading, where delays must be kept vanishingly small. And of course in big data and artificial intelligence, both in AI servers for fast machine learning and ingest, and in the smart systems that need fast compute wherever they sit in the network. More details to come (follow my blog to read about them later).