Looking for software-defined storage to reduce cost and simplify provisioning? You might be considering the system Microsoft uses to power Azure storage: Storage Spaces Direct (S2D), built into Windows Server on top of failover clustering and packed with features, from erasure coding for fault-tolerant redundancy to real-time data tiering for performance. S2D pools the drives inside industry-standard servers over SMB3 to deliver highly available, highly scalable software-defined storage, at a fraction of the cost of a traditional SAN and with higher performance. Need more storage? Add more drives, more servers, or both. Want lower latency? Use NVMe disks alongside SAS or SATA drives for capacity.
It’s all managed by familiar Windows Server tools like System Center and PowerShell, and S2D storage can back cluster shared volumes and scale-out file servers. And it’s included in the Windows Server Datacenter Edition license. If that sounds like a good fit for you, you’re not alone; Microsoft telemetry showed more than 10,000 clusters worldwide running S2D in March 2018 (not counting internal Microsoft servers or short-term test and demo clusters), and that number was up 50 percent just six months later, in September 2018.
But with software-defined solutions, hardware matters more than ever for reliability, stability, performance and cost efficiency. Yes, you can use 10Gb Ethernet for S2D, but for performance you’ll want RDMA, which means choosing between iWARP and RoCE. The NICs you pick (and their firmware and drivers) have to support the software-defined networking options that create the S2D storage mesh between servers. And you need servers with a TPM.
You need to avoid consumer-grade SSDs with volatile cache, or you’ll see a significant drop-off in write performance after a couple of minutes’ use; write endurance will be low too, likely as low as 100GB a day over five years. You need the right balance of SSDs and high-capacity hard drives to tune the system, and you need to pick drives with the right endurance for the persistent read and write cache that gives S2D its storage performance. S2D can be optimized for capacity for archiving, optimized for performance with the lowest latency, or balanced between performance and capacity for general use. And then you need to build, configure and test the system.
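To put that endurance gap in numbers, here is a back-of-the-envelope sketch converting the common drive-writes-per-day (DWPD) rating into total terabytes written over a five-year warranty. The capacities and ratings used are illustrative examples, not specifications for any particular drive:

```python
# Back-of-the-envelope SSD endurance comparison (illustrative figures only).
# A DWPD rating converts to total terabytes written (TBW) as:
#   TBW = DWPD x capacity(TB) x 365 x years

def tbw(dwpd: float, capacity_tb: float, years: int = 5) -> float:
    """Total terabytes a drive is rated to absorb over its warranty period."""
    return dwpd * capacity_tb * 365 * years

# A consumer-class 1TB SSD limited to ~100GB of writes a day (0.1 DWPD):
consumer_tbw = tbw(dwpd=0.1, capacity_tb=1.0)
# A hypothetical enterprise cache-tier SSD rated at 3 DWPD on 1.6TB:
enterprise_tbw = tbw(dwpd=3.0, capacity_tb=1.6)

print(f"consumer:   {consumer_tbw:,.1f} TBW over 5 years")
print(f"enterprise: {enterprise_tbw:,.1f} TBW over 5 years")
```

The consumer drive works out to roughly 183TBW against 8,760TBW for the enterprise part, which is why cache-tier drives need a high DWPD rating to survive S2D's constant write traffic.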
Why should every customer setting up S2D have to go through the same integration challenges to make sure they get that performance and reliability? Instead, you can pick a validated Windows Server Software Defined (WSSD) system, with specifications certified by Microsoft, pre-configured and pre-tested by Dell EMC.
With Dell EMC Ready Solutions for Microsoft WSSD, you get a solution with the right storage and network hardware so neither becomes a bottleneck, balanced for the type of workload you need to run, with approved BIOS and firmware versions and pre-installed drivers. You can use your own Windows Server licences, and the system can arrive with the storage and network configuration already done, set up with System Center, and soak-tested as an integrated system. That includes tests like hard drive and NIC failures, and pulling the power in the middle of a backup or VM migration.
Picking Ready Solutions for Microsoft WSSD rather than building your own S2D system piecemeal also dramatically simplifies support, because you’ll never be caught between storage, networking, compute and software vendors all suggesting your problem lies with someone else’s component; you get a single support route for software and hardware (the entire solution) through Dell EMC.
Validated WSSD systems like the Dell EMC Microsoft Storage Spaces Direct Ready Nodes, based on PowerEdge R740xd servers with Intel Xeon Scalable Silver, Gold or Platinum CPUs, make S2D simple, but you still get the flexibility of the platform. You can choose a two-node system for a remote or branch office (with the file share witness in the cloud, or even on a USB key with Windows Server 2019 if you’re deploying in a location with no internet connectivity) or scale up to 16 nodes per cluster. You can choose PCIe NVMe drives alongside SAS or SATA SSDs and hard drives for hybrid or all-flash storage, and 10GbE or 25GbE networking. And as your needs grow, you can add more nodes easily.
WSSD will also be the ideal way to get the improvements to S2D in Windows Server 2019. That quadruples the storage S2D supports, with 4PB pools and 400TB per server, doubles the number of volumes (64 instead of 32), each of which can be 64TB rather than 32TB, and adds support for the latest persistent memory, like Intel Optane NVDIMMs. Mirror-accelerated parity volumes already give fast write performance by deferring the compute-intensive parity calculation, writing first to the faster mirror; you can choose the ratio of mirror to parity to balance the performance of the mirror tier against the capacity efficiency of parity. In Windows Server 2019, the IOPS of mirror-accelerated parity doubles, bringing it much closer to three-way mirroring.
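To see why that mirror-to-parity ratio matters, here is a rough capacity sketch. It assumes three-way mirror consumes 3x raw capacity per usable terabyte and takes dual parity at roughly 50 percent efficiency (2x raw); real parity efficiency varies with the number of drives in the pool, so treat the multipliers as illustrative assumptions rather than S2D's exact figures:

```python
# Rough raw-capacity footprint of a mirror-accelerated parity volume.
# Assumptions (illustrative): three-way mirror stores three copies (3x raw);
# dual parity is taken at ~50% efficiency (2x raw) - actual parity efficiency
# depends on how many drives are in the pool.

MIRROR_MULTIPLIER = 3.0   # three-way mirror: 3TB raw per usable TB
PARITY_MULTIPLIER = 2.0   # dual parity at ~50% efficiency: 2TB raw per usable TB

def raw_footprint_tb(usable_tb: float, mirror_fraction: float) -> float:
    """Raw capacity consumed by a volume split between mirror and parity tiers."""
    mirror_part = usable_tb * mirror_fraction * MIRROR_MULTIPLIER
    parity_part = usable_tb * (1 - mirror_fraction) * PARITY_MULTIPLIER
    return mirror_part + parity_part

# A 10TB volume with 20% mirror (for fast writes) and 80% parity (for capacity):
print(raw_footprint_tb(10, 0.2))   # 6 + 16 = 22TB of raw capacity
# Shifting to 50% mirror trades capacity for write performance:
print(raw_footprint_tb(10, 0.5))   # 15 + 10 = 25TB of raw capacity
```

Under these assumptions, growing the mirror tier from 20 to 50 percent of a 10TB volume costs an extra 3TB of raw capacity, which is the performance-versus-capacity trade the ratio controls.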
And it’s all managed through the new Windows Admin Center so you can understand storage performance more clearly, including a new outlier detection feature that marks drives with abnormal latency so you see immediately if there's a hardware issue bringing down throughput. This is a dramatically better way to manage Storage Spaces Direct, and with WSSD it’s faster and more convenient to get the advantages of software-defined storage without all the fuss.
Sponsored by Dell EMC