In my last blog I told you about a Networking Overview session that took place at Ignite. In this blog, I’d like to focus on the Platform Vision & Strategy (4 of 7): Storage Overview session.
From the outset it was a study in contrasts: what you couldn’t do yesterday versus what you can do today with Software-Defined Storage (SDS). It also provided a look at where the SDS journey will take you in the future and, of course, there was a hefty dose of demos to bring those words to life. You can view the session in its entirety here, but in case you don’t get the opportunity, I’ll run down the salient points.
This session focused primarily on one of Microsoft’s storage options: the private cloud with Microsoft SDS (Microsoft Azure Stack). To understand more about this option, we first need to understand exactly what SDS is. Microsoft’s Principal Group Program Manager, Siddhartha Roy, defined it as putting software intelligence on industry-standard volume hardware to deliver feature-rich, cloud-scale storage and economics. Depending on its interpretation, SDS can mean different things to different people.
Microsoft’s idea of SDS is primary application data storage on cost-effective, continuously available, high-performance SMB3 file shares backed by tiered storage spaces. And it’s delivering that in Windows Server 2012 R2 and System Center 2012 R2. The solution, which is focused primarily on enabling Hyper-V deployments, comprises shared JBODs at the bottom of the stack connected to a couple of file servers; that constitutes the storage backend. A host of different consumers of that storage are connected to the backend through an SMB3 storage network fabric. Across the stack, end-to-end, runs System Center, which provides unified storage management.
When it comes to a file-based storage solution, one might wonder: will it have the performance and scalability I need? According to Jose Barreto, Principal Program Manager at Microsoft, the answer is yes. In fact, he says that, “Microsoft has proven over the last 3 to 4 years that this is something available to you today through a variety of different hardware solutions.”
A key piece in enabling continuous availability of the Microsoft solution is the Scale-Out File Server (SOFS), whereby a set of machines works together as a cluster, and clients can access the storage through any of the nodes, giving you a seamless scale-out capability. So, if you have 4 servers, each with 10 Gbps of network bandwidth, you can effectively achieve 40 Gbps of bandwidth if everything else can keep up. And, if you lose a node, the other 3 keep everything running without disruption to Hyper-V.
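To make the scale-out arithmetic concrete, here is a minimal conceptual sketch (plain Python, not the actual SOFS implementation): bandwidth aggregates across nodes, and losing a node just removes its share while the survivors keep serving.

```python
# Conceptual model of Scale-Out File Server bandwidth aggregation.
# Node names and speeds are hypothetical examples, not real cluster config.

def aggregate_bandwidth_gbps(nodes):
    """Total client-facing bandwidth when SMB3 traffic spreads across all nodes."""
    return sum(nodes.values())

def after_failure(nodes, failed):
    """The remaining nodes keep serving the same shares without disruption."""
    return {name: bw for name, bw in nodes.items() if name != failed}

cluster = {"fs1": 10, "fs2": 10, "fs3": 10, "fs4": 10}  # 4 nodes x 10 Gbps
print(aggregate_bandwidth_gbps(cluster))                # 40
survivors = after_failure(cluster, "fs1")               # node fs1 goes down
print(aggregate_bandwidth_gbps(survivors))              # 30
```

In the real cluster, SMB3 Transparent Failover is what lets clients of a failed node reconnect to a surviving node without the application noticing.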
Storage Spaces is another key component that lives on these file servers. In the past, if you had a bunch of disks that you wanted to connect to a Windows Server machine, you had no way to transform, say, 100 or 200 disks into resilient storage you could rely on even if some individual disks failed. With Storage Spaces, you can do just that. You connect those 100 or 200 disks to a Windows Server cluster and put all of them into a pool. You can then carve spaces out of that pool, such as two-way mirror, three-way mirror, or parity spaces. This gives you resiliency to individual disk failures and even enclosure failures. It also increases storage performance, because you can combine the performance of multiple disks. Moreover, it reduces cost, since you’re essentially connecting the bare components of the storage solution to your Windows Server.
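The capacity-versus-resiliency trade-off for mirror spaces can be sketched with some simple arithmetic (hypothetical helper functions, not the Windows API; parity spaces use erasure coding and follow a different formula):

```python
# Conceptual math for Storage Spaces mirror layouts: each layout keeps
# `copies` copies of the data, so usable capacity is raw / copies and
# the space survives (copies - 1) simultaneous disk failures.

LAYOUT_COPIES = {"simple": 1, "two-way mirror": 2, "three-way mirror": 3}

def usable_capacity_tb(pool_raw_tb, layout):
    return pool_raw_tb / LAYOUT_COPIES[layout]

def tolerated_disk_failures(layout):
    return LAYOUT_COPIES[layout] - 1

pool_raw = 200 * 4  # e.g. 200 disks of 4 TB each -> 800 TB raw
print(usable_capacity_tb(pool_raw, "two-way mirror"))   # 400.0
print(tolerated_disk_failures("three-way mirror"))      # 2
```

Enclosure awareness extends the same idea: copies are placed in different JBODs, so a whole enclosure can fail without data loss.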
A capability known as tiering makes the Microsoft SDS solution highly efficient. Tiering essentially figures out which data is hot and which is not, and moves it to the right tier to give you the best performance. You use it by creating a storage space that combines SSDs in one tier and traditional spinning disks in another. It can also be used with deduplication.
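The hot/cold decision at the heart of tiering can be illustrated with a toy sketch (the real engine works on sub-file extents and moves data on a schedule; file names and heat counts below are invented examples):

```python
# Conceptual model of storage tiering: rank data by access frequency
# ("heat") and keep the hottest items on the SSD tier until it is full.

def place(extents, ssd_slots):
    """Split extents into (ssd, hdd) lists by descending access count."""
    ranked = sorted(extents.items(), key=lambda kv: kv[1], reverse=True)
    ssd = [name for name, _ in ranked[:ssd_slots]]
    hdd = [name for name, _ in ranked[ssd_slots:]]
    return ssd, hdd

heat = {"vm1.vhdx": 950, "vm2.vhdx": 40, "logs.vhdx": 700, "archive.vhdx": 3}
ssd_tier, hdd_tier = place(heat, ssd_slots=2)
print(ssd_tier)  # ['vm1.vhdx', 'logs.vhdx']
print(hdd_tier)  # ['vm2.vhdx', 'archive.vhdx']
```

This also hints at why tiering pairs well with deduplication: deduplicated data is denser, so more of the hot working set fits in the fast tier.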
All of these capabilities are managed with System Center. One of the things you can do with System Center is provision virtualized storage using Virtual Machine Manager (VMM). VMM can bare-metal provision a Hyper-V host—you might already be aware of that capability. But it can now also bare-metal provision a File Server and build a File Server cluster out of it. It can even help assign the right permissions to the file shares.
All of these pieces of the puzzle, so to speak, come together in one integrated hardware-and-software solution called Dell Storage with Microsoft Storage Spaces. The solution is a 4-node file server cluster connected to 4 JBODs, each with 60 disks (a mix of HDDs and SSDs), and it’s managed by System Center.
Microsoft partnered with Dell to offer different configuration options. These options, pre-defined SKUs, are based on 13th-generation Dell enterprise servers and 12 Gb SAS JBODs. They are available in 2x2 (i.e., 2 file servers by 2 enclosures), 2x3, 2x4, 3x3, and 4x4 SOFS configurations. Other Microsoft partners (e.g., DataON, HP and RAID Inc.) deliver JBOD solutions for Storage Spaces.
In addition to the Windows Server 2012 R2 story discussed in the session—keep in mind that EVERYTHING described up to this point is actually available today with WS 2012 R2—Microsoft speakers gave a preview of what to expect in Windows Server 2016 and the next version of Microsoft Azure Stack. For each, coming features will focus on reliability, scalability, manageability, and reduced cost.
Windows Server 2016 Technical Preview 2 includes: Storage Quality of Service (QoS) via a Policy Manager, a feature that allows the storage performance of VMs to be controlled and monitored; Rolling Upgrades, which allow customers to move to the latest version of Windows Server without a disruptive forklift upgrade; and Storage Replica, which will enable the protection of key data and workloads from transient failures. For more information on Windows Server 2016 Technical Preview 2, check out:
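The intent behind Storage QoS policies can be sketched very simply (a conceptual model only, not the Windows Server 2016 Policy Manager API; the policy names and numbers are invented): each policy sets a floor the fabric tries to guarantee and a ceiling it enforces.

```python
# Conceptual model of a Storage QoS policy: a VM's delivered IOPS are
# throttled to the policy maximum, while the minimum acts as a reservation
# the cluster tries to honor when the VM has at least that much demand.

def delivered_iops(demanded_iops, policy):
    """Simplified: clamp demand to [min_iops, max_iops]."""
    return max(policy["min_iops"], min(demanded_iops, policy["max_iops"]))

gold = {"min_iops": 1000, "max_iops": 5000}  # hypothetical "gold" policy
print(delivered_iops(8000, gold))  # 5000 (throttled to the maximum)
print(delivered_iops(2500, gold))  # 2500 (within policy, passes through)
```

In the real feature, the minimum is only meaningful when the cluster has capacity to honor it across all competing VMs; this sketch ignores that contention.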
The session was loaded with information and I would suggest viewing it in its entirety at https://channel9.msdn.com/Events/Ignite/2015/BRK2485. And, don’t forget to check back for future IT Innovators blogs on the topic of storage.
This blog about storage and networking is sponsored by Microsoft.
Cheryl J. Ajluni is a freelance writer and editor based in California. She is the former Editor-in-Chief of Wireless Systems Design and served as the EDA/Advanced Technology editor for Electronic Design for over 10 years. She is also a published book author and patented engineer. Her work regularly appears in print and online publications. Contact her at [email protected] with your comments or story ideas.