
Microsoft's Jeffrey Snover Discusses Windows Server 2012

Microsoft's lead architect for the Windows Server Division on the latest server operating system

Jeffrey Snover is lead architect for the Windows Server Division at Microsoft. He has been in the IT industry for over 30 years and with Microsoft since 1999, and he invented Windows PowerShell. Windows IT Pro technical directors Sean Deuby and Michael Otey sat down with Snover at the Windows Connections conference in early November to discuss some of the latest enhancements to Windows Server.

Jeffrey Snover, Sean Deuby, and Michael Otey

 

Sean Deuby: What does "lead architect for Windows Server" mean? What does that role look like on a day-to-day basis?

Jeffrey Snover: It's a couple of things. First, we have people who are very specific to a feature -- like the architect for that feature. My role is to look across the scenarios to make sure they're all fitting together. And then in general, the job of an architect is to be what I call the "guardian of the long-term," which is to say, "Hey, that's great, but how is that going to fit in the next release and the release after that?" So that's what I do for Windows Server -- figure out not just how to deliver a great product in this release, but also use this release to set up the next one and the one after that.

Michael Otey: Sean and I attended the Windows Server 8 Reviewer's Workshop in September, and we were blown away by all the new features that are coming up in Server 8. I know there are way too many to talk about, but what are some of the highlights for you of what we're going to see?

Snover: As a technologist, I look at the technology innovations. Although we've had great technology innovations in the past, I think this is by far the largest, most transformative release we've ever had. We have major innovations in storage. Honestly, in the past, we've had some weakness in our storage stack and people had to buy very expensive, very high-end storage arrays to do some of the things they wanted to do. Now if you have those things you're going to get more value out of them because we have a close partnership with those storage vendors. A lot of the things you think you could only get from the storage arrays, you're going to get with in-box storage. If you sit down and look at the details, in every single layer of the storage stack there's transformation -- the way we deal with disks, the way we deal with the file system, the way we cluster things together.

Otey: Some of the things that really jumped out at me from the storage side were the built-in data deduplication capabilities, which are pretty amazing; the total revamp of the Checkdisk operations, which are much more efficient and online and dynamic; and the ability to take advantage of the storage back-end arrays, where you wouldn't have to funnel the I/O through the servers and instead when you're doing a file-copy operation on that back-end array you can tell it to take advantage of those kinds of things that are built in. Those are big changes, and they're obviously baked into the hardware at a pretty deep level and into the OS.
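In the product as it shipped, the deduplication Otey mentions is a file-server role service you enable per volume and can drive entirely from PowerShell. A minimal sketch with the Server 2012 dedup cmdlets (the volume letter is just an example):

```powershell
# Install the deduplication role service on the file server.
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on a data volume; files unchanged for the default
# minimum age become candidates for optimization.
Enable-DedupVolume -Volume "D:"

# Kick off an optimization job immediately rather than waiting
# for the background schedule.
Start-DedupJob -Volume "D:" -Type Optimization

# Check space savings once the job has run.
Get-DedupStatus -Volume "D:"
```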

Snover: Right, and in the whole storage space -- the ability to take a bunch of inexpensive disks, pool them together, and do thin provisioning. In the past, NTFS was designed around SCSI. But it turns out, some of the inexpensive SATA drives didn't correctly implement a number of the commands. They looked good on a benchmark, but not so good when there's a power outage and now you don't have [any data]. Now we detect that and modify our flushing algorithms to ensure consistency and get great reliability, from notebook drives all the way up to very large sets of SATA drives -- so now you can more safely take advantage of commodity components.
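The disk pooling and thin provisioning Snover describes shipped as Storage Spaces, driven by cmdlets like these (the pool and disk names are illustrative, not prescriptive):

```powershell
# Gather the physical disks that are eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from those commodity disks.
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Carve out a thinly provisioned, mirrored space; capacity is
# promised up front but consumed only as data is written.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "Space1" `
    -ResiliencySettingName Mirror `
    -ProvisioningType Thin `
    -Size 10TB
```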

Deuby: That raises an interesting point. A lot of companies are still upgrading to Server 2008 R2, even though Windows Server 8 will be out soon. There's a big push toward private cloud, and a lot of people are wondering how to manage the private cloud. Storage is one of the reasons you should migrate to Server 8 rather than stick with Server 2008 R2 as you're building your private cloud and your next-generation infrastructure.

Otey: Another thing we were really impressed with is some of the changes in the new hypervisor and virtualization of Server 8. Can you tell us about those?

Snover: First is scale, scale, scale. That's not just limited to virtualization. There's been a strong push from the very beginning on scale -- finding out, throughout the stack, where the bottlenecks are and fixing them. So now we go up to 640 CPUs. It's phenomenal. On the virtualization side, a host can now have 160 logical processors and 2TB of RAM, and the VMs themselves can be 32-processor VMs with 512GB of RAM.

Deuby: This is version 3 of the hypervisor; can you tell us about v3?

Snover: In the vast majority of my 31 years in the industry, I worked for companies that competed against Microsoft. Microsoft would enter a market and would quickly point out the competition's failings and flaws. We'd pat ourselves on the back and feel very confident -- but honestly, we always knew: Sell your stock options before Microsoft's version 3 hits the ground, because by version 3 they've dialed it in and figured it out. They do a great job with version 3. And that's certainly the case with hypervisor and virtualization. It's not just about scale, but also the ability to do replication -- definitely the best replication story out there, achieving what some people call the Holy Grail of replication, where not only can I do replicas based on a cluster, I can do it without a cluster using shared storage, or just having an Ethernet cable.

 

And you have the ability to do that replication synchronously or asynchronously and the capability for disaster recovery scenarios, where you can say, "Hey, I'm running this here and asynchronously replicating it, perhaps in the cloud to a hoster in case anything goes wrong here." Machines go down -- but sometimes entire sites go down, so the ability to inexpensively back up to the cloud is a wonderful thing.
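The replication he describes surfaced as Hyper-V Replica, configurable entirely from PowerShell. A hedged sketch (the VM and host names are hypothetical, and the replica server must first be configured to accept replication with Set-VMReplicationServer):

```powershell
# On the primary host: point a VM at a replica server.
# Kerberos over HTTP (port 80) is the simplest in-domain option;
# certificate-based HTTPS works across untrusted boundaries.
Enable-VMReplication -VMName "Web01" `
    -ReplicaServerName "dr-host.contoso.com" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos

# Seed the replica over the network.
Start-VMInitialReplication -VMName "Web01"

# Later, a planned failover can be driven with Start-VMFailover.
```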

Deuby: I think you bring up a key point, which is the inexpensive part of it. So, small-to-midsized businesses (SMBs) don't have to pay an arm and a leg.

Otey: That's true. VMware has been criticized for being an expensive solution, and it seems like it becomes more expensive all the time. If Hyper-V is built in to Server 8, it's a good value proposition for SMBs.

Snover: This is Microsoft's history and our distinctive competence: the ability to take very high-end, very expensive computing and make it available to the masses. You see that with virtualization, which had become very high-end and increasingly expensive. I think [VMware] became aware that v3 is coming out, so it's jacking up prices to get the money while it can. You see it with virtualization, you see it with storage, you see it with management.

Another example is remote direct memory access [RDMA]. With a specialized NIC, you get an alternative network path that bypasses the TCP stack. It's amazingly fast and amazingly low-latency because it's all done in the hardware. In the past, that was really the domain of the high-performance computing world: a few thousand people paid through the nose for these great NICs and got fantastic performance. Now what we're saying is that everybody should be delivering NICs like that, because in addition to those scenarios, which continue to exist, we now have a kernel-mode API that can access RDMA, and we take advantage of that kernel-mode API with SMB. So SMB Direct gives you the ability to use RDMA and go as fast as the wind.

Microsoft's engineers changed the protocol so we can use multiple TCP connections on the same SMB session. This means you can have as many NICs as you want, connected between source and destination. And the session is dynamic, so you can remove a NIC and dynamically adjust. You can add a NIC and dynamically adjust. So you have maximum bandwidth and maximum resiliency. This was the most impressive thing, in my opinion -- the fast failover.
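In the shipping bits this is SMB Multichannel, and it's on by default; a few cmdlets let you verify the behavior Snover describes without any setup:

```powershell
# Confirm multichannel is enabled on the server side (the default).
Get-SmbServerConfiguration | Select-Object EnableMultiChannel

# While a file copy is running, list the interfaces an SMB session
# is actually spread across -- add or remove a NIC and run it again.
Get-SmbMultichannelConnection

# See which NICs advertise RDMA capability for SMB Direct.
Get-NetAdapterRdma
```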

In the past, if you clustered your file server, you got high availability. Now we've raised the bar from high availability to continuous availability. High availability says that if something goes wrong, you can fail over and restart your operation and succeed. Continuous availability says that if a failure occurs, we detect it and resolve it quickly enough that the application never notices. The operation takes a little bit longer, but it doesn't time out -- which is a lot of hard work.

Otey: One of the other points you touched on is that Server 8 now has built-in NIC teaming, so you don't need specialized vendor NICs to get this kind of availability.

Snover: You can say, "I got that from my vendor in the past." Well, yes and no. You could, but it only worked with that vendor's NICs. You couldn't have heterogeneous NICs. And if you ever had a problem and you called Microsoft and said, "I'm using NIC teaming," Microsoft would say, "OK, turn that off -- that might be the issue." But now we support it, so if you call, we'll help you through it. And 32 NICs in a team -- it's just phenomenal. The performance team did such a good job paying attention to NUMA, the non-uniform memory architecture.
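The built-in teaming is exposed through the NetLbfo cmdlets; a minimal sketch (the adapter names are whatever Get-NetAdapter shows on your box):

```powershell
# List the adapters available to team.
Get-NetAdapter

# Team two heterogeneous NICs; SwitchIndependent mode needs no
# special switch configuration.
New-NetLbfoTeam -Name "Team1" `
    -TeamMembers "Ethernet", "Ethernet 2" `
    -TeamingMode SwitchIndependent

# Grow the team later without downtime.
Add-NetLbfoTeamMember -Name "Ethernet 3" -Team "Team1"
```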

Otey: And that's especially important for performance in VMs.

Snover: Yes, because you can't buy a server today that isn't NUMA capable. Under NUMA, there are things that are cheap to do and things that are expensive to do, and the software has to be aware of that and pay attention to it -- otherwise, you get bad performance. So we've gone through the entire stack looking for these problems. Take receive-side scaling: I've got a lot of bandwidth coming in, and it all goes to the same processor. You can only go so big, so you want to fan it out in a way that's aware of the NUMA topology. You don't want to fan it out to a set of processors that all sit in the same NUMA node -- that doesn't really fan out. So it's just fantastic scaling.

Otey: So, that can give you linear scaling as you're moving up, as far as processing and cores and that type of thing.

Snover: Did you see those numbers? I had to go back and say, "You need to check your numbers, because I've never heard of this." The scalability of VMs -- going from an 8-CPU VM to a 16-CPU VM for a SQL Server workload -- I think it's 1.7x scaling, which is just phenomenal. You'd expect 1.4 or 1.5, and let me shake your hand. But here's one I didn't think was possible: from 16 to 32 CPUs, it was 1.9x. Those are crazy numbers.

And it's not just a bolt-on. It requires work at every layer of the stack -- and in the management space, the same thing. We have this new multi-machine Server Manager, but in fact that's just a very thin layer on top of the multi-machine management capabilities within the OS. And it changes at the protocol level, at the PowerShell level. They had to make changes to WMI. At each layer, we had to make changes to be able to support that.

Otey: You touched on something that's going to be super important with Server 8, which is the changing management paradigm. With Server 8, you've taken a different look at how admins should manage Windows Server.

Snover: Absolutely. In the past, you bought a server, and full server was the default. You got a GUI with it. There was Server Core, and a few specialized people would use Server Core, but there were a lot of issues with it. So with each release we've invested in Server Core -- made it better and better, made it able to support more roles, improved its manageability.

With Server 8, we're now confident enough to say that Server Core is the preferred deployment option. Full Windows Server is still there as a compatibility mode, but by and large we want everybody to use Server Core, which is basically to say a "headless server."

We still support GUIs -- we're not walking away from GUIs. GUIs are what make the company great. GUIs help customers, but those GUIs should run on the client, and the client consumes as much CPU and as much memory as you want -- it's a client. Obviously you don't want that on a server. And then layer that GUI on top of PowerShell, remote PowerShell, so that it can do multi-machine management, and anything I can do from the GUI, I can then automate.

Otey: So you're saying out of the box, Server Core is going to be the default installation option. But in the past, it was difficult to switch back and forth between Server Core and the full installation. Has that changed?

Snover: Yes. Why are we confident? Basically, three answers. In the past, you chose Server Core or full Windows Server -- and if you made a mistake, you started over. Now you can go from Server Core to full Server and back again. And there's something in between: with full Server, you can take off the Metro shell and Internet Explorer. You can still run GUIs, you can still run Server Manager, but you launch them from the command line. This gives you many of the benefits of Server Core in terms of reduced footprint and reduced servicing, which means it takes fewer patches. That helps the people who haven't been able to make the full transition -- maybe the admin isn't fully up to speed on PowerShell or remote management, or an application gets in the way. Often what we've found in our compatibility tests is that an application will require the GUI for installation but not for operation.
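In the shipping product, that switching is just feature install and uninstall; a sketch of the three states (each change needs a restart):

```powershell
# Server Core -> full server with a GUI.
Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart

# Full server -> the in-between "minimal server interface":
# keep the management infrastructure, drop the shell.
Uninstall-WindowsFeature Server-Gui-Shell -Restart

# All the way back to Server Core.
Uninstall-WindowsFeature Server-Gui-Mgmt-Infra -Restart
```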

 

I mentioned three things -- that was one: it's now safer; we've reduced the risk. The second is manageability. In the past, PowerShell 1.0 shipped with about 130 cmdlets; PowerShell 2.0 shipped with 230 cmdlets; and now we ship with more than 2,300 cmdlets -- a tenfold increase.
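You can see that growth on any given box, though the exact count varies with which roles and modules are installed:

```powershell
# Count every cmdlet and advanced function the session can
# discover across all installed modules.
(Get-Command -CommandType Cmdlet, Function | Measure-Object).Count
```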

And now, you can really do full management of the box locally. And if you want to, we now have remote management. In the past, Server Manager couldn't remotely install a role. But now you can. You still use the GUI, but you do it remotely.
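Remote role installation landed in the server cmdlets themselves; a hedged example (the server name and role are placeholders):

```powershell
# Install the Web Server role on a remote machine -- no local
# console session needed.
Install-WindowsFeature -Name Web-Server -ComputerName "Server01" `
    -IncludeManagementTools
```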

The third thing is role availability. There were certain roles that required full Server. Now, more and more of those roles run on Server Core. And more importantly, the Denali release of SQL Server [SQL Server 2012] runs on Server Core. So we're feeling pretty confident. This is certainly one of the strong messages we have for everyone in the community, for the ISVs: Love the GUI, just don't run it on the server. Run it on the client, and use PowerShell remoting to the server.

Deuby: Much of Server 8 is focused on helping customers build their own private cloud -- and certainly it will be used as a major component of the public cloud as well. Are there any enhancements to identity that help with that integration, now that people are building private clouds and moving toward hybrid models? The ability to have some portability between the two certainly involves Active Directory Federation Services (AD FS). Has anything been done in that area?

Snover: The big investment there is in the area of roles-based administration. A lot of this isn't just the technology itself. I recently gave a talk in Japan in which I was trying to explain our investment in continuous availability. I said that basically we're trying to deliver three things: more 9s, more 9s per dollar, and easier 9s. Which is to say, you can have all this capability, but it has to be something regular people can put together. If it requires the world's best admins -- at their best, fully trained -- then you're not going to get that continuous availability. It's the same sort of thing in the area of identity and access control. We have these great capabilities, but sometimes the reaction is, "What? Do I understand this?" One of the big things we've done in Server 8 is take the roles-based access and . . . .

Deuby: You're talking about flexible access control.

Snover: Exactly. The ability to say, "Hey, these roles -- I want to automatically identify these files with a set of attributes, high business impact, who owns it, et cetera," and then say, "Hey, you can have read access if you belong to this department." So it's not just identity and group but a richer set of access rights.
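This shipped as Dynamic Access Control, whose building blocks are claim types and central access rules in Active Directory. A heavily hedged sketch of the cmdlet sequence -- the names here are invented, and the resource-condition and ACL expressions (omitted) are where the real policy lives:

```powershell
# A claim sourced from the user's AD "department" attribute.
New-ADClaimType -DisplayName "Department" `
    -SourceAttribute "department"

# A central access rule; in practice you attach a resource
# condition and a claims-aware ACL to it.
New-ADCentralAccessRule -Name "Finance documents"

# Bundle rules into a policy that file servers can then apply.
New-ADCentralAccessPolicy -Name "Corporate policy"
Add-ADCentralAccessPolicyMember -Identity "Corporate policy" `
    -Members "Finance documents"
```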

Otey: Windows 8 and Server 8 share the same kernel, right?

Snover: Yes, always. And that's why -- same kernel, same GUI, there's one Windows. It takes on different flavors, but there's one Windows. Some people ask, "Why does Windows Server adopt the Metro UI?" I don't understand the question. There's only one Windows. So if Windows has a new UI, Windows Server has a new UI. And I get why you might not want that on your server, which is why we have Server Core. But it's actually quite a great UI on the client desktop, especially if you have touch. They've done some great stuff; it's just that you wouldn't want it consuming the resources of your server. One thing I try to point out is that if you have SQL Server, you want every single transistor delivering transactions, because that's its job. When it comes to GUIs, it's the opposite: the GUI should consume all available CPU cycles and memory to deliver the experience, because that's all about you -- and that belongs on the client.

Otey: A lot of administrators aren't familiar with PowerShell. They've struggled to get into it; they kind of know Windows shell scripting, maybe they know VBScript, but PowerShell is a bit daunting to them. What advice can you give those admins to help them adopt PowerShell, move toward it, and take advantage of it?

Snover: PowerShell is the glue. We've glued things together; we deal with the world as it is, and it's a messy world. Ultimately, we're trying to drive to these cmdlets -- high-level, task-oriented abstractions that allow people to think about what they want, type it, and get it. Ultimate perfection would be a Do-MyJob or Order-MeAPizza cmdlet. Obviously we're not going to get there. But if you can think about what you want, type it, and get it, PowerShell is very easy. The fact that you have to type it isn't a big issue -- it's a different input device, but that's not the issue. You think about something; you type it and get it. In the past, you had to do some COM programming, or find some WMI classes, or invoke some command-line tool and parse the output -- which can get pretty rough. Some people just love that stuff and are very successful at it. But ultimately a lot of people just want to type it and get it. So this is where the 130, 230, 2,300 cmdlets come in.
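The contrast he draws is easy to show: the old way meant invoking a console tool and scraping its text, while a cmdlet hands back objects ready for the pipeline. A small illustration:

```powershell
# Old style: run a console tool and parse its text output.
sc.exe query | Select-String "SERVICE_NAME"

# Cmdlet style: ask for what you want and get objects back,
# ready to filter, sort, or pipe onward.
Get-Service | Where-Object { $_.Status -eq "Running" } |
    Sort-Object DisplayName
```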

Deuby: I know there are features -- for example, in Active Directory (AD) I think it's called a cmdlet window -- where you can see the commands going past as you're manipulating users and the like, giving you feedback on what the GUI is doing. And as we move, with varying degrees of comfort, toward a cloud computing era, one of the key tenets is automation. If you're going to have any kind of automation that involves Microsoft products, then for the IT pros out there, you have to learn PowerShell.

Snover: We're working on something -- which will ship before Windows 8, out of band -- that's a script-sharing facility. It allows people to say, "Hey, I'm interested in something around virtualization or Exchange," and it has a search aggregation service that looks at all these various script repositories, aggregates them, and delivers the results, along with a reputation service. So you can say, "That looks good," and then copy the script and make it your own. It's very clever technology. In fact, you can configure it so that you can set up a local script-sharing repository as well. So a big company can say, "Point this thing at my script repository -- I want you using ours, not the community's."

 

There are some big challenges for the IT pro community. The cloud is wonderful, but it requires changes on the part of IT pros. A bunch of people are going to prosper through this change; others will not. It's all about automation. I like to joke -- but it's true -- that Bill Gates will pay any of us a million dollars a year if we can produce enough value. The trick is, how do you produce enough value? If you just take a GUI and a mouse and go at it, it's hard to differentiate a really smart guy from a not-so-smart guy with a mouse. Can you click faster? Automation is basically a skill amplifier; you're able to be much more productive. That's the test of a technology: put it in front of a junior person and in front of an expert, and see how much of a delta there is. PowerShell acts like a big amplifier. People who are skilled can produce a lot more -- a ton more productivity.

Sean writes about cloud identity, Microsoft hybrid identity, and whatever else he finds interesting at his blog on Enterprise Identity and on Twitter at @shorinsean.
