In early 2003, I had what I still consider to be my greatest meeting ever, with two key figures from the development of Windows NT, Mark Lucovsky and David Thompson. Lucovsky, of course, was one of NT’s key architects, and he joined Microsoft along with Dave Cutler and the original wave of ex-Digital employees. Thompson led Microsoft’s LAN Manager project before joining the NT team and taking over the development of NT's networking subsystem. This is their story.
Note: This week I’m taking my first actual vacation in over 10 years. Each day while I’m gone, I’ll be revisiting classic SuperSite articles from the past with additional commentary and imagery. This one is special and will be quite long, but I think worth it. Here, you’ll find the print magazine articles and links to the original SuperSite articles that came out of these meetings. But stick with it, I’ve got some neat additional material here as well.
For example: These articles were fact checked at the time by Mark Lucovsky and Dave Thompson!
Here’s how it all began
During a recent trip to Microsoft's Redmond campus with Janet Robbins and Mike Otey, we had the chance to sit down and chat with two of the most notable figures in the history of Windows, Mark Lucovsky and David Thompson. For those of you not familiar with the early days of Windows NT, known then simply as NT, both Lucovsky and Thompson played key roles in the development of this important software project. Mark Lucovsky, Distinguished Engineer and Windows Server Architect at Microsoft, joined the company with the original wave of ex-Digital Equipment Corporation (DEC) employees that accompanied NT architect Dave Cutler. Known primarily for his unusual ability to grok how the thousands of components in NT work together, Lucovsky is widely hailed for his technical acumen and his early efforts to change NT from an OS/2-based system to one that ran 32-bit Windows applications. David Thompson, Vice President of the Windows Server Product Group, joined Microsoft in 1990 and led an advanced development group in the company's LAN Manager project before joining the NT team later that year. There, Thompson guided the development of NT's networking subsystem, ensuring that the product would work not just with Microsoft's products but with the outside world.
"We came together as a group in November 1988," Lucovsky told us, noting that the first task for the NT team was to get development machines, which were [then] top-of-the-line 25 MHz 386 PCs with 110 MB hard drives and 13 MB of RAM. "They were ungodly expensive," he said, laughing. The first two weeks of development were fairly uneventful, with the NT team using Microsoft Word to create the original design documentation.
Mark Lucovsky, 2003
Finally, it was time to start writing some code. "We checked the first code pieces in around mid-December 1988," Lucovsky said, "and had a very basic system kind of booting on a simulator of the Intel i860 (which was codenamed "N-Ten") by January." In fact, this is where NT actually got its name, Lucovsky revealed, adding that the "new technology" moniker was added after the fact in a rare spurt of product marketing by the original NT team members. "Originally, we were targeting NT to the Intel i860, a RISC processor that was horribly behind schedule. Because we didn't have any i860 machines in-house to test on, we used an i860 simulator. That's why we called it NT, because it worked on the 'N-Ten.'"
The newly named NT team had a basic kernel mode system up and running on the simulator by April 1989. "We started with five guys from DEC and one from the 'outside' (i.e. Microsoft), a guy named Steve Wood," Lucovsky said. "And we stayed a tiny group for a long time, through the summer. We thought, 'How hard could it be to build an OS?' and scheduled 18 months to build NT. But we had forgotten about some of the important stuff--user mode, networking, and so on."
By late 1989, the NT group began growing. They added a formal networking team and expanded the security team beyond a single individual who, incidentally, had also been previously burdened by file system and localization development. "We grew that first year to 50 people or so," Lucovsky said. "And within a year, we finally had the first functioning i860 prototypes, so we could use those instead of the simulators. We started looking at context switch times, to get an idea of how well it would perform. It became obvious almost immediately that the i860 would never work out. So we started looking at the MIPS architecture, another RISC design."
In December 1989, the NT team made the decision to ditch the i860 and target the MIPS R3000 chip instead. "Within two or three months, we were booting NT on real hardware in Big Endian mode," Lucovsky told us, "and our architecture really paid off. We had designed NT to be portable, and we proved it would work almost immediately when we moved to MIPS. We made the change without a lot of pain."
By this time, the NT team started expanding rapidly, with most of its members now coming from the ranks at Microsoft. The graphics team was greatly expanded once a new style of doing graphics was created. They also started an NT port to the Intel i386, which was the mainstream PC processor at the time, but Lucovsky explained why it was important to the team that they didn't target the i386 initially. "We stayed away from the 386 for a while to avoid getting sucked into the architecture," he said. "We didn't want to use non-portability assumptions." If they had targeted Intel's volume chip from day one, he said, they would have had a higher performing system initially, but it would have hurt NT in the long run, and made it harder to pursue new architectures as they did recently with the 64-bit Itanium versions of Windows Server 2003.
"By the spring of 1990, we had the MIPS version limping along and we started the 386 version in earnest," Lucovsky said. "It was another huge growth spurt." That May, Microsoft released Windows 3.0 and, suddenly, the world took notice. Windows was a smash success, and the obvious future of PC-based graphical computing. "We started looking at Windows 3.0 and said, 'What if, instead of OS/2, we did a 32-bit version of Windows?'" Lucovsky noted, casually throwing out the question on which the next decade of computing hinged. "Four guys--Steve Wood, Scott Ludwig, a guy from the graphics engine group, and myself--looked at the 16-bit Windows APIs and figured out what it would take to stretch them to 32-bit. We spent a month and a half prepping the API set, and then presented it to a 100-person design preview group to see what they thought."
The original pre-release SDK for the Windows NT Win32 APIs
The key characteristic of the new API, eventually named Win32, is that, though it was a new API, it looked and acted just like the 16-bit Windows APIs, letting developers easily move to the new system and port their applications. "We made it possible to move 16-bit applications to NT very easily," Lucovsky said, "and these applications could take advantage of the unique features of NT, such as the larger address space. We also added new APIs that weren't in the 16-bit version. We added major new functionality to complete the API, making it a complete OS API, but we did this using a style that would be familiar to the emerging body of Windows programmers."
The reaction within Microsoft was immediate. "They loved it," he said, "when they saw how easy it would be. It was basically Windows on steroids, and not OS/2, which used a completely different programming model." Making NT a 32-bit Windows version instead of an OS/2 product, however, introduced new issues, not all of which were technical. Microsoft had to get ISV and OEM approval, and of course alert IBM to the change. "We did an ISV preview with IBM, and had this deck of about 20 slides, and we said, 'look, this is what we're going to do.' At first, they thought Win32 was a fancy name for OS/2. Then you could just see it on their faces: 'Wait a second, this isn't OS/2.'"
The decision to drop OS/2 for Windows forever damaged the relationship between the two companies. "But we had executive approval, and started the port," Lucovsky said. "So instead of working on an OS/2 subsystem for NT, we picked up Win32." At that moment, he said, the product became Windows NT.
NT's modular architecture paid off during this change as well. "Thanks to our microkernel architecture, with the kernel decoupled from application environments like POSIX and Win32, we didn't have to change the kernel or start a new programming effort," Lucovsky told us. "The deep guts of the scheduler didn't have to change. We had C command line applications up and running within two weeks. This was September 1990."
Thompson elaborated on the importance of NT's foundations. "Our core architecture is so solid that we were able to take NT from 386-25's in 1990 to today's embedded devices, 64-way, 64-bit multiprocessor machines, and $1000 scale-out server blades. We've been able to deliver a whole array of services on it."
September 1990, truly, was the turning point for Windows NT. Not coincidentally, that's also when Dave Thompson, previously heading Microsoft's LANMAN for OS/2 3.1 advanced development team, joined the NT team. "We threw the switch," Thompson told us, "and the team went from 28 to about 300 people. We had our first real product plan."
Dave Thompson, 2003
The first version of Windows NT, Windows NT 3.1, was released in July 1993 and named to match the version number of the then-current 16-bit Windows product. That NT version featured desktop and server editions and distributed security in the form of domains. Since then, the NT team has worked on a progression of releases, all developed on the same underlying code base.
The next release, Windows NT 3.5, was code-named Daytona and shipped in September 1994. "Daytona was a very rewarding project," Thompson said. "We focused on size and performance issues, and on 'finishing' many of the first-release features of 3.1. Daytona also had significant functional improvements and enhancements." The original themes for Daytona were size, performance, compression, and NetWare compatibility. Two of those goals were emblematic of the time: DoubleSpace-style compression was a hot topic in the early 1990s because disk space was at such a premium, and NetWare was the dominant network operating system of the day. "We eventually dropped compression," Thompson said, "but the NetWare port was strategic. Novell was ambivalent about the NT desktop – they didn't know if they wanted to build a client. We offered our assistance, but they kept messing around and ... well. We did our own. And it just blew them away. Ours was the better NetWare client, and customers used ours for years, even after they finally did one. That client enabled the NT desktop, because NetWare was the prevalent server in the market. We wouldn't have been able to sell NT desktops otherwise."
Daytona also benefited from new compiler technology which enabled Microsoft to compress the code size and enable realistic NT desktops on lower-end systems than the original version. "The results were measurable," Thompson said.
Windows NT 3.51 was dubbed the PowerPC release, because it was designed around the PowerPC version of NT, which was originally supposed to ship in version 3.5. But IBM constantly delayed the PowerPC chips, necessitating a separate NT release. "NT 3.51 was a very unrewarding release," Thompson said, contrasting it with Daytona. "After Daytona was completed, we basically sat around for 9 months fixing bugs while we waited for IBM to finish the PowerPC hardware. But because of this, NT 3.51 was a solid release, and our customers loved it." NT 3.51 eventually shipped in May 1995.
Fittingly, the next NT release, Windows NT 4.0, became known as the Shell Update Release (SUR), another challenging task that would once again prove the benefits of NT's modular architecture. "We wanted to build a desktop that had the 95 shell but used NT technology," Lucovsky told us. "We eventually moved the Win32 GUI components and hosted them as an in-process driver. Performance was one side effect. We had had problems taking that API and running it in a different process. So moving the code to the same context as the runtime solved a lot of issues. We didn't have to do deadlock detection for GDI and USER. It was significant work, but it solved a lot of headaches." NT 4.0, a watershed release for the product, shipped in July 1996.
With the next release, Windows NT would lose the NT name and become, simply, Windows. Thompson says the decision came from the marketing team. "A guy from the Windows [9x] marketing team moved over to NT marketing and said we should use Windows everywhere. We were all uncomfortable with the name change at first, because NT had a solid reputation. But because of the reliability push with Windows 2000, people started talking about how much better Windows 2000 was than 'that old NT stuff,' even though it was the same architecture. So it was actually kind of fortuitous how it happened." Incidentally, Windows 2000 didn't have a codename "because Jim Allchin didn't like codenames," Thompson says.
Since the completion of Windows 2000, the biggest decision the Windows team made was to split the client and server releases with the Whistler products, which became Windows XP and Windows Server 2003. "This lets us focus on the server customers, who want it rock solid, rather than right now," Thompson told us. "Desktop software has to ship in sync with [PC maker] sales cycles. There is no holiday rush with servers."
One element about the NT family of operating systems--which evolved from Windows NT to Windows 2000, XP, and, now, Windows Server 2003--that has remained unchanged over the years, though the details have changed dramatically, is the build process. Somewhere deep in the bowels of Microsoft, virtually every day, at least one Windows product is compiled, or built, into executable code that can be tested internally by the dev, or development, teams. For Windows Server 2003, this process is consummated in Building 26 on Microsoft's Redmond campus, where banks of PCs and CD duplicating machines churn almost constantly under the watchful eyes of several engineers.
NT Build Lab!
The details of NT--excuse me, Windows--development have changed dramatically since the project first started in the late 1980s. "Back in the early days, we started with 6 people," Microsoft Distinguished Engineer and Windows Server Architect Mark Lucovsky told me. "Now there are 5000 members of the Windows team, plus an additional 5000 contributing partners, generating over 50 million lines of code for Windows Server 2003. Getting all those people going in the same direction, cranking out code, is an enormous task. Building the results of their work, compiling and linking it into the executable and other components that make up a Windows CD is a 12 to 13 hour process that is done every day of the week. It's an enormous task, the biggest software engineering task ever attempted. There are no other software projects like this." And Microsoft compiles the whole thing--all 50+ million lines of code, almost every single day, he said. "We're evolving the development environment all the time," Lucovsky noted.
"When we turn the crank, we compile the whole thing," he said. "We have to be able to reproduce the system at any point in time as well. So developers check in code, we press a button, and out comes a system. We should be able to reproduce that [build] three years in the future, using the various tools, compilers, and scripts we used at that time."
Signed Software Release Forms for Windows 2000
David Thompson, corporate vice president of the Windows Server Product Group at Microsoft, elaborated on the process. "The key here is that we built up the system over the years, advancing it in three dimensions," he said. "First is the product itself. Second is the way we engineer the product. And third is the way we interact with a broader and broader set of customers. The product evolution is pretty straightforward. The source code control system we use now is new, because we really pushed the scale of the previous version with Windows 2000. Mark [Lucovsky] personally led the development of the new system and introduced it post-2000. We started with some acquired technology. We now do have a staged build [for the first time]. But every day the [staged builds] are rolled up into the total build. So we can scale but maintain stability – we know where we stand, every day."
Just eat it: Microsoft serves up dog food
Lucovsky reminisced a bit about the early days, when the first NT prototypes were built in his office with only a single person overseeing the process. That person would simply send out an email to the NT team when a new build was ready, and then 50 people or so would "eat their own dog food," testing the build on their own systems and running stress tests. "I used to just walk around the building and write down the problems we found," Lucovsky said. "That's how it was pre-NT 3.51. Now we have 7 build labs. Dave [Thompson] has his own [build lab] for the 1200 people he oversees. The main build lab cranks out the official build, which goes out to thousands of people daily. Notification is automatic, and is sent out in multiple stages using the backbone servers across the campus. It's all automated. Those little things have now scaled up."
"Originally, we had a certain time of day [up to which time] we could check code in and then we stopped," Thompson said. "After that, we threw the switch and built the new system. Eventually, we grew the team to 85 people and serialized the process for more control. At one point, to drive for stability, [NT architect] Dave Cutler--who we all worked for---ran the build lab for about a week, and he required people to personally write their checkin requests on a whiteboard in the lab. He forced it into a mold. I sat in there for a while too. One day I accepted 85 check-ins, the most we had ever had to that point. Now we can take in over 1000 every day. It's a completely different scale. Even the whiteboard is electronic--Web based, actually--now."
"There are no other software projects like this," Lucovsky said, "but the one thing that's remained constant [over the years] is how long it takes to build [Windows]. No matter which generation of the product, it takes 12 hours to compile and link the system." Even with the increase in processing horsepower over the years, Windows has grown to match, and the development process has become far more sophisticated, so that Microsoft does more code analysis as part of the daily build. "The CPUs in the build lab are pegged constantly for 12 hours," he said. "We've adapted the process since Windows 2000. Now, we decompose the source (code) tree into independent source trees, and use a new build environment. It's a multi-machine environment that lets us turn the crank faster. But because of all the new code analysis, it still takes 12 hours."
Dogfooding their code has always been a key requirement of the NT team, Thompson told me, and an integral component of Microsoft's culture. "This is one of the things we've always done, back to the earliest days," he said. "We were just joking about this today, actually, talking about our email program. Back when we first got NT running on desktop [PCs], our email program wouldn't run because it was a DOS application, and we didn't have DOS compatibility mode working yet. So I ported our internal email app, WizMail, to Win32 so we would be able to use only NT systems."
"When you are forced to use the system yourself, you see bugs and you see the performance issues," Thompson added. "And you'd go and find the person responsible for the problem and “ask” them to fix it." One of Thompson's primary responsibilities when he joined the NT team was to deliver the file server over to NT so that it could be used as the source code server. That required a moment of faith, especially since NT was then using a prototype version of the NTFS file system. "The networking group took this very seriously," he said, "and made sure it was ready for internal deployment. Once it was rolled out, we never backed away. Obviously, if the file server goes down, it's a disaster. So it was a big moment for us, getting over that hump."
Later, as the development of Windows NT 4.0 wound down, Thompson's team took on Active Directory (AD), Microsoft's first directory service, which debuted publicly at the Professional Developers Conference (PDC) in 1996. "Before AD we had NT domains for our infrastructure," he said, "and going to AD was even more complex. We deployed AD very early, first with our team, and then the wider Windows group. Then we threw the switch on Redmond [campus] AD in April 1999, [some four months before Windows NT 5.0/Windows 2000 Beta 1 shipped]."
Microsoft rolled out AD to the rest of the company in stages, Thompson said, using careful planning. The campus went to a multi-forest AD topology with Windows Server 2003 last year. "With all of the server and infrastructure servers, we always do a complete deployment internally, then push it out to the JDP (Joint Development Partners), who test and deploy it in production in over 250 usage scenarios. We get bug reports, feature feedback, and complex scenario testing that really proves the product."
Windows Server 2003 hit 99.995 percent availability at the Release Candidate 1 (RC1) stage last summer, and the Microsoft.com Web site was fully deployed on Windows Server 2003 when RC2 rolled out in November 2002. "Heavy usage internally and by close customers is key," Thompson told me, "and we have a more mature view of what the product is now [compared to the early days]. We're not just shipping bits in a box, but are also shipping a wide range of complementary tools, products, services, and documentation." And Thompson explained that the teams working on Outlook 11, Exchange Server 2003 ("Titanium") and Windows Server 2003 are all working much more closely together to implement complete end-to-end scenarios that meet customer needs. In the past, these products were often developed more independently.
Are you being served? A look at product maintenance
"Servicing has definitely matured over the years," Lucovsky added. "We do a lot of work figuring out the right mix of service packs, hot-fixes, [product] development branches, betas, and JDP customers for each product." (More information about development branches can be found in the next section.)
"We've really extended the time that we service our products," Thompson said, because when Microsoft ships a server product, customers may use it for up to ten years. So-called volume, or mainstream, service lasts seven years, but the company has constantly evolved the way it supplies updates and fixes over time. First, Microsoft has to be sure that bug fixes are applied to all of the applicable development branches. "Our work in rapidly addressing security vulnerabilities means that we now aggressively issue hot-fixes when we can," Thompson noted. "As well, it used to be that [service packs] were flexible, a way that we would deliver features as well as fixes. But customers made it clear that they wanted bug fixes only [in service packs]. That leads to an interesting question, though: What, exactly, is a bug? Is a missing feature a bug? Customers often have different views themselves. But [Windows] NT 4 SP3 was the end [of major new features in services packs].
One side effect of trunk servicing is that Microsoft must maintain test environments for every permutation of its recent operating systems. That means that the final, or "gold," release of Windows 2000 is one branch, Windows 2000 SP1 is another, Windows 2000 SP2 is another, and so on. "And dogfooding is important to proving service packs, too. In our IT organization, we maintain a Windows 2000 infrastructure just so we can do live rollouts to Windows 2000 systems and test them in a production situation," Thompson said. "It's a big expense, but worth it."
Hot-fixes are treated as narrow releases that should fix only one specific problem and not affect other parts of the system. Thompson said that customers should generally only apply a hot-fix if they're affected by the problem the fix addresses. However, security fixes are another issue altogether. "We expect all of our customers to install the security fixes," he said, "so we are very careful with them, and do the right kind of testing. They are Generally Deployable Releases (GDRs), just like service packs."
Trunks, trees and branches
As noted earlier, the various Windows versions require a series of product development code forks, where each different Windows product "branches" off the main development "trunk" over time. So each Windows release builds off the last, and at least two different versions--Windows Server 2003 and Longhorn, at the time of this writing--are in simultaneous development. Because Windows Server 2003 was split from XP, the server product basically builds on XP. Longhorn, a client release that will succeed XP in a few years, is actually building off the server branch code base, and not XP as you might expect.
"The mechanics of doing this are mind-numbing," Lucovsky told me. "We have a main branch of code for the current Windows version, and that branch becomes the source base for hot-fixes and the next service pack. Once we spit out a service pack, that becomes a branch and now we have two branches we have to test for hot-fixes and service packs. We can't tell customers to install, say, SP1 and then do this hot-fix. And this is going on for every [Windows] release, so some have 2 or 3 service packs, many hot-fixes, and many security fixes. Every one of these is a managed collection of 50 million lines of code. It's a pretty big accounting issue."
Additionally, for each main branch in active development, there are roughly 16 team-level branches that allow team-level independence and parallelism while working against a common mainline branch. Each team maintains a complete build lab environment that builds an entire release, including the team's latest changes, and periodically integrates its tested changes back into the associated main branch so that others can see its tested work.
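The team-branch flow described above can be sketched with modern git commands. This is purely an illustrative analogy: the NT team's source control system was internal Microsoft tooling, not git, and the branch and file names here are invented.

```shell
# Illustrative analogy only: teams work on their own branch, build and test
# there, then periodically integrate tested changes back into the main line.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=lab -c user.email=lab@example.com \
    commit -q --allow-empty -m "main branch baseline"
main=$(git symbolic-ref --short HEAD)   # "main" or "master", per git version

# A team-level branch: independent, parallel work against the common trunk.
git checkout -q -b team-networking
echo "tested change" > tcpip.c
git add tcpip.c
git -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "networking: tested change"

# Periodic integration: tested work flows back so other teams can see it.
git checkout -q "$main"
git merge -q --no-edit team-networking
```

In the NT model, the gate on that final merge was much heavier than a plain merge: each team branch carried its own complete build lab that had to produce an entire, working release before integration.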
Going to War: Triaging Bugs in the War Room
During the mad dash towards RTM, the heartbeat of the project is the War Room, where the War Team meets two to three times daily, five days a week--six days a week now that Windows Server is in its final days of development. "The War Team goes over reports and metrics to see where the project is at every day," Thompson told us, an understated explanation that did little to prepare us for the horrors of the War Room. "Everything is automated now, but back then we came in and passed around paper reports that showed us how we were doing. There were, maybe, 15 to 20 people in the room. Now it's very different."
It sure is.
For Windows Server 2003, the War Room is run by Todd Wanke, who we eventually found to be an amazingly likeable guy. However, in the hour-long War Room sessions, Wanke rules with an iron fist, asking trusted lieutenants for advice here and there, but moving the process inexorably forward with little patience for excuses or, God forbid, product team members who don't show up for War Team.
Here's how it works. Every morning at 9:30 a.m., representatives from various Windows Server 2003 feature teams meet to triage bugs. They file into conference room 3243--whose exterior sign has been covered up by a handwritten note that reads "argument clinic"--in building 26. There's a large conference table in the center of the room, but most of the participants have to stand, and the room is always overflowing with people. On the day we attended a War Team meeting--the first time any outsiders were allowed to view the inner sanctum for Windows Server, and only the second time overall during the entire development of NT and Windows--the team progressed through about 50 bugs, most of which were simple branding problems, though I've agreed not to discuss the specifics of any bugs discussed that day. (We attended War Room very late in the development of the product, when the biggest outstanding issue was the last-minute name change from Windows .NET Server 2003 to Windows Server 2003.)
Every bug is logged in an incredible bug tracking system, accompanied by a dizzying array of information about how the bug was found, which customers, if any, were affected, and a complete history of the efforts made to date to eradicate the problem. Wanke moved quickly through the bugs, calling out to members of specific feature teams to explain how the fixes were progressing. If there are one or more bugs in IIS, for example, a representative of the IIS team needs to be present to not only explain the merits of the bug, but whether customers are affected, how the fix might affect other parts of the system, and how soon it will be fixed. This late in the development process, bugs are often passed along, or "punted," to the next Windows release--Longhorn--if they're not sufficiently problematic.
The atmosphere in War Room is intimidating, and I spent most of my time in the room, silent and almost cowering, praying that Wanke wouldn't turn his attention to me or my group. Heated argument and cursing are a given in War Room, and the penalty for not being on top of your bugs is swift and cruel ridicule from the other team members. The most virulent treatment, naturally, is saved for those foolish enough to blow off a War Room meeting. On the day I attended, one feature group had four of its bugs punted to Longhorn because they had failed to show up for War Room. When someone argued that they should be given another day, Wanke simply said, "Fuck 'em. If it was that important, they would have been here. It's in Longhorn. Next bug."
Once the hour-long meeting was over, we sat down and spoke with Wanke, who was almost a completely different person in private. "You run a mean meeting, Todd," I told him, as we sat down. Wanke's background includes stints with NCR, American Honda and an unspecified and mysterious-sounding security-related assignment as a US government contractor, and he's been with Microsoft for nearly eight years. Before joining the Windows team, Wanke was one of the original architects of the Microsoft.com Web site and he spent three or four years as an "Internet guy" at the company before all of Microsoft found the Internet religion. In our meeting, Wanke explained how he fell into his new job, what he does now at Microsoft, and how the War Team works.
"My job is to manage the day-to-day operations with regards to shipping Windows," he said. "I'm responsible for 8000 to 10,000 developers, program managers, and testers, and I have to make sure they're doing the right things every day."
War Team, he said, consists of a very broad set of people within the Windows team, all of whom are responsible for different areas of the project. They are test leads with responsibility for such things as TCP/IP and other low-level technologies, some developers, people that do the build every day, people that do build verification tests, and others. "Every area of the project is represented," he told us. "The daily marching orders [for the Windows Server team] come from War Team, and also from the broad mails I send out. These emails are almost always Microsoft confidential, or even higher than that--very confidential emails sent only to a much smaller group of people."
As we witnessed, War Room is a very structured event, occurring at the same time every day and lasting exactly one hour. The team members look at the same bug system every day, and often go over the same bugs until they are fixed. "If you're not there, it's not good," he said. "Microsoft people have a strong sense of ownership for the product and they want to make sure the right thing is happening. But if people aren't there, I lay into them. I'm the ass kicker."
In addition to the morning War Room meeting, the Windows Server team holds an afternoon meeting from 2 to 3 p.m. and, if needed, another one from 5 to 6 p.m. The daily build usually starts at 4:30, but can be delayed to 6, so this last meeting gives the team a chance to go over any final bug fixes that will be added to that day's build. "The structure is very important," he said, "and we need to know where the build is at all times. We look at the quality of the build, various stress levels, and all of the things that run overnight, anything that we need to follow up on. We get detailed reports, and review everything that goes into the project."
In addition to the main War Team, each of the feature teams has its own War Room, so there could be as many as 50 such meetings each day, each going over a specific component of the system. These other War Room meetings occur at 8 a.m. every day. When a bug fix passes the local War Team process, it's introduced at Wanke's meeting. "They can't come into War Room unless they're fix-ready," Wanke said. "They must be fix-ready." Because there isn't a single person making decisions, there is a system of checks and balances through which each bug fix passes before it's introduced into the build.
The complexities of building Windows are staggering. "To simplify things, let's say Windows consists of 100,000 files," he said. "Usually, there are seven source code depots, each containing an exact replica of all of the sources, though at this point, we're down to just one. Every development group has its own depot, so that when a developer writes a fix, he can compile it into the depot for testing. If the build compiles locally with his fix, they can test it there and then check it into the main depot in the main build lab."
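Wanke's description of depots and "buddy builds" amounts to a branch-and-merge workflow: a developer compiles and tests a fix in a group's own copy of the sources before it reaches the main build lab. Microsoft's internal tool at the time was Source Depot; the sketch below uses git purely as a modern analogy to illustrate the same flow (the repository paths, branch names, and file are invented for the example):

```shell
# Illustrative only: git standing in for Microsoft's internal Source Depot.
# A fix is built and verified in a private branch (the "group depot")
# before being checked into the main line (the "main build lab").
set -e
rm -rf /tmp/depot_demo && mkdir -p /tmp/depot_demo && cd /tmp/depot_demo
git init -q main && cd main
git config user.email dev@example.com && git config user.name dev
echo "base" > kernel.c
git add . && git commit -qm "baseline build"
base=$(git rev-parse --abbrev-ref HEAD)   # default branch name varies by git version

# Developer works on the fix in isolation
git checkout -q -b netfix
echo "fix" >> kernel.c
git commit -qam "bug fix"

# "Buddy build": confirm the change builds/tests locally before check-in
grep -q fix kernel.c && echo "buddy build passed"

# Only a verified fix goes into the main depot
git checkout -q "$base"
git merge -q netfix
git log --oneline
```

The point of the structure, as Wanke describes it, is that a broken change is caught in the group's own copy rather than taking down the build that 8,000 people depend on.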
Not every build is successful, of course. Occasionally, Windows Server suffers from what Microsoft calls "build on the floor," when a fix breaks some other part of the system, rendering the build unusable. "That's brutal," Wanke told us. "There was a point about a year ago, when we didn't get a build out for seven days. We had to send an email to the product group executives at the company explaining the problem," and the company entered into its private version of Defcon-5. "All the red flags went up," he said. "It's very ingrained in the developers not to break the build. They do their fix, do a buddy build, and then check it in. But they can't go home. We've sent out calls at 3 a.m. when the build is broken, find the developer that broke it, and get him into work right then and fix it immediately. The developers are on call 24 hours a day. There's definitely an escalation process. A broken build is considered a critical, severity-1 problem."
As the Windows Server 2003 development cycle wound down, the bug count fell dramatically, and the process was getting simpler each day. And then Microsoft announced the name change. "We just have to live with that poor decision," he told us. "They should have made it six months ago. Back then, we all agreed it was the right thing to do. But at this late stage--they brought in [CEO] Steve Ballmer to talk with all the War Teamers about why we made the change." The speed at which the team was able to fix all of the branding graphics, text, and registry entries in the system is a testament to the company's dynamic process for fixing bugs, Wanke said. The problem was that several thousand changes needed to be made, and that would normally require several thousand new entries in the product's bug tracking system. "I went out and handpicked the three best developers on the team and said, 'just go and fix it.' One developer fixed over 7,000 references to [Windows] .NET Server. Let's just say that there are people I trust, and people I don't trust. I told these guys, 'don't tell me what you're doing. Just do it.'"
Entering the home stretch
On the day that we attended War Room, on January 21, 2003, Windows Server 2003 had hit an "absolute historic low" for bugs, according to Wanke. "We're shutting down the project this week," he said. "It's done. We're going to ship it." On that day, Windows Server 2003 had just a few active bugs, and roughly a quarter to a third of those bugs were simple branding issues. "So let's say there are about 150 outstanding issues to address," Wanke told us. "Of that, we'll fix about 100. All of the bugs are severity rated from 1 to 3, plus they get a priority rating. We have a few severity-1 bugs left to fix, and those all have to be fixed for us to ship."
Todd Wanke, War Room
Wanke said that the server team had already fixed all of the known security vulnerabilities. "We're very happy about security," he said. "It's fun to see where we are [with security]. I'm personally very impressed with the work that went into it, the fixes and the thought process. We all think it's very secure. The [Trustworthy Computing] security push [last year] was a big milestone for us, and everything will be easier going forward because of it. It's easier on the developers because they all have the same mindset and goals now, the same education about best practices. There used to be different methodologies between different groups. The security push unified it. Now it's easier for everyone to communicate and see the end goal."
With the completion of Windows Server 2003 development, the development team will enter a transitional period. First, the product will enter escrow, and the build process will be frozen. That build is then deployed around the campus, including Microsoft's corporate infrastructure. "That is the final build," Wanke noted. "Then we sit on it for a period of time, during which there are no core fixes made to the product." The escrow build will also be handed out to testers and JDP members, he said.
If any issues do arise during the escrow period, the War Team makes case-by-case decisions about whether to fix the bugs. If a bug necessitates a kernel fix, a new build will be created, and escrow is reset. "A change to a core component could delay RTM," Wanke told us. "We run it prior to asking customers to, and have to run it a number of days before signing off on it. It's a long haul." Every feature team working on Windows Server 2003 must run the escrow build for 21 days without restarting before the build can be declared golden master and released to manufacturing.
But Wanke isn't worried about the exact schedule, as the outcome is finally a foregone conclusion after years of work. His team is now preparing its RTM party--outside on one of the campus' many soccer fields, weather permitting; inside a garage if not--and Wanke has other RTM-related concerns he must address, including the launch venue. "I'm working with the launch team to book a venue," he said. "They need 95 percent confidence dates." They're also talking to OEMs to ensure systems are ready for launch, to ISVs, to the marketing folks about signs and posters, and so on. "And I have to make sure that the 8000 people who deserve a ship award get one," he added.
In the end, all this dedication will result in the most secure and reliable operating system Microsoft has ever created, and it's impossible to overstate Wanke's contribution to this project. "I basically haven't missed a single War Team in a year and a half -- excluding a day or so for personal reasons," he said, "every day, six days a week at the end of the schedule. We let people bring their kids in on Saturdays, it's a family day. There's no swearing allowed on Saturdays. But you still have to be there, and we still have to make a build."
Though I understand that managing this project is a one-shot deal, due to the time and stress requirements, I just had to ask: Would Wanke run War Team on a future Windows version?
"No way," he said, laughing. "No way."
Windows: A Software Engineering Odyssey
Here's a special extra: Mark Lucovsky's presentation about the history of NT.