Since the summer of 1999, every new Microsoft software product that has come out of Redmond has used the .NET brand. The .NET brand comprises a set of Commercial Off-the-Shelf (COTS) applications that run on top of the Windows 2000 family of OSs. These applications include SQL Server 2000, Commerce Server 2000, BizTalk Server 2000, Exchange 2000 Server, Application Center 2000 Server, Mobile Information 2001 Server, and Internet Security and Acceleration (ISA) Server 2000. Microsoft also uses the .NET brand to signify an application-development architecture and methodology known as the .NET framework. Because Microsoft's intentions for the .NET brand are so broad, the technology will affect every IT infrastructure that's rooted on Microsoft technology or that uses mission-critical Microsoft applications.
When you plan your .NET infrastructure, you need to think about what security services you want to implement. A short list of essential security services should include strong authentication, data confidentiality and integrity protection (for data sent across the network and for data stored on any type of storage medium), nonrepudiation services, and antireplay protection. These services, and the technologies that provide them, are important for any infrastructure—not only .NET. Of course, .NET-specific security features also exist. However, I focus on securing the broad .NET infrastructure rather than on the security features inherent in .NET's various applications.
To implement these essential security services, you need to master a few key technologies and essential design principles. The way you set up your security zones and firewalls, Intrusion Detection Systems (IDSs), authentication, authentication delegation, public key infrastructure (PKI), and platform hardening is essential to your .NET infrastructure.
A Typical .NET Infrastructure
Figure 1, page 36, shows a typical .NET infrastructure. The components that require protection are the Web servers, the business object servers, the directories, the Certificate Authorities (CAs), the enterprise resource planning (ERP) system, the database, and the communication links between components.
Internal and external client access to the .NET infrastructure is purely Web based. A typical .NET infrastructure also includes different availability and load-balancing solutions. At the Web server level, the Network Load Balancing Service (NLBS) that ships with Win2K Advanced Server, Win2K Datacenter Server, and Application Center 2000 provides high-availability and load-balancing capabilities. The SQL Server database and the ERP systems are clustered. Notice that the infrastructure also includes a COM+ business object cluster. You use Application Center 2000 to integrate and administer this COM+ cluster and the Web server's NLBS. (For more information about high-availability solutions, see David Chernicoff, "Components of a High-Availability System," November 2000.)
At the time of this writing, an in-depth discussion about some of .NET's applications' features isn't yet possible because some details of those features are unclear. However, given that Microsoft is building the .NET infrastructure on top of Win2K, we can presume that .NET will exploit Win2K's security features. Also, given the increasing importance of Internet-based protocols, we can presume that access to .NET infrastructure components will mostly occur through standard Internet protocols (e.g., HTTP, SMTP, Lightweight Directory Access Protocol—LDAP).
Security Zones and Firewalls
When you plan your secure .NET infrastructure, you'll need to evaluate your current corporate security zones and firewalls. Your .NET infrastructure might require you to make some alterations. Even if the application you're building doesn't interact with external entities, you need to consider firewalls. Internal firewalls can protect sites or departments that have special security requirements and can restrict access to parts of your internal network (in the event that an intruder breaches the firewalls that are part of your external security perimeter).
Two common questions related to security zones and firewalls might arise as you're designing your infrastructure. The first question is, Where should I place my data and my servers? The security zone in which you should place your data and servers depends on what you're using the data or server for. Evaluate the confidentiality of the data that your servers process and store, then store confidential customer data in your trusted security zone.
If your organization uses a demilitarized zone (DMZ) in its topology, as Figure 1 shows, complications will arise. Of course, you should place only public data in a DMZ. However, the integrity of even public data requires protection. During the 2000 presidential elections, neither candidate would have wanted an intruder to alter the vote count that CNN published on its Web site. (On second thought, perhaps Al Gore wouldn't have minded.)
The second question is, How do I deal with remote procedure calls (RPCs) and Secure Sockets Layer (SSL)/Transport Layer Security (TLS) traffic that moves between entities on opposite sides of a firewall? Many commercial Web sites use SSL/TLS to provide a basic set of security services to customers. This basic service set typically includes server authentication, client authentication, and integrity and confidentiality protection for data transmitted between the SSL client and server. To deal with SSL traffic at the firewall level, you can take one of two approaches: SSL tunneling, which Figure 2, page 38, shows, or SSL bridging, which Figure 3, page 38, shows. Most commercial firewalls support SSL tunneling, which uses the HTTP CONNECT message to tell the firewall to disregard the content of a particular SSL session and to simply forward the SSL packets. In an SSL bridging setup, the firewall is SSL-aware and holds a proper SSL certificate. The firewall acts as an SSL tunnel endpoint. SSL bridging might be a good solution not only to deal with SSL traffic at the firewall level but also to enable SSL on the public (i.e., untrusted) portion of the communication channel in conjunction with non-SSL-enabled applications.
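The SSL-tunneling handshake is easy to see on the wire. The following Python sketch builds the HTTP CONNECT message a client sends to an SSL-tunneling firewall or proxy before the opaque SSL session begins; the host name is an illustrative assumption, not taken from the article:

```python
def build_connect_request(host: str, port: int = 443) -> bytes:
    """Build the HTTP CONNECT message that asks an SSL-tunneling
    firewall or proxy to open an opaque tunnel to host:port.
    After a '200 Connection established' reply, the client performs
    the SSL handshake end to end; the firewall just forwards bytes
    without inspecting them."""
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        "Proxy-Connection: keep-alive\r\n"
        "\r\n"
    ).encode("ascii")

# A browser reaching a hypothetical commerce site through the firewall:
request = build_connect_request("shop.example.com")
```

The key point the sketch illustrates is that only this first message is readable by the firewall; everything that follows is encrypted between the two SSL endpoints.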
RPCs let a client interact with a remote application. All the components involved in the .NET infrastructure that Figure 1 shows can use RPCs to communicate. Of particular interest to this discussion is the use of RPCs for intercomponent communication in Microsoft's distributed component models (i.e., Distributed COM—DCOM—and COM+) and for Active Directory (AD) replication.
RPCs are easy to set up—if no firewalls are involved. A nasty feature of RPCs is their use of dynamic inbound ports, which makes predicting which ports will be used impossible. For obvious security reasons, few firewall administrators are willing to open all possible ports. For more information about this problem, see the Microsoft article "HOWTO: Configure RPC Dynamic Port Allocation to Work with Firewall" (http://support.microsoft.com/support/kb/articles/q154/5/96.asp).
How can you deal with RPCs and firewalls for DCOM and COM+ communications? First, you should never use RPCs to connect an Internet client to your .NET infrastructure. Instead, choose an HTTP-based solution. As long as HTTP doesn't tunnel inside SSL or IP Security (IPSec), HTTP is easier to control and uses a fixed inbound port (port 80, by default). However, sometimes you need to cope with the problem of setting up RPCs across firewalls for DCOM and COM+ communications. Suppose, in the infrastructure that Figure 1 shows, you need to initiate COM+ communication between a Web server in the DMZ and a COM+ business object server in the trusted zone. In such a scenario, you can use RPCs to enable intercomponent communication through Tunneling TCP or Simple Object Access Protocol (SOAP).
Tunneling TCP adds an HTTP-based "handshake" at the start of the DCOM communication sequence. After this handshake, Tunneling TCP sends ordinary DCOM packets over TCP without HTTP intervention. Tunneling TCP relies on the RPC_CONNECT message and requires the presence of an RPC proxy. To install an RPC proxy on Win2K, open the Control Panel Add/Remove Programs applet and add the COM Internet Services Proxy (CIS Proxy) Networking Service. To configure Tunneling TCP on the Win2K or Windows NT 4.0 client side, use the dcomcnfg.exe configuration utility. For information about configuring Tunneling TCP and the CIS Proxy, see Marc Levy's Microsoft Developer Network (MSDN) article "COM Internet Services" (http://msdn.microsoft.com/library/backgrnd/html/cis.htm). Although Tunneling TCP and the CIS Proxy resolve the firewall problem, they're dependent on the Microsoft platform.
SOAP, one of the buzzwords of the .NET framework, is a fairly new technology. Microsoft, Lotus, and IBM specialists have defined SOAP as a platform-independent protocol. One of SOAP's primary benefits is that it provides a method to transport RPC traffic by using HTTP-protocol messages. In other words, SOAP lets a client use the HTTP protocol to communicate with a remote component-based application.
To embed the RPC information into HTTP, SOAP uses XML encoding. Unlike Tunneling TCP, SOAP relies on HTTP for more than an initial handshake. For more information about SOAP, see Aaron Skonnard's MSDN article "SOAP: The Simple Object Access Protocol" (http://msdn.microsoft.com/library/periodic/period00/soap.htm).
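To make the XML-encoding idea concrete, here is a minimal Python sketch of a SOAP 1.1 envelope carrying a remote method call; the method name, namespace, and parameter are hypothetical examples, not part of any real .NET component's contract:

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_call(method: str, ns: str, params: dict) -> bytes:
    """Encode a remote method call as a SOAP 1.1 envelope.
    A real component would define its own namespace and schema;
    this is only a structural illustration."""
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    call = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, f"{{{ns}}}{name}").text = str(value)
    return ET.tostring(envelope)

# The XML payload travels in an ordinary HTTP POST on port 80, so it
# passes firewalls that would block RPC's dynamic inbound ports.
payload = build_soap_call("GetOrderStatus", "urn:example-orders", {"OrderID": 1042})
```

Because the call is just XML inside HTTP, the firewall sees nothing but a Web request, which is exactly why SOAP sidesteps the dynamic-port problem described earlier.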
Another RPC-based application that's particularly interesting in the context of .NET infrastructure security is AD replication between domain controllers (DCs) of the same domain that are members of different security zones. To date, Microsoft hasn't supported the SMTP protocol for replication of domain-naming context information. As a workaround, you can set up an IPSec-based tunneling solution. IPSec lets you implement tunneling on the network layer. To tunnel IPSec through a firewall, you need to open the following ports:
- 53/TCP and 53/UDP for DNS traffic
- 500/UDP for Internet Key Exchange (IKE) traffic
- 88/TCP and 88/UDP for Kerberos v5 traffic (in case you aren't using the preshared key or certificate-based authentication methods)
Also, you need to open the firewall to support the protocol traffic for the IPSec Encapsulating Security Payload (ESP)—protocol 50—and the IPSec Authentication Header (AH)—protocol 51.
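Summarized as data, the firewall openings listed above look like this; the structure is a Python documentation sketch of the rule set, not any firewall product's configuration syntax:

```python
# Firewall openings needed to pass an IPSec tunnel, per the list above.
# (port, protocol) pairs plus two raw IP protocols.
IPSEC_TUNNEL_PORTS = {
    "DNS":      [(53, "tcp"), (53, "udp")],
    "IKE":      [(500, "udp")],
    "Kerberos": [(88, "tcp"), (88, "udp")],  # only when IKE uses Kerberos
                                             # rather than preshared keys
                                             # or certificates
}
IPSEC_IP_PROTOCOLS = {"ESP": 50, "AH": 51}  # raw IP protocols, not ports

def required_openings() -> list:
    """Flatten the rule set into (name, port_or_protocol, kind) tuples."""
    rules = [(svc, port, proto)
             for svc, pairs in IPSEC_TUNNEL_PORTS.items()
             for port, proto in pairs]
    rules += [(name, num, "ip-protocol")
              for name, num in IPSEC_IP_PROTOCOLS.items()]
    return rules
```

Note the distinction the data structure preserves: ESP and AH are IP protocol numbers, not TCP or UDP ports, and a firewall must be configured for them accordingly.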
Intrusion Detection Systems
Over the past year, IDSs have become a must-have security feature. IDSs, which are indispensable to the security of any .NET infrastructure, perform three primary tasks:
- An IDS monitors events that occur on computers and on the network.
- An IDS gathers and analyzes these events. Based on the analysis, the IDS can detect reported (i.e., published by an incident-reporting Web site, an IDS service, or a software provider) attacks that penetrated other protection layers of your security infrastructure.
- An IDS reacts in response to the detection of an attack. This reaction can include administrator notification, connection termination, or advanced data collection (i.e., gathering more information from more data sources than under usual circumstances).
You can use an IDS to detect not only reported attacks originating from outside your organization but also reported internal attacks. Too many organizations underestimate the risk of internal attacks by legitimate users. For example, a disgruntled employee might launch a Denial of Service (DoS) attack on your corporate directory. You can also utilize IDS as a set of services that can bridge the gap between your corporate security policy and your IT infrastructure's security components. Because an IDS reacts to attacks, it can dynamically enforce your security policy.
The two basic types of IDSs are host-based IDSs and network-based IDSs. A host-based IDS typically scans and monitors the local system state and data. In a Win2K environment, a host-based IDS server usually monitors log events, file-system events, and registry events. A network-based IDS monitors network traffic—think of it as an advanced sniffer. A popular example of a host-based IDS is Symantec's (formerly AXENT Technologies') Intruder Alert. Symantec also offers a network-based solution called NetProwler. Internet Security Systems (ISS) offers a product called RealSecure, which is both a host-based and network-based IDS.
Authentication
Your .NET infrastructure might include different kinds of client and administrator interfaces. Depending on the interfaces' purpose and the data accessible from these interfaces, they might require different levels of authentication. Some interfaces and data, for example, are public and don't require any authentication.
Your choice of an authentication protocol depends on the access protocol and the client application you're dealing with. Across RPC-based connections, you can use Kerberos or NT LAN Manager (NTLM). HTTP-based connections give you more options: basic or digest authentication, Kerberos, NTLM, or client certificate-based authentication. If you have a heterogeneous set of client applications from different vendors, don't choose NTLM or Kerberos. (NTLM is a Microsoft-specific protocol, and Kerberos is an open standard but isn't widely supported.) Remember also that Kerberos and NTLM are available over HTTP only when the user is already logged on to a Windows domain. If most of your clients connect through the Internet, you'll probably prefer basic authentication over SSL for small user populations. For large user populations, you might consider supplementing the standard authentication protocols with an authentication method based on custom forms or cookies. You might even consider implementing Microsoft Passport, a single sign-on (SSO) technology intended for large Internet environments.
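Basic authentication is simple enough to sketch in a few lines. The Python fragment below builds the Authorization header a browser sends; the credentials are the classic example pair from the HTTP authentication RFC. Note that base64 is an encoding, not encryption, which is why basic authentication should only ever travel inside an SSL session:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the HTTP Authorization header for basic authentication.
    The credentials cross the wire merely base64-encoded, so anyone
    sniffing an unprotected connection can recover them trivially."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Classic example credentials from the HTTP authentication RFC:
header = basic_auth_header("Aladdin", "open sesame")
```

Wrapping this exchange in SSL is what turns a weak protocol into the workable "basic authentication over SSL" option recommended above for small Internet user populations.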
The quality of an authentication solution largely depends on the number of factors by which the solution uniquely identifies an entity. A plain password-based solution that relies solely on knowledge (e.g., of a user ID and password) offers a lower authentication quality than a solution that relies on both possession (e.g., of a smart card) and knowledge (e.g., of a PIN code). The former solution is a single-factor authentication method, whereas the latter is a two-factor authentication method. For .NET-based solutions in a security-sensitive environment, you might even consider three-factor authentication methods that combine knowledge, possession, and biometric data (e.g., a fingerprint).
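The factor-counting idea can be sketched directly; this is a toy Python illustration of the classification, not a real authentication mechanism:

```python
# Authentication quality grows with the number of *distinct* factor
# types an exchange proves, not with the number of credentials.
FACTOR_TYPES = {"knowledge", "possession", "biometric"}

def authentication_factors(evidence: set) -> int:
    """Count distinct factor types in the presented evidence.
    Two passwords still count as one factor (knowledge);
    a smart card plus PIN counts as two."""
    return len(evidence & FACTOR_TYPES)

password_only = authentication_factors({"knowledge"})                   # 1 factor
smartcard_pin = authentication_factors({"possession", "knowledge"})     # 2 factors
with_biometric = authentication_factors(
    {"possession", "knowledge", "biometric"})                           # 3 factors
```

The point of the sketch is the set intersection: adding a second password adds nothing to `evidence & FACTOR_TYPES`, whereas adding a smart card or a fingerprint does.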
Another factor that affects the quality of an authentication solution is the cryptographic technology behind it. An authentication protocol such as Kerberos or NTLM is typically easier to break than an SSL client certificate-based authentication scheme. The former uses symmetric ciphers, whereas the latter uses asymmetric ciphers. An exception is Kerberos PKINIT, which the Win2K smart card logon process uses. This extension of the Kerberos protocol uses both symmetric and asymmetric ciphers. (The use of asymmetric ciphers might require the rollout of a PKI.)
When you're designing authentication solutions for your .NET infrastructure, you also need to consider the credential database. The setup of your credential databases once more depends on what the authentication credentials are used for. (The password database for a public application needs less protection than the database containing the passwords to launch a nuclear missile.) In a Windows environment, AD is a common place to store credentials. In the example that Figure 1 shows, you might choose to store all credentials on AD servers in the trusted zone or you might choose to store a subset of the credentials on AD servers in the DMZ. To set up the latter scenario, you can either define a separate AD forest for the DMZ or define a separate AD domain for the DMZ and integrate it with your AD forest in the trusted zone. I recommend the former method (i.e., the separate AD forest in the DMZ) because it completely shields the trusted zone's accounts from the DMZ's accounts. This method also removes the need to use RPCs across the firewalls protecting your trusted network and lets you use LDAP over SSL to set up a secure synchronization mechanism between the internal and the DMZ AD domain.
Another essential technology to consider when you're designing a multi-tiered .NET infrastructure is authentication delegation. A multi-tiered .NET application consists of multiple authentication tiers—for example, between the client and the Web server, between the Web server and the COM+ business object server, and between the COM+ business object server and the database server, as Figure 4 shows. If you want to use the user's identity to set access control on data on the database server, the authentication protocols you use must support delegation. In other words, the authentication protocols need to be able to forward the user's credentials from machine to machine. Each machine uses the client's identity to authenticate to the next machine.
Table 1, page 42, illustrates how delegation support is related to the authentication protocol you use. For example, assume you're using basic authentication to authenticate a user working from a browser to a remote Web server. To authenticate to a server (e.g., an Exchange server) that's one hop away from the Web server, the Web server can reuse the user's identity. If AD is installed (and the Kerberos authentication protocol is available), machines that are multiple hops away from the Web server can use the user's identity for authentication. The Web server process can use Kerberos and the client's credentials to authenticate to a COM+ server process. In turn, the COM+ server can use Kerberos and the client's identity to authenticate to a Microsoft SQL Server process (on yet another machine). Note that when you use anonymous access and Microsoft IIS password synchronization is enabled, you can't delegate the IUSR_servername account. The same is true when you use SSL/TLS-based certificate authentication and the certificate mappings are defined in AD. Also, remember that NTLM doesn't support delegation.
Figure 4 illustrates how to set up Kerberos delegation in a typical Win2K-based intranet environment. All components can use RPCs to communicate. The client interface is browser-based and thus communicates with the Web server over HTTP. Also, the user is already logged on to a domain through Kerberos or NTLM. In an Internet setup, the client would also communicate over HTTP but would generally not be logged on to a domain. Using Kerberos or NTLM over the Internet isn't a good idea anyway—these authentication protocols don't scale well in large environments and require the availability of an online trusted third party.
To use Kerberos from the browser to the database server, you need to meet the following conditions:
- All involved software must have access to and know how to use the Negotiate Security Support Provider (SSP) and the Kerberos SSP. SSPs are software modules that abstract the innards of an authentication scheme to applications and application developers. The Negotiate SSP is a special provider that negotiates the authentication protocol (i.e., Kerberos or NTLM) between a client and a server. At the time of this writing, the Kerberos SSP is available only on Win2K platforms. Microsoft Internet Explorer (IE) 5.5 and IE 5.0, Internet Information Services (IIS) 5.0, SQL Server 2000, Exchange 2000, and COM+ know how to use the Negotiate and Kerberos SSPs.
- The user must have an account in AD that doesn't have the Account is sensitive and cannot be delegated property enabled. To check this setting, go to the Microsoft Management Console (MMC) Active Directory Users and Computers snap-in, access an AD user object's Properties sheet, and select the Account tab, which Figure 5 shows. The service accounts of the services running IIS and SQL Server, as well as the identity used for the COM+ application, must be defined in AD. The IIS service account (by default, the machine account) and the COM+ application identity must be trusted for delegation. You set the Account is trusted for delegation property on the Account tab. For a machine account, you set this property on the computer object's General tab in the Active Directory Users and Computers snap-in.
- The Web service must have a valid Service Principal Name (SPN) registered in AD. If the Web service's name is different from the IIS computer name, be sure to use the Microsoft Windows 2000 Resource Kit setspn.exe utility to register a custom SPN.
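An SPN is just a structured string of the form serviceclass/host[:port]. The sketch below composes the name that setspn.exe would register; the service class and host names are hypothetical examples:

```python
def build_spn(service_class: str, host: str, port: int = None) -> str:
    """Compose a Service Principal Name of the form
    serviceclass/host[:port] -- the string that setspn.exe
    registers against a service account in AD."""
    spn = f"{service_class}/{host}"
    if port is not None:
        spn += f":{port}"
    return spn

# A Web farm answering as www.example.com rather than as any single
# IIS machine's name would need an SPN like this registered:
custom_spn = build_spn("HTTP", "www.example.com")
```

When the service name matches the IIS computer name, Win2K registers the SPN automatically; the manual setspn.exe step is needed only for custom names like the one sketched above.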
- The COM+ application must support the Delegate impersonation level.
Although authentication delegation is easy to set up in a Win2K environment, I strongly advise testing it and becoming familiar with it before you start implementing security for your .NET infrastructure. The technology behind authentication delegation is much more complex than the Active Directory Users and Computers GUI's configuration options.
Public Key Infrastructure
PKI gives you a set of services that can provide strong, asymmetric cryptography-based security services to multiple .NET applications and services. For example, you can use PKI to provide Secure MIME (S/MIME) services on BizTalk Server SMTP connections or to provide strong authentication and secure-channel services (i.e., SSL-based and TLS-based) to users who access your Web sites.
Setting up PKI in a .NET infrastructure such as the one that Figure 1 illustrates can be complex. In that scenario, PKI consists of four major components: an offline root CA, an online subordinate CA, a certificate-enrollment Web site running on the Web server farm (for Internet users), and a set of PKI-enabled clients, machines, and applications. PKI users and applications in the trusted zone obtain their certificates directly from the online issuing CA. In this setup, you'll need to cope once more with the problem of RPCs and firewalls—the enrollment Web site uses RPCs to communicate with the online issuing CA.
.NET infrastructure architects must answer three crucial PKI-related questions.
First, What will the PKI trust model look like? A PKI trust model defines the trust relationships between CAs and between CAs and PKI-enabled entities. Setting up a trust model can be daunting if you're involving commercial CAs or external partners that already have a PKI in place. A trust model must be able to answer the questions, Which CAs are trustworthy? and, more important, Which certificates and public keys are trustworthy? Although you can use cryptographic technology to verify most PKI trust relationships, you won't be able to verify the trust relationships between PKI-enabled entities and CA trust anchors. The trust notion between PKI-enabled entities and CA trust anchors is similar to human trust. You trust people to do something for you because you know them and you believe they'll do the right thing. Similarly, you believe that a CA trust anchor will issue trustworthy certificates because you know that the organization running the CA services has a reputation for doing so or because a representative of the CA's organization has convinced you of the CA's trustworthiness.
The second question is, Will you deploy an insourced, outsourced, or hybrid PKI? If you rely on commercial CAs, you'll become extremely dependent on them (simply because you've outsourced the management of a part of your security infrastructure to an external company). If you use PKI to provide strong security to applications that hold highly confidential or financial information, such dependency might be intolerable.
The third question is, How will you exchange certificates and certificate revocation lists between the PKI-enabled entities? You need to decide how you'll set up the PKI to make certificates and certificate revocation lists (CRLs) available to all the PKI-enabled entities. In a small environment, you can use a single LDAP-accessible directory, such as your internal Win2K AD. But the scenario becomes complicated if you must consider multiple directories (e.g., yours and one for each partner who's using your .NET solution).
Although you'll need to tackle many other challenges while setting up PKI in your .NET infrastructure, these questions address the most important subjects. Be prepared, and remember that designing your PKI won't be as simple as running the CA setup program in Win2K.
Platform Hardening
Platform hardening concerns the security of the Win2K platforms that your .NET infrastructure runs on. Hardening an OS platform is a never-ending task that administrators of all OSs face.
A primary platform-hardening task is to remain up-to-date with service pack and security hotfix installations. To deploy service packs and security hotfixes, you can use the application-deployment function of Microsoft Systems Management Server (SMS) or Group Policy Objects (GPOs).
One platform-hardening topic that received much attention in 2000 is protection against DoS attacks. From an availability perspective, DoS attacks can cause serious harm to your .NET infrastructure. You need to keep up-to-date about the latest DoS attacks. Microsoft and the Computer Emergency Response Team (CERT) regularly post DoS information on their Web sites. Microsoft has provided a set of registry values (e.g., SynAttackProtect, EnableDeadGWDetect, EnablePMTUDiscovery) that harden the TCP/IP stack against DoS attacks. For additional information about these values, see the Microsoft article "Security Considerations for Network Attacks" (http://www.microsoft.com/technet/security/dosrv.asp).
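A hardening script might collect those settings in one place. The Python sketch below renders them as reg.exe commands; the specific data values shown are commonly recommended settings, so verify them against the Microsoft article cited above before deploying:

```python
# TCP/IP-stack hardening values live under this key:
TCPIP_PARAMS = r"HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

# Assumed recommended values -- confirm against Microsoft's
# "Security Considerations for Network Attacks" article.
DOS_HARDENING = {
    "SynAttackProtect":    2,  # drop half-open connections aggressively
    "EnableDeadGWDetect":  0,  # don't let a flood trigger a gateway switch
    "EnablePMTUDiscovery": 0,  # ignore forged ICMP that shrinks the MTU
}

def reg_add_commands() -> list:
    """Render the settings as reg.exe commands for a hardening script."""
    return [
        f'reg add "{TCPIP_PARAMS}" /v {name} /t REG_DWORD /d {value} /f'
        for name, value in DOS_HARDENING.items()
    ]
```

Keeping the values in a single table like this also makes it easy to audit a server farm: the same dictionary can drive both the hardening script and a compliance check.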
A great tool that can help you automate server hardening is Win2K's Security Configuration and Analysis (SCA) tool. For more information about Win2K hardening, see Microsoft's Security Web site (http://www.microsoft.com/technet/security) and the Windows IT Security Web site (http://www.ntsecurity.net). An excellent source of up-to-date security-incident information is the CERT Coordination Center (CERT/CC) at http://www.cert.org.
I've described some of the key technologies and design principles that you need to keep in mind when you're designing security for your .NET infrastructure. Of course, other security technologies and design principles exist, but the solutions I discuss represent an "essential" security suite. If you don't consider each recommendation, you might miss important opportunities to improve security. Worse, you might end up with a design that's an intruder's paradise.