Installing Microsoft Cluster Server

Planning, preparing, and installing your cluster server software

Installing Microsoft Cluster Server (MSCS) software is straightforward. Notice that I said the server software, not the cluster itself. Installing the cluster is a more difficult task because a cluster is more than software: It's a carefully assembled collection of hardware and software that's part of your high-availability computing environment. This article will help you understand that the difficult part of installing MSCS is the preparation. Preparing your system for MSCS takes a lot of planning and hardware configuration. In addition, MSCS has day-to-day administrative rules that you must follow or you'll need to reinstall the MSCS software. I'll walk you through the process of planning for, preparing your system for, and installing MSCS. Let's begin with a look at MSCS's software requirements.

The Software
Because MSCS runs only on Windows NT Server, Enterprise Edition (NTS/E), you must first install two NTS/E systems, which function as the basic building blocks for a failover configuration. In fact, if you attempt to run the Cluster Setup program on a system that doesn't have NTS/E installed, Cluster Setup only presents the option of installing the Cluster Administrator program. As you can see in Screen 1, the NTS/E base CD-ROM looks similar to the CD-ROM for NT Server 4.0. The NTS/E CD-ROM includes the binaries for Alpha and Intel processors, clients, and support directories. The CD-ROM also includes Service Pack 3 (SP3).

Screen 2 displays the directory of the second, or component, CD-ROM. The component CD-ROM contains the additional components of NTS/E: MSCS, Microsoft Message Queue Server (MSMQ), Microsoft Transaction Server (MTS), and a support directory for these products. The CD-ROM also includes Microsoft's Internet programs: FrontPage, an upgrade to version 3.0 of Internet Information Server (IIS), Active Server Pages (ASP), Microsoft Index Server, NetShow On-Demand Server, and NetShow Live Server.

Configuration Rules
As you plan your MSCS installation, you need to be aware of two preinstallation and configuration rules: Cluster nodes must be running NTS/E, and cluster nodes must belong to the same NT domain and cluster. The MSCS documentation doesn't mention it, but you can install cluster nodes in a domain controller-member server configuration, which means that a cluster member can function in any domain role. I prefer to install cluster nodes as member servers rather than domain controllers, which reduces the overhead on these systems. (When a server is a domain controller, the entire user accounts database goes into the nonpaged pool.) Make your domain-role decisions carefully, because to change a server to a domain controller, you must reinstall NT. For flexibility, I prefer to install NTS/E systems as member servers or as a Backup Domain Controller (BDC)-member server combination.

Although you can format the nodes' system drives with the FAT file system, you must format the shared storage with NTFS. If you use NTFS on your system drives, you might want to consider the following strategy: Install two copies of NT. One copy is for everyday use, and the other copy lets you boot the system and still access the NTFS system drives. Installing two copies of NT will more than pay for itself if something goes wrong with your production copy. You can also use the alternative copy to back up and restore systems. Doing so lets you make a file-level backup that eliminates concerns about changes to open system files on your production copy of NTS/E.

If you install and boot under the secondary copy of NT, be very careful about accessing the shared disks. Because NTFS isn't a cluster file system, the shared disks should be accessible to only one system at a time. Accessing the shared disks simultaneously from multiple systems will cause disk corruption. MSCS avoids this problem by using the Cluster Disk Driver to limit disk access to one node at a time. Although this approach is simple, if you're like most of my clients, you're looking forward to Compaq's release of its cluster file system and distributed lock manager, which will allow shared access to clustered storage.

Administrative Rules
After you install your cluster nodes, you need to follow what Microsoft refers to as administrative rules, which involve things such as node names and TCP/IP addresses that you assign in the cluster planning phase. If you violate these rules, you must reinstall MSCS. The rules are:

  • You can't change cluster node computer names or TCP/IP addresses. (Microsoft might have a workaround for changing the IP address by the time you read this.)
  • You can't reassign system drive letters.
  • You must delete disk resources in the Cluster Administrator program before you repartition disks on the shared storage bus.
  • You must restart cluster nodes if you change the partitions of disks on the shared bus.
  • You can't start nodes under operating systems (OSs) that let utilities make low-level writes to physical disks.
  • You must reapply service packs after software or hardware additions. Heed this administrative rule unless you want to end up with a system you can't log on to. This rule is the reason I recommend installing two copies of NTS/E. I've been locked out of systems after updating NT systems to support multiprocessors.

Network-Related Items
Because client access to the cluster will be over the network, and the cluster nodes communicate over the network, networking plays an important part in the cluster's setup. The cluster relies on TCP/IP and Microsoft networking over TCP/IP for setup and day-to-day operations. If your site has problems running with TCP/IP as the only transport protocol, you must address these problems before you install a cluster in your production environment. You need to configure your existing NT servers and clients to use Windows Internet Naming Service (WINS) and Domain Name System (DNS). Proper WINS configuration solves NetBIOS name-resolution issues. Typical problem areas include domains that span subnetworks, browsing, and network logons. Proper DNS configuration solves basic TCP/IP connectivity (name-to-IP-address resolution) issues. You can use LMHOSTS and HOSTS files as point fixes for these name-resolution problems, but doing so can complicate management.
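
If you do need such a point fix while you sort out WINS and DNS, static entries go in the LMHOSTS and HOSTS files in the \system32\drivers\etc directory under your NT installation directory. The names and addresses that follow are hypothetical; substitute your own. In LMHOSTS (NetBIOS name resolution; the #PRE tag preloads the entry into the name cache):

192.168.100.1   NODEA   #PRE
192.168.100.2   NODEB   #PRE

In HOSTS (host-name-to-IP-address resolution):

192.168.100.1   nodea.mycompany.com   nodea
192.168.100.2   nodeb.mycompany.com   nodeb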

Before you set up MSCS, familiarize yourself with the concept of a network card having multiple TCP/IP addresses. Microsoft refers to this capability as multihoming. Microsoft clustering uses many static TCP/IP addresses, not Dynamic Host Configuration Protocol (DHCP). In a typical configuration, a cluster uses five static TCP/IP addresses. Two addresses connect the cluster to your client network, two addresses are for cluster node to cluster node communications (intracluster communications), and one address is for cluster management.

Because you're using at least four network interfaces, five TCP/IP addresses seems reasonable. But you're just getting started. You will need an additional static TCP/IP address for each group of applications or services for which you want to provide failover capability. For example, if you want to set up a cluster to support four departments with Microsoft Exchange and SQL Server, you would typically create a virtual server for each department (that's four addresses), plus two more addresses for the virtual servers that run Microsoft Exchange and SQL Server. Dividing file, print, and application services into virtual servers lets you distribute the load across multiple physical servers. This division lets each department think it has its own server, but it also adds up to a lot of TCP/IP addresses.
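
To put numbers to that example (assuming one address per virtual server; your plan might differ), the tally looks like this:

  • 2 addresses for the nodes' connections to the client network
  • 2 addresses for the private intracluster interconnect
  • 1 address for cluster management
  • 4 addresses for the departmental virtual servers
  • 2 addresses for the Exchange and SQL Server virtual servers

That's 11 static TCP/IP addresses for a two-node cluster, and the total grows with every virtual server you add.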

Request for Comments (RFC) 1918 can help with this situation. The Internet Assigned Numbers Authority (IANA) has set aside several IP networks that are not routed on the Internet, so you're free to use these RFC 1918 addresses within your organization. (For more information about address classes as set forth by RFC 1918 and MSCS virtual servers, see "Planning for Implementing Microsoft Cluster Server," March 1998.) Microsoft notes that these addresses are useful for setting up your private network for intracluster communications. That's true, but you'll need to be sure that the addresses you assign to your private intracluster interfaces are on a different subnet than your public interfaces. If they aren't, you might end up with problems as your clients try to access your cluster and NT inadvertently tries to route data back to them over the private network. In either case, you'll need to assign names and addresses to your cluster nodes before installing MSCS.
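
For example (these addresses are hypothetical), if your public interfaces use addresses on the 192.168.100.0 subnet, you could draw the private interconnect addresses from a different RFC 1918 network:

Node A public interface: 192.168.100.1, subnet mask 255.255.255.0
Node B public interface: 192.168.100.2, subnet mask 255.255.255.0
Node A private interconnect: 10.0.0.1, subnet mask 255.0.0.0
Node B private interconnect: 10.0.0.2, subnet mask 255.0.0.0

Because the private addresses sit on a separate subnet, NT won't route client traffic over the interconnect.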

Drive Assignment Issues
NTS/E 4.0 stores drive assignments in each node's Registry. The Cluster Installation program replicates the shared cluster drive assignments of your initial node to its sister nodes. Start the assignments using letters near the end of the alphabet, such as the letter X. By starting your assignments from X and working back toward C, you have the freedom to add additional internal drives and CD-ROM drives to each cluster member without bumping into your shared cluster drive assignments.
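
For example, a hypothetical two-node cluster with three shared disks might use these assignments:

  • C: internal system disk on each node
  • D: internal CD-ROM drive on each node
  • X: shared quorum disk
  • W and V: shared data disks

That layout leaves E through U free for additional internal drives on either node.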

Hardware
The equipment you use must be on the MSCS Hardware Compatibility List (HCL). Microsoft keeps an updated copy of the HCL on the Web at http://www.microsoft.com/hwtest/hcl. As Screen 3 shows, Microsoft breaks MSCS-related hardware into four categories: Cluster, Cluster/FibreChannel Adapter, Cluster/Raid, and Cluster/SCSI Adapter.

For your production systems, use only complete cluster configurations listed under the Cluster category. Don't assume that you can assemble your own cluster by using parts from each of the cluster categories. To be validated, a cluster must be tested as a whole. The systems listed under the Cluster category are the only systems that Microsoft Product Support Services considers supported configurations. Microsoft doesn't provide best-effort support for clustered systems as it does for NT Server or NT Workstation. You don't want to spend thousands of dollars on NTS/E and the hardware and software to go with it, and then tell your boss you can't get support. Unless you want to go through the validation process yourself, I can't stress enough the importance of abiding by the Cluster HCL.

Servers
Microsoft's minimum requirements for clustered servers are laughable, and I've yet to see a validated configuration that doesn't exceed them by a wide margin. Microsoft stipulates that your clustered servers must be PCI-based systems with 90MHz Pentium processors or better and at least 64MB of RAM. NTS/E and MSCS require 500MB of free disk space on the internal or system disk. This system disk can be IDE/EIDE- or SCSI-based.

Clustering uses a lot of PCI slots, so select a system with a large number of available PCI slots. The number of PCI slots you need depends on the number of networks and external storage buses your server supports and the number of multifunction cards you use.

Unlike most other NT high-availability solutions, MSCS supports Alpha processors. However, you can't mix processor architectures in the same cluster.

Except for processor architecture, your cluster systems don't need identical or symmetric hardware configurations. Asymmetric server hardware is fine for clustered servers providing file and print services, but unless you're willing to run at diminished performance, you'll need to size your servers' processors and memory to support the failover load and accommodate growth. Symmetric, or nearly symmetric, memory configurations are important when your cluster supports virtual servers for applications such as SQL Server 6.5, Enterprise Edition, SQL Server 7.0, Enterprise Edition, or Exchange Server 5.5, Enterprise Edition. In the case of Exchange, the Performance Optimizer determines optimum settings based on the hardware configuration of the first server it's installed on. These settings include the locations of databases and log files and initial memory buffer allocations. Performance Optimizer saves these settings to the Registry and marks the settings as values to replicate to other cluster members. Although the optimum locations of the shared data and log files in clustered environments don't vary (Exchange's Setup Wizard always places the shared data and log files on the shared disks allocated to the virtual server to which it is installed), the program bases the buffer allocations on the memory in the installation server. The dynamic buffer allocation of Exchange 5.5 lessens this concern, but if the memory in your systems differs greatly from server to server, these settings could impair performance on a failover.

Having noted the minimums, I want to make a few recommendations. Only use systems that are listed under the Cluster category of the Cluster HCL. If you don't, you won't be able to get support from Microsoft.

From the system disk space standpoint, use at least two 2GB SCSI hard disks for your system disks. The extra space gives you room for applications that install to the system partition and room for the uninstall directories that SP3 and later service packs can create. Why do I recommend SCSI over IDE or EIDE? NT works with IDE and EIDE, but some I/O optimizations and fault-tolerant features, such as cluster remapping, are available with SCSI and aren't supported with IDE or EIDE.

Why do I recommend two disks? You're building a high-availability system, so spending a few more dollars for a second hard drive to mirror your internal system disk makes sense. You might want to go further and use a RAID controller for your internal system disks.

Network
To be validated by Microsoft and included on the Cluster HCL, your servers need at least two PCI-based NICs or a multiported NIC, which can be Ethernet, Token Ring, or Fiber Distributed Data Interface (FDDI) cards. At least one NIC is used for the cluster interconnect, unless you're using ServerNet or Memory Channel cards for the cluster interconnect, in which case you need only one NIC. If you forgo the redundancy of using this NIC to carry cluster heartbeat traffic, the NIC doesn't need to be PCI-based.

Early in my MSCS experiences, Jim Gray, manager of Microsoft's Bay Area Research Center (BARC), pointed out that this dual-NIC configuration is flawed. Although the configuration allows for redundancy in intracluster communications, it has a single point of failure from the client access standpoint. Regrettably, his point has been proved to me not only in a lab but also in production environments. You can drive this point home to yourself by unplugging the public network cable on one cluster node. The nodes will still talk to each other over the private network, and no failover will occur. The problem is that the clients won't be able to communicate with the "publicly challenged" cluster node. This demonstrates the need for fault-tolerant NICs for the public network.

I know of two possible sources for fault-tolerant NICs, but I haven't tested the NICs extensively, and they're not included on the Cluster HCL. Adaptec supports its Duralink Failover solution on its single-, dual-, and quad-port (Quartet) PCI Fast Ethernet adapters. Intel's Adapter Fault Tolerance (AFT) is available for the company's EtherExpress PRO/100 Server and PRO/100+ adapters. With both solutions, you configure one interface as primary and one as backup. The software watches the availability of the link to the primary interface and fails over TCP/IP and media access control (MAC) address information to the secondary interface if necessary. Both products have integrated Simple Network Management Protocol (SNMP) support that helps monitor general information, port identification, port status, and statistics.

Adaptec's Duralink Port Aggregation software provides the same fault-tolerant capability as the company's Duralink Failover product, except that the Port Aggregation software doesn't leave one interface inactive while waiting for the other interface to fail. Port Aggregation uses all available ports, which improves your throughput. Adaptec states that its Port Aggregation software provides a theoretical throughput of 1.2Gbps.

You might think an alternative is to connect two NICs to the public network and enable both NICs for cluster and client traffic. However, MSCS Setup seems to identify only one NIC per subnet. You might be able to fake it and use both interfaces, but this method doesn't give you the true fault tolerance that the Adaptec or Intel solutions provide.

Shared Storage
Because your system and shared storage must exist on different buses, you need at least one more controller, or a multichannel controller, to support your external shared storage. This controller can be a SCSI, differential SCSI, or Fibre Channel controller. Keep in mind that Microsoft noted at the last Windows NT Magazine Professionals Conference that it plans to support the clustering of more than two nodes only via Fibre Channel controllers. If you use SCSI, you can't mix single-ended and differential devices without a converter. Microsoft recommends that these cards be the same (same vendor, model number, and BIOS revision) in each cluster node. You can also use a dual-ported card. In fact, some vendors are making combination Ethernet and SCSI cards. But remember, with SCSI and Ethernet going through one card, your system has a single point of failure.

You need an external storage array to hold the shared cluster disks. Because high availability is the goal, you'll probably want to go with a storage array that has a SCSI-to-SCSI (or Fibre Channel-to-SCSI) controller that does hardware-level RAID. Microsoft refers to these arrays as Hardware (HW) RAID boxes. These array-based RAID boxes are a departure from the host-based RAID adapters that you probably use in your NT servers. Both HP and Dell are using host-based American Megatrends MegaRAID adapters in some of their cluster configurations, but there are valid reasons for using array-based RAID controllers. The major advantage of array-based RAID controllers is that you can enable write-back caching. With MSCS, the caches on host-based RAID controllers have to be configured to operate in write-through mode, just as they do with other high-availability solutions such as Compaq's Online Recovery Server. This configuration is a requirement because the failure of a cluster node with a write-back-enabled controller would result in cached data not being written to disk. Battery-backed controllers actually make the situation worse, because the failed node will write its cache to disk on restart. This problem isn't an issue with array-based RAID because the nodes share the cache on the controller in the array. To increase availability, look for arrays that contain dual in-array controllers with battery-backed Error-Correcting Code (ECC) and synced caches.

Shared SCSI Bus Configuration
The external SCSI bus is shared between MSCS systems, so the setup differs from a typical SCSI configuration. In this multi-initiator cluster configuration, you need to plan for the live insertion and removal of devices. This is facilitated by disabling the internal termination on your SCSI controllers and using external termination on the shared bus. The use of external termination via Y-cables, trilink connectors, or isolation/termination devices like American Megatrends' Cluster Enabler products lets you remove a cluster node from the shared bus and maintain termination on the bus. The remaining node continues to function, which enhances the cluster's maintainability.

You can't use the default SCSI IDs for the controllers because both controllers share the same bus. I recommend ID 6 on the first node and ID 7 on the second node. You need to disable the boot-time SCSI reset operation because it wouldn't make sense for one node to reset the bus as it boots while the other node is attempting to use the bus. Microsoft notes that you must disable the BIOSs of some SCSI controllers or the computer won't boot.

Installing NTS/E
Now that you have assembled your hardware and made your planning and administrative decisions, you need to install NTS/E. The NTS/E Installer guides you through the installation process. The Installer first prompts you to update the system to SP3 for NT 4.0. (SP3 is required for NTS/E.) Screen 4 shows the standard service pack installation options. Don't waste time creating the Uninstall directory because NTS/E requires SP3. Reboot the server when you finish the installation.

After you install SP3, you must reapply it each time you add new software or hardware components to your configuration. I recommend that you copy SP3 to each server so you can find SP3 when you need it.
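
For example, assuming your CD-ROM is drive D and the service pack files are in an \sp3 directory on the CD-ROM (adjust the paths to match your media), the following command copies SP3 to a local directory:

XCOPY D:\sp3 C:\sp3 /S /E

You can then rerun the service pack's Update program from C:\sp3 whenever you add software or hardware.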

Fix System and CD-ROM Drive IDs
NT's handling of drive IDs is too flexible for MSCS. The system reassigns drive letters at startup based on the order in which it discovers disks. The system assigns primary partitions first, followed by secondary partitions. If you don't fix drive assignments, they change when you add or delete controllers, disks, or adapter drivers. This flexibility isn't acceptable when you run MSCS because MSCS depends on drive letter assignments stored in the NT Registry.

As Screen 5 shows, one solution is to use the Disk Administrator to assign the system drive ID to C and the CD-ROM drive to D after installing NTS/E. Assigning these drives fixes the drive IDs in the Registry and prevents the IDs from changing as you bring your shared SCSI bus online. To make these drive assignments, boot NT on both nodes and log on with an administrative account. Select Exit from the NTS/E Installer menu. Now run the Disk Administrator from the Administrative Tools menu and make your drive assignments. You must assign another letter (E, for example) to the system drive and reboot before you can fix it at C. These steps might seem unnecessary, but they're important if you want to make sure your C drive doesn't move around.

Create a Service Account on the Domain. The MSCS service must run under a domain account. I've found that naming these service accounts after the service name as it appears in Control Panel, followed or preceded by the word service, helps me stay organized. For MSCS, the account name would be ClusterService. Create this account on the domain and set its password properties to Password never expires. The account doesn't need any other settings. The MSCS installation adds the account to the local Administrators group.
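
If you prefer the command line to User Manager for Domains, the following command (the password shown is a placeholder) creates the account on the domain; you still need to set the Password never expires option in User Manager for Domains:

NET USER ClusterService MyPassword /ADD /DOMAIN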

Confirm Node Connectivity. You need to test Microsoft TCP/IP connectivity before you install MSCS. Start the test by trying to ping Node B's TCP/IP addresses from Node A. After you confirm TCP/IP connectivity, you need to test NetBIOS name resolution over TCP/IP using the NBTSTAT program. You test name resolution by entering

NBTSTAT -a NodeName

at a command prompt. If these tests succeed, you're ready to install MSCS on your first node.
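
For example, with the hypothetical addresses I used earlier, the test sequence on Node A would look like this (substitute your own addresses and Node B's computer name):

PING 192.168.100.2
PING 10.0.0.2
NBTSTAT -a NODEB

The first command tests the public network, the second tests the private interconnect, and the NBTSTAT command confirms NetBIOS name resolution.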

Installing MSCS
You've finished most of the preliminary work, and you're down to the final steps before installing the cluster server software. Turn on your shared SCSI array and both nodes, and stop the nodes at the OS Loader screen by pressing the spacebar.

Assign drive IDs on Node A for shared cluster disks. MSCS makes all drive assignments permanent during setup, so you need to fix your drive assignments on your shared disks and format them with NTFS before running MSCS Setup. To fix your drive assignments, boot NTS/E on Node A and log on with an administrative account. Select Exit when the Installer menu appears. Invoke Disk Administrator from Administrative Tools, just as you did for the internal drive assignment. Because you turned on the shared array, you'll see your internal drives and the drives on your array. Now you can create partitions, assign drive letters, and format the shared drives with NTFS. MSCS fails over only at the physical disk level, so you don't benefit from putting multiple partitions on a disk. Start the drive assignments at X and work back toward your internal disk and CD-ROM identifiers. The MSCS installation copies this information to cluster members, so you need to make the drive assignments only once. However, you must run Disk Administrator on both cluster nodes to assign IDs for drives you install after you install MSCS.

Install MSCS on Node A. To install MSCS on Node A, invoke the NTS/E Installer by selecting Start, Programs, Administrative Tools, Enterprise Edition Installer. On the NTS/E Installer screen, select MSCS, load the NTS/E Component CD-ROM, and click Start Installation. Screen 6 shows the welcome dialog for MSCS installation. Exit any open applications and select Next.

Screen 7 shows the hardware compatibility dialog box. You must click I Agree to indicate that your hardware is MSCS-compatible, and then click Next. Now you have the option to form or join a cluster or install the Cluster Administrator. Because this is your cluster's first node, click Form a New Cluster under Select the operation to perform, and click Next.

MSCS Setup now prompts you for the cluster name. This name is used to create an MSCS network name resource that serves as the NetBIOS name for the cluster, so the name is limited to 15 characters. The network name resource becomes part of a default MSCS group named Cluster Group, and Microsoft recommends that you use this name to administer the cluster. You can change the cluster name later with the Cluster Administrator program.

In the Cluster Service account dialog box, which Screen 8 shows, enter the account name you created (ClusterService in this case), and the account's password and domain. (You can change the account name later by changing its startup properties in Control Panel.) Click Yes at the next screen to add the account to the Administrators group.

Setup identifies all the SCSI disks as shared cluster disks unless they are attached to the bus that your system disk is attached to. Setup doesn't discriminate between disks connected to your shared SCSI bus and those connected to another (nonboot) SCSI controller in your system. Screen 9 shows how you must manually identify the disks that are not on the shared SCSI bus and therefore not managed by MSCS. Setup creates default Physical Disk resources and groups for each shared disk for use in the Cluster Administrator utility.

Setup now prompts you to select the disk where you'll store permanent cluster files. This disk must be on the shared storage bus and is called the quorum resource, or quorum disk. The quorum disk is important because MSCS uses it to determine whether another server is up when the nodes can't communicate over their interconnects. As with all shared disk resources, only one node at a time can control the quorum disk. The quorum disk contains the cluster configuration change log. This log stores changes to the cluster's configuration database so that the changes can be communicated to cluster members that are offline when the changes occur. The log is in the \MSCS directory, which contains files that maintain the persistent state information for the Cluster Manager. This information includes the cluster event log and Registry checkpoints. The cluster will stop functioning if the quorum disk becomes unavailable, so it's important to place the quorum on a fault-tolerant device.

Next, you must identify your cluster's available network interfaces or adapters, specify their configuration settings, and give them descriptive network names. You must also decide which type of cluster communication to assign to each network adapter. As Screen 10 shows, the choices are intracluster communications only, client-to-cluster communications only, and both intracluster and client communications.

You can use only one network adapter at a time for internal cluster communication. If you enable multiple adapters for internal cluster communication, you must prioritize the available networks by selecting your preferred network adapter and moving it to the top of the list.

The installation procedure warns you that single-adapter configurations represent a single point of failure, as Screen 11 shows. This dialog box also appears if you have two adapters with their primary TCP/IP addresses on the same network.

Screen 12 shows how the Setup program prompts you for the IP address, subnet mask, and network you'll use to administer the cluster. Use the values from the cluster planning that you performed earlier, then select the network over which you'll administer the cluster. Click Next to go to the final screen of the installation process, and select Finish. When you click Finish, MSCS Setup will:

  • Copy the necessary files
  • Set clusdisk parameters
  • Update the cluster network configuration
  • Set MSCS security
  • Synchronize NT Registry data
  • Update network providers
  • Start the Cluster Server disk driver
  • Start the Cluster Server service

Installing MSCS on Node B. Installing MSCS on your next cluster node is easier than installing it on your first node. You simply type the name of the cluster you're joining and the password for the Cluster Service account. The new node automatically learns the cluster configuration information from Node A.

Confirm cluster network and SCSI connectivity. Now that the cluster is running on Node A, it's a good idea to test connectivity from Node B to the cluster address and name. Use ping and NBTSTAT as you did when you tested the connectivity between nodes before you installed MSCS. Next, start Disk Administrator to confirm that Node B sees the shared storage bus. Disk Administrator, as you see in Screen 13, shows the disks on the shared bus as offline. If you don't see the disks, check your connections. If you do see the disks, you're ready to add Node B to the cluster.
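
To illustrate this test: if you assigned the cluster the hypothetical management address 192.168.100.5 and the name MYCLUSTER, you would run the following commands from Node B:

PING 192.168.100.5
NBTSTAT -a MYCLUSTER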

The NTS/E Installer is nothing more than a front end for individual setup programs, so it's easiest to exit NTS/E Installer and run the Cluster Setup program directly. Browse to the location of the MSCS files on the component CD-ROM and invoke setup.exe. Select Next at the welcome screen. Click I Agree to indicate that your hardware is compatible.

As Screen 14 shows, click Join an existing cluster, then enter the name of the cluster you formed with Node A. As before, I recommend accepting the default location for the MSCS files when you see the dialog box that Screen 15 shows.

Enter the password of the domain account the Cluster Service runs under. If you're installing on a member server, Setup will ask if you want to make this account a member of the local Administrators group. Click Next in the dialog box Screen 16 shows to go to the next-to-last window. Click Finish to complete the installation. If everything went well, you'll see a dialog box announcing the successful installation of MSCS on Node B and the need to reboot.

Uninstalling the Cluster Software
MSCS is a well-behaved NT application, and you can uninstall it from the Control Panel Add/Remove Programs applet. You must reboot the node to complete the removal. Keep in mind that simply uninstalling MSCS from a node doesn't remove the node from the cluster. You must use Cluster Administrator to evict the node before it, or any other node, can rejoin the cluster.

Cluster Administrator Program
To manage your cluster remotely, you can install Cluster Administrator on any NT 4.0 system running SP3. On systems running NTS/E, you install Cluster Administrator by following the directions in the MSCS installation program. Non-NTS/E systems still use the Cluster Setup program for installation, but on these systems Setup gives you only the option to install Cluster Administrator, not the options to form or join a cluster. Once you install the administrator program on an Intel or Alpha platform, the program administers clusters running on either platform.

Moving On
I hope you agree that installing MSCS isn't painful. Now you're ready to administer your cluster, if it's a test environment. If you are building a production cluster and you want to receive support, you must validate your configuration. Both of these processes are more interesting than clicking dialog boxes, as you did to install MSCS.
