Clustering isn't as much about the front end of an application as the back end. Precious few applications care whether your database lives on a cluster or not; in fact, applications by and large shouldn't care. What matters more is what happens after a server meltdown, or when you really have a lot of users asking for a lot of data at one time. That's where clustering comes into play. There are normally two types of clustering: load balancing and failover. This short article introduces both.
Load Balancing Cluster
In a true load-balancing cluster, all servers (called "nodes") act as one. The group of computers is called a quorum. A computer or software service called the "quorum manager" creates the illusion of a single server to the outside world, passing processing requests off to one or more servers. This sharing of work produces a very powerful virtual computer. If one of the nodes leaves the cluster, the quorum manager just hands the work to another server.
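The quorum manager's job can be pictured as a simple dispatcher: hand each request to the next live node, and stop routing work to any node that leaves the cluster. This is only an illustrative sketch (the class and node names are invented, not any real clustering API):

```python
# Toy model of a load-balancing "quorum manager": round-robin dispatch
# that skips nodes which have left the cluster. Illustrative only.

class QuorumManager:
    def __init__(self, nodes):
        self.nodes = list(nodes)      # all participating servers
        self.alive = set(nodes)       # nodes currently in the cluster
        self._next = 0                # round-robin position

    def node_failed(self, node):
        # A node that leaves the cluster stops receiving work.
        self.alive.discard(node)

    def dispatch(self, request):
        # Try each node starting from the next slot, skipping dead ones.
        for i in range(len(self.nodes)):
            node = self.nodes[(self._next + i) % len(self.nodes)]
            if node in self.alive:
                self._next = (self._next + i + 1) % len(self.nodes)
                return f"{node} handled {request}"
        raise RuntimeError("no live nodes in the cluster")

manager = QuorumManager(["NodeA", "NodeB", "NodeC"])
print(manager.dispatch("query-1"))   # NodeA handled query-1
manager.node_failed("NodeB")
print(manager.dispatch("query-2"))   # NodeB is skipped; NodeC answers
```

From the client's point of view there is only ever one "server": the manager. That is the illusion a true load-balancing cluster provides.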
While this type of clustering sounds pretty cool, SQL Server doesn't do it (yet). Sorry about that, it just doesn't! It's OK, though, read on.
Failover Cluster
Failover clustering is fairly easy to set up and provides high availability for your environment. In failover clustering, two servers share a single storage system between them. The servers establish a signal between them that acts as a heartbeat; should one node fail to hear the other, the second node takes over the identity of the first. All of this is pretty seamless to the consumer of the data. Depending on how tolerant the application is to connection timeouts, your users may not even notice!
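The heartbeat-and-takeover idea can be sketched in a few lines. This is a toy model, not how MSCS actually implements it; the node names and timeout value are illustrative:

```python
# Toy model of heartbeat-based failover: if ServerB stops hearing
# ServerA's heartbeat within the timeout, it takes over ServerA's
# identity. Illustrative only.

import time

class FailoverPair:
    def __init__(self, timeout=3.0):
        self.timeout = timeout                 # seconds of silence tolerated
        self.last_beat = time.monotonic()      # when ServerA was last heard
        self.identity_owner = "ServerA"        # node currently answering

    def heartbeat(self):
        # ServerA signals that it is still alive.
        self.last_beat = time.monotonic()

    def check(self):
        # ServerB's watchdog: silence past the timeout triggers takeover.
        if time.monotonic() - self.last_beat > self.timeout:
            self.identity_owner = "ServerB"
        return self.identity_owner
```

As long as heartbeats keep arriving, nothing changes; the moment they stop for longer than the timeout, the surviving node answers under the failed node's name, which is why clients may never notice.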
SQL Server 7 and 2000 both handle this type of cluster. As a matter of fact, SQL Server has two modes of this type of cluster: Active/Active and Active/Passive. Let's take a look.
Active/Active
In this type of failover cluster, each server acts on its own and can also cover for the other server's failure. Let's say you have two servers (nodes) in the cluster named ServerA and ServerB. The cluster itself is known by another name, say Cluster1. Users can address ServerA or ServerB. Should ServerA go belly-up, ServerB takes over ServerA's identity and answers to both names. It just chugs along with two identities.
An Active/Active cluster can have up to four nodes participating and up to 16 instances. We'll cover instances in greater depth in another tutorial, but an instance can be thought of as a separate installation of SQL Server running on the same system. In ODBC and other connection types, an instance is addressed by the name of the server followed by the name of the instance, like this: SERVERNAME\INSTANCENAME, instead of just the SERVERNAME you're used to using.
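To make the addressing concrete, here is how the server\instance form slots into an ODBC-style connection string. The server and instance names below are hypothetical, and the snippet only builds the string; it doesn't open a connection:

```python
# Building the SERVERNAME\INSTANCENAME address used to reach a named
# instance. The names here are illustrative, not from a real cluster.

server = "CLUSTER1"
instance = "SQL01"                       # hypothetical instance name

address = f"{server}\\{instance}"        # backslash must be escaped in Python
conn_str = (
    "Driver={SQL Server};"
    f"Server={address};"
    "Database=master;"
    "Trusted_Connection=yes;"
)
print(address)                           # CLUSTER1\SQL01
```

Note that clients address the virtual server name, not the physical node, which is what lets a failover happen underneath them.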
Active/Passive
In Active/Passive, the cluster name is the only identity known to the outside world. The other node "stands by" and is activated only if you fail over manually or the first node fails.
You may think that the only reason to cluster is safety. After all, I've already said that SQL Server doesn't do true load balancing, so why else would you do it? Beyond the obvious high-availability benefits, you can also manually fail a system over to the second node, apply a service pack to the first node, fail it back, and then upgrade the second. This might not sound very important, but if you're in a shop that doesn't allow any downtime, it's priceless.
No matter which configuration of failover clustering you choose, there are some caveats. First, you need the Enterprise edition of SQL Server. You can also do it with the Developer Edition, but we'll stay with the commercial versions for this tutorial.
You're going to need Microsoft Cluster Service (MSCS) installed. That's a whole topic outside the scope of this tutorial, but Microsoft's clustering home page will help you get started. The long and short of the OS requirement is that you need Windows 2000 Advanced Server or Datacenter Server, or the equivalent Windows Server 2003 editions. Windows Server 2003 changes things quite a bit, so make sure you bone up on that site if you're planning to run your cluster on that operating system.
You'll need two network cards in each server. Technically this isn't a hard requirement, but since the servers need a heartbeat between them, it's best if that traffic doesn't share a card with everything else. Next, you're going to need a disk to share between the servers; Microsoft calls this a "shared SCSI bus." Most people use a SAN here, and even if you've never read the Hardware Compatibility List before, you'll have to follow it now. Microsoft has been moving that link around a bit, so search for "cluster" and "Hardware Compatibility List" if the link doesn't work. Your storage must be on that list, or don't expect everything to go well.
You can read more about the steps needed to install a cluster at Microsoft's Support Center.
Read more about administering SQL Server at InformIT's SQL Server Reference Guide.
This was first published in March 2005