This information is excerpted from Chapter 3, "Performing your SQL Server consolidation," of our original expert e-book, "Consolidate SQL Servers for availability, scalability and cost savings."
Once you inventory your SQL Server dependencies and analyze candidate servers, you should be in a position to develop a testing environment.
Your SQL Server testing environment should be adequately sized to handle a representative load of each wave of user databases to be consolidated on all SQL Server instances running on that machine. To adequately size a server to support an unknown load or an unknown number of SQL Server instances, the server should be a top-of-the-line, 64-bit, multiprocessor machine (4 to 16 processors) with significant RAM (8 to 16 GB), and be connected to a large storage area network (SAN).
Your testing environment should also be able to support the wave that comprises the largest user environment, which may be several clusters. It must also have enough disk capacity to accommodate the maximum number of user databases per consolidation phase, as well as the system databases. (This could number in the thousands of databases per server.) The system databases will be much larger in the consolidation environment than in individual environments. Tempdb will be significantly larger on servers running SQL Server 2005, especially if you are using database snapshots or the snapshot isolation level. Here are some factors to consider for your test environment, as well as your final production environment.
Factors to consider for your test environment:
Storage area networks (SANs)
SANs offer superior disk performance, improved reliability and recoverability, and clustering support. They are also necessary when working with larger databases (more than 100 GB); without one, you can expect to waste a great deal of time moving data from one system to another. SAN vendors provide snapshot functionality to mirror your data in seconds, carving out another copy of your database for use elsewhere. In your testing and production environments, the size of your SAN cache should be larger than the maximum data transfer during a checkpoint for optimal performance.
Disk and array considerations
To get optimal performance in both your test and production environments, use SCSI drives with the highest rotation speed (15 Krpm) and the largest number of spindles. While SATA drives offer some of the performance of SCSI drives, they can become saturated during checkpoints. However, a cluster file system allows customers to stripe data across drives to increase performance for certain load patterns and to use smaller, more cost-effective arrays. iSCSI networks and arrays with SATA drives are offering a whole new category of cost/performance features to a certain group of customers. Place write-intensive databases and Tempdb on RAID 10 arrays, and read-intensive databases on RAID 5 arrays. RAID 0 may offer some cost and performance benefits, but it provides no fault tolerance, and the cost of downtime quickly outweighs any cost savings. Put the log files on separate disks, or even separate controllers, to minimize contention.
Use the following performance monitor counters to monitor the performance of your disks:
- SQLServer: Buffer Manager — Page writes/sec
- SQLServer: Buffer Manager — Page reads/sec
- PhysicalDisk — Avg. Disk Queue Length (should be under 2)
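The SQL Server counters above can also be read from inside the server itself. As a minimal sketch (assuming SQL Server 2005 or later, where the sys.dm_os_performance_counters view is available):

```sql
-- Sketch: read the Buffer Manager counters from within SQL Server.
-- Note: these "per sec" counters are cumulative bulk counters, so sample
-- the view twice and divide the difference by the elapsed seconds to get
-- an actual per-second rate.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page reads/sec', 'Page writes/sec');
```

This is convenient when you want to capture counter samples into a table on a schedule rather than running Performance Monitor on each consolidated instance.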
In SQL Server 2005, execution and I/O-related dynamic management views can be helpful in capacity planning.
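For example, the I/O-related dynamic management views can show which databases and files generate the most load, which helps when deciding how to group databases into consolidation waves. A minimal sketch (assuming SQL Server 2005 or later):

```sql
-- Sketch: cumulative I/O and stall time per database file since instance
-- start. Because the numbers are cumulative, sample twice and compare the
-- deltas to see load over a specific interval.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```

Files with high stall times relative to their read/write counts are candidates for faster arrays or for separation from other write-intensive workloads.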
Processor considerations
A consolidated SQL Server solution will support larger numbers of users than departmental SQL Servers. To offer them optimal performance, you will need multiple fast processors. Your test and production environments should be multiprocessor machines running 64-bit processors. In most consolidation test cases outlined by Microsoft, the machine of choice was an 8-way or 16-way 64-bit machine. However, many companies are finding that today's 4-way dual-core machines offer a sweet spot for price/performance.
Memory considerations
Both SQL Server 2005 Enterprise and Standard Edition take advantage of all physical memory. Using multiple instances of SQL Server will minimize memory pressure, but you will need to configure fixed memory management as opposed to dynamic memory management, setting a maximum memory value per instance. To configure how SQL Server uses memory, use sp_configure (i.e., sp_configure 'min server memory (MB)', '2000' and sp_configure 'max server memory (MB)', '16000').
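Putting the sp_configure calls together, a minimal sketch for fixing memory on one instance looks like this (the 2000 MB and 16000 MB values are illustrative only and should be sized for your own consolidation wave):

```sql
-- Sketch: pin an instance's memory range instead of relying on dynamic
-- memory management. 'show advanced options' must be enabled before the
-- server memory options are visible to sp_configure.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'min server memory (MB)', 2000;   -- illustrative value
EXEC sp_configure 'max server memory (MB)', 16000;  -- illustrative value
RECONFIGURE;
```

Repeat this per instance so that the sum of the maximum memory values across all instances leaves headroom for the operating system.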
Network considerations
As the loads on consolidated SQL Servers will be greater than on departmental SQL Servers, you will need a high-speed network and a LAN card with fixed settings appropriate to your network. Using NIC teaming will increase overall network throughput.
ABOUT THE AUTHOR: Hilary Cotter, SQL Server MVP, has been involved in IT for more than 20 years as a Web and database consultant, and has worked with SQL Server for 11 years. Cotter is Director of Text Mining at RelevantNoise, dedicated to indexing blogs for business intelligence. Microsoft first awarded Cotter the Microsoft SQL Server MVP award in 2001. Cotter received his bachelor of applied science degree in mechanical engineering from the University of Toronto and studied economics at the University of Calgary and computer science at UC Berkeley. He is author of the book A Guide to SQL Server 2000 Transactional and Snapshot Replication and is currently working on books devoted to merge replication and Microsoft search technologies. Hilary Cotter can be contacted at email@example.com.