Configuring a database system with high writes requires some different techniques than configuring a traditional SQL Server system. Sometimes simply working with the SAN cache will do the trick and yield the best performance. But what about those times when RAID 10 is a requirement? SQL Server MVP Denny Cherry walks you through the steps for setting up a high-write database system, including RAID array layout, cache configuration and file-to-disk layout.
After publishing my tip about disk storage and tuning SQL Server performance via disk arrays and disk partitioning, I received a few comments about how RAID 10 was better for databases than RAID 5. The point was made that RAID 10 supports more writes per disk than RAID 5 because RAID 5 requires calculating the parity bit. When you look at the raw numbers between RAID 5 and RAID 10, that's right: RAID 10 will handle more writes than a RAID 5 array.
But when working in a large enterprise environment you are probably working with SANs and a lot of SAN cache. You can often work with that SAN cache to get the best possible performance out of your database writes by writing to the cache and having the SAN later flush to disk.
However, some databases exist for which even this advanced SAN configuration for SQL Server isn't enough to keep up with the writes, and for those systems RAID 10 is a requirement. While these systems are not the norm, they are becoming more prevalent.
In most databases, a record is written once and then read many times. In these high-write systems, data is often written once but rarely read again until it is archived or deleted. Such systems require a much more robust disk subsystem than your average database server.
RAID array layout
When designing high write database systems, you should look to some of the same basic techniques as a normal database server – with a few tweaks. You will still want to put your database, indexes, logs and tempdb database on separate physical disks from each other. The big difference here is that you want to configure all your RAID arrays as RAID 10, instead of putting your database and indexes on RAID 5 arrays.
You will still want to configure at least one data file per four CPU cores for your database, and you may wish to consider one data file per two CPU cores. Your index filegroup should be set up the same way as your data filegroup, and tempdb should have at least one data file per two CPU cores.
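As a sketch, the layout above might look like the following on a hypothetical 16-core server (16/4 = 4 data files, 4 index files, 16/2 = 8 tempdb files). All database, file and path names here are placeholders, and each drive letter is assumed to be a separate RAID 10 array:

```sql
-- Hypothetical 16-core server: one data file per 4 cores,
-- a separate filegroup for indexes, and the log on its own array.
CREATE DATABASE HighWriteDB
ON PRIMARY
    (NAME = HW_Data1, FILENAME = 'E:\Data\HW_Data1.mdf', SIZE = 10GB),
    (NAME = HW_Data2, FILENAME = 'E:\Data\HW_Data2.ndf', SIZE = 10GB),
    (NAME = HW_Data3, FILENAME = 'E:\Data\HW_Data3.ndf', SIZE = 10GB),
    (NAME = HW_Data4, FILENAME = 'E:\Data\HW_Data4.ndf', SIZE = 10GB),
FILEGROUP Indexes
    (NAME = HW_Idx1, FILENAME = 'F:\Index\HW_Idx1.ndf', SIZE = 5GB),
    (NAME = HW_Idx2, FILENAME = 'F:\Index\HW_Idx2.ndf', SIZE = 5GB),
    (NAME = HW_Idx3, FILENAME = 'F:\Index\HW_Idx3.ndf', SIZE = 5GB),
    (NAME = HW_Idx4, FILENAME = 'F:\Index\HW_Idx4.ndf', SIZE = 5GB)
LOG ON
    (NAME = HW_Log, FILENAME = 'G:\Log\HW_Log.ldf', SIZE = 5GB);

-- tempdb: at least one data file per 2 cores (8 files on 16 cores).
-- Repeat for tempdb2 through tempdb8.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'H:\TempDB\tempdb2.ndf', SIZE = 2GB);
```

Sizes and growth settings will depend entirely on your workload; the point is one file per array so each filegroup gets its own I/O path.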
When dealing with these high write database systems, pay special attention to the cache settings on the LUNs or JBOD RAID arrays. When your system is mostly write, your read cache is mostly unnecessary. If possible, you should disable the read cache, or at a minimum change the cache ratio to somewhere around 10/90 or 20/80 read/write. This maximizes the amount of data the disk subsystem can accept into cache before writes must go directly to disk because the cache is full.
If your disk storage subsystem gives you the option, adjust the high and low watermarks that tell the disk subsystem how much data can get into the write buffer before it starts writing this data to the disk.
By adjusting these settings, using an EMC CLARiiON as an example system, we can see how much additional cache we can have for our LUN:
| Setting | Default Setup | Adjusted Setup |
|---|---|---|
| Cache Ratio (R/W) | 50/50 | 10/90 |
| Total SP Cache | 3 GB | 3 GB |
| Dedicated to LUN | 300 MB | 300 MB |
| Size of Read Cache | 150 MB | 30 MB |
| Size of Write Cache | 150 MB | 270 MB |
With this small setting change, you have greatly increased the amount of data that can be written to the write cache before it fills: the write cache grows from 150 MB to 270 MB, an extra 120 MB (40% of the cache dedicated to the LUN). This lets SQL Server sustain high-speed writes for considerably longer before having to slow down while the storage clears the cache.
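The cache arithmetic above can be sketched as a quick T-SQL calculation; the 300 MB and 90% figures are just the example numbers from the table, not recommendations for any particular array:

```sql
-- Back-of-envelope cache arithmetic using the example numbers above.
DECLARE @LunCacheMB int = 300;            -- cache dedicated to the LUN
DECLARE @WritePct   decimal(4,2) = 0.90;  -- adjusted 10/90 read/write split

SELECT @LunCacheMB * (1 - @WritePct) AS ReadCacheMB,   -- 30
       @LunCacheMB * @WritePct       AS WriteCacheMB;  -- 270
```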
File to disk layout
When laying out physical files, you'll want to structure one physical file per RAID array. When you start working with this configuration, you can quickly run out of drive letters. When that happens, use mount points to present the additional disks to the system. Mount points are disk volumes that are mounted as folders on other physical disks. You access the data through the same drive letter, but the disks are independent of each other and have separate I/O capacity, which allows you to host more than 26 physical drives on a single server.
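Once a volume is mounted as a folder, SQL Server treats it like any other path. As a sketch, assuming E:\Data2 is a mount point for a separate RAID 10 volume (the database and file names are hypothetical):

```sql
-- E:\Data2 is assumed to be a mount point for its own RAID 10 volume,
-- so this file gets independent I/O capacity even though it shares
-- the E: drive letter with other volumes.
ALTER DATABASE HighWriteDB
ADD FILE (NAME = HW_Data5, FILENAME = 'E:\Data2\HW_Data5.ndf', SIZE = 10GB);
```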
While the basic techniques are similar, configuring a very large high write database does require some different techniques than a normal SQL Server system.
ABOUT THE AUTHOR
Denny Cherry has over a decade of experience managing SQL Server, including MySpace.com's over 175-million-user installation, one of the largest in the world. Denny's areas of expertise include system architecture, performance tuning, replication and troubleshooting. He currently holds several Microsoft certifications related to SQL Server and is a Microsoft MVP.