Chapter 2: SQL Server Maintenance and Troubleshooting
Microsoft SQL Server Performance Monitoring and Tuning
Checklist: Maximize SQL Server backup performance
By Greg Robidoux, Edgewood Solutions
The following steps will help you improve the performance of your SQL Server backups. Not every option will be right for your environment, but using as many of these best practices as possible will significantly improve your backup performance.
Increase I/O throughput
SQL Server backups are an I/O-intensive operation. To put it simply, you need to first read and then write the entire database contents. Here are several steps to improve your I/O throughput:
Write to as many disks as possible
Spread the workload across as many disk drives as possible, which could be either a RAID array made up of many drives or multiple RAID arrays.
Read from as many disks as possible
If your database files are spread across several disks, reads can run in parallel, reducing the chance that I/O becomes the bottleneck.
Use separate disks for reading and writing
Reading from one set of disks and writing to a different set keeps the two I/O workloads from competing for the same spindles and improves overall throughput.
Use the fastest RAID configuration possible for reading and writing. For instance, RAID 5 is much slower for writes because each write requires additional parity I/O. RAID 0 is fastest for writing, but it offers no redundancy. Consider using RAID 1+0: the 1 and 0 signify mirroring (1) and striping (0), so you have mirrored pairs of drives and can stripe across several of them for additional I/O throughput. Since a backup primarily reads from the database and writes out the backup file, the write advantage will be most noticeable on the disks storing the backup files.
Use different controllers and/or channels to increase I/O throughput. Also, use the best controllers you can afford. When purchasing a controller, you should look for number of ports, maximum number of drives supported, cache size, battery backup and SCSI protocols supported. Separate the read and write activity across different controllers or channels.
Write data locally
Write to disks that are locally attached instead of writing to network-attached storage. Writing to direct-attached storage eliminates factors outside the server, such as network congestion, that can slow down the backup.
Write to multiple devices
In addition to reading and writing to multiple disks, use as many threads as possible to increase throughput. This can be done either by using Enterprise Manager and selecting multiple output files or by using T-SQL and including multiple files in the backup command. You can also write to different disk subsystems to further increase throughput.
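As a sketch, striping a backup across multiple output files looks like this in T-SQL; the database name and file paths are illustrative, and each DISK clause should ideally point to a separate disk subsystem so the writer threads run in parallel:

```sql
-- Stripe the backup across two files on separate disk subsystems.
-- SQL Server dedicates a writer thread to each output device.
-- 'Sales' and the paths below are placeholders -- substitute your own.
BACKUP DATABASE Sales
TO DISK = 'E:\Backups\Sales_1.bak',
   DISK = 'F:\Backups\Sales_2.bak'
WITH INIT
```

Note that all the files in a striped set are required together at restore time.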
Use third-party tools to compress your backup output. By doing so you can reduce the overall size of the file that is written by up to 90% and cut the backup time in half. Since most data in a database is text data, it is highly compressible.
Write backups to disk
Always write your backups to disk rather than directly to another type of media; short of writing to memory, disk is the fastest backup target.
Fast disk drives
Use the fastest drives that you can get. SCSI drives are still faster than IDE and reach speeds of up to 15,000 RPM.
Using the appropriate database configuration options along with the appropriate backup methods will improve performance.
Match your backup model to your recovery model
Make sure the type of backups you run matches the recovery model set for your database. I have seen many cases where the full recovery model is set, but only full backups are issued. In those instances the transaction log never truncates, grows far larger than the database file, and still gets backed up with each full backup.
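To check whether a database's recovery model matches the backups you run, you can query it directly (MyDB is a placeholder name):

```sql
-- Returns FULL, BULK_LOGGED, or SIMPLE for the named database.
-- DATABASEPROPERTYEX is available in SQL Server 2000 and later.
SELECT DATABASEPROPERTYEX('MyDB', 'Recovery') AS RecoveryModel
```

If this returns FULL but you only ever take full backups, either add transaction log backups or switch the database to the Simple recovery model so the log can truncate.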
Use Bulk Logged recovery model
If you are doing a lot of bulk inserts into your database and can recreate the data if necessary, save transaction log backup time by using the Bulk Logged recovery model. Under this model, bulk load operations are minimally logged, so the transaction log stays much smaller and your transaction log backups run faster.
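A minimal sketch of switching models around a bulk load, assuming a database named MyDB (a placeholder) that normally runs in the Full recovery model:

```sql
-- Switch to Bulk Logged before a large bulk load, then back to Full.
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED
-- ... run the BULK INSERT / bcp / SELECT INTO operations here ...
ALTER DATABASE MyDB SET RECOVERY FULL
-- Take a log backup afterward so the log chain supports
-- point-in-time recovery again. Path is illustrative.
BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_log.trn'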
Use differential backups
If you have very large databases but not much of the database is changing throughout the week, use differential backups to reduce the time it takes to run your backups. With this approach, you can run differential backups each night of the week and full backups on the weekends.
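The weekly-full, nightly-differential schedule described above amounts to two jobs; database name and paths are placeholders:

```sql
-- Weekend job: full backup, the baseline for the week's differentials.
BACKUP DATABASE MyDB TO DISK = 'E:\Backups\MyDB_full.bak' WITH INIT

-- Nightly job: differential backup -- contains only the extents
-- changed since the last full backup, so it runs much faster.
BACKUP DATABASE MyDB TO DISK = 'E:\Backups\MyDB_diff.bak'
WITH DIFFERENTIAL, INIT
```

At restore time you need only the most recent full backup plus the most recent differential.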
Use transaction log backups
A different approach is to run only transaction log backups each day and full backups weekly. This also reduces the time and resources your backups require.
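The daily log backup is a single statement; again the database name and path are placeholders:

```sql
-- Daily (or more frequent) log backup -- small and fast when run
-- regularly, and it also lets the log truncate and reuse space.
BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_log.trn'
```

Restoring requires the last full backup followed by every log backup taken since, applied in order.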
File and filegroup backups
File or filegroup backups allow you to back up part of the database versus the entire database each time. If you have five filegroups, you could back up a different file group each night and do a complete backup at the end of the week. You also need to run transaction log backups in order to restore filegroups.
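A sketch of one night's filegroup backup, with an illustrative filegroup name; the log backup is included because log backups are what let the separately backed-up filegroups be rolled forward to a consistent point on restore:

```sql
-- Back up a single filegroup (name is illustrative):
BACKUP DATABASE MyDB FILEGROUP = 'FG_Monday'
TO DISK = 'E:\Backups\MyDB_FG_Monday.bak' WITH INIT

-- Log backups remain mandatory with this strategy:
BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_log.trn'
```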
A much faster approach is the snapshot backup, which allows you to take a quick snapshot of the database and place the file somewhere else for backup. This is usually offered with high-end hardware solutions.
Use a third-party backup tool that offers compression, such as those from Idera, Quest Software and Red Gate Software, or continuous backup capabilities, such as TimeSpring Software's data protection software. There are also clustering products that maintain a mirror of your entire database on another system without any backups, which improves your ability to keep the system up and running; however, mirroring does not provide long-term data archiving.
Transaction log size
Keep your transaction log size in check. Use DBCC SQLPERF(LOGSPACE) to see how much log space is actually being used, and DBCC SHRINKFILE to reduce the size of the transaction log file.
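For example (the database and logical file names below are placeholders; find your actual logical file name with sp_helpfile):

```sql
-- Show log size and percent of log space used for every database:
DBCC SQLPERF(LOGSPACE)

-- Shrink the log file of MyDB to a target of roughly 100 MB.
-- The first argument is the file's logical name, not its path.
USE MyDB
DBCC SHRINKFILE (MyDB_Log, 100)
```

Shrinking only reclaims space already freed by log truncation, so back up (or truncate) the log first if the shrink has no effect.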
Back up during off hours and low usage time
Schedule your full and differential backups to occur during low usage times. The more resources that SQL Server can provide to the backup, the faster it can complete it.
Don't run all backups at the same time
If you have multiple databases on a server, or if multiple servers share the same disk subsystem, schedule backups to occur at different times. If you have a large environment with many backup jobs, tools from SQL Sentry or Idera can help you better manage your backup schedule.
To increase your backup performance, you have some simple options and some that require you to purchase third-party tools or reconfigure the hardware you currently have. In any event, if you are having backup performance issues, try following some of the suggestions here to relieve your bottlenecks.
ABOUT THE AUTHOR:
Greg Robidoux is the president and founder of Edgewood Solutions LLC, a technology services company delivering professional services and product solutions for Microsoft SQL Server. He has authored numerous articles and has delivered presentations at regional SQL Server users' groups and national SQL Server events. Greg, who also serves as the SearchSQLServer.com Backup and Recovery expert, welcomes your questions.
Copyright 2006 TechTarget
06 Feb 2006