Top 5 SQL Server DBA tasks that are a waste of time

While some DBA tasks are less important than others, a few are completely pointless. Could you be wasting valuable time without even knowing it?

Some tasks that DBAs perform on a regular basis not only provide little to no benefit to SQL Server, but can actually be detrimental to the health of their environments. In this article I'll cover a few such tasks. If you are doing any of these, I hope you'll stop as soon as possible.

#1. Shrinking the database

Daily shrinking of the database is bad for a few reasons. On the technical side, the biggest impact you will see is greatly increased index fragmentation after each database shrink. In addition, shrinking the database files increases both the physical file fragmentation on the disk subsystem and the I/O load on the server, which hurts performance for everything else while the shrink operation is running.

Now it isn't the shrink operation by itself that causes the fragmentation; rather, as the files autogrow and you continue to shrink them, this repeated shrink-and-grow cycle leaves the database more and more fragmented over time.
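
If you want to see the fragmentation effect for yourself, it is easy to measure. Here's a minimal sketch, assuming SQL Server 2005 or later (where the sys.dm_db_index_physical_stats DMV exists), that you can run in the database before and after a shrink:

-- Report average logical fragmentation for every index in the
-- current database; run before and after a shrink to compare.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0   -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;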

If you shrink the log file, there is also the bad side effect of having to regrow it. When the log file fills and needs to autogrow, all operations in the database are paused as the transaction log grows. This could take a second or two on a very busy system, causing all sorts of locking and blocking as processes wait for the transaction log to grow.

The other downside is that when the database maintenance runs again, the files will need to grow, which takes CPU and disk resources to complete. This makes the database maintenance take even longer, especially on SQL Server 2000 and older, or on SQL Server 2005 and later systems that do not have instant file initialization enabled.
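
A better approach is to size the files once for what maintenance actually needs and give them a sensible fixed growth increment, keeping in mind that instant file initialization applies only to data files, never to the transaction log. A minimal sketch, where the database and logical file names are placeholders for illustration:

-- Pre-size the data file and set a fixed growth increment rather
-- than shrinking nightly and paying to regrow the next day.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, SIZE = 130GB, FILEGROWTH = 1GB);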

From the management side, shrinking gives you a false sense of security, since you don't know how much space your database actually needs. In other words, if your database grows from 100 GB to 130 GB every time you run the database maintenance process, and you then shrink it back down to 100 GB, you have no idea how much space the database really requires. Does it need 100 GB or 130 GB? The answer is that it needs 130 GB of space so that it can perform the needed database maintenance. If you shrink it down and then put other data on the disk, you may not have enough space to perform your database maintenance and the job will fail.
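
Instead of guessing, you can ask SQL Server how much of each file is actually in use. A quick sketch for SQL Server 2005 and later:

-- Compare allocated vs. used space per file in the current database.
-- Both size and FILEPROPERTY(...,'SpaceUsed') are in 8 KB pages.
SELECT name,
       size / 128 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;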

#2. Truncating the transaction log

One of the more common setups I see online is the following database maintenance schedule:

Log Backup
Index Rebuild
Full Backup
Truncate Log
Log Backups every 30 minutes

What is actually being done here is that the indexes are rebuilt and a full backup is taken. So far so good, right? The log is then truncated, which breaks the log chain -- making all log backups taken after this point useless until the next full backup is taken. This is because the truncate log step breaks the Log Sequence Number (LSN) chain.

Whenever a transaction occurs, an LSN is written to the transaction log. When a backup is taken, the first and last LSN included in the backup are written to the header of the log backup. When the logs are restored, the LSNs from one log backup to the next must be contiguous. If they are not contiguous, then SQL Server knows that log records are missing and the log backups cannot be restored.
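
You can check whether your own log chain is intact by looking at the backup history SQL Server records in msdb. A minimal sketch, using a placeholder database name:

-- List the LSN ranges of recent backups. For log backups to be
-- restorable, each first_lsn should match the previous backup's
-- last_lsn; a gap means the chain is broken.
SELECT type,               -- D = full, L = log
       backup_finish_date,
       first_lsn,
       last_lsn
FROM msdb.dbo.backupset
WHERE database_name = 'SalesDB'   -- placeholder name
ORDER BY backup_finish_date;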

In this scenario, the full backup can be restored to the database. Unfortunately, the log backups being taken are useless. The first log backup taken after the truncation no longer lines up with the LSN chain established before it, because the truncate log command breaks the LSN sequence.

Another scenario I see quite often is to truncate the log, then perform the full backup. This is better, but not by much. Any transactions between the truncate statement and the next full backup can't be recovered if that full backup turns out to be corrupt. Why? Because you can't restore the full backup from two days prior and then roll all the logs forward -- the truncate log step will still have broken the LSN chain. And yes, switching the database into the simple recovery model does the exact same thing.

If you are truncating your transaction log so that you can shrink it, then please scroll up and reread the section above.

Now if you don't need the transaction log intact but have the database in full recovery, then you should change the database to the simple recovery model. This way the transaction log will not grow, since log entries are overwritten instead of being kept until the next log backup.
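
The change itself is a one-line command (the database name is a placeholder):

-- Switch to the simple recovery model so log space is reused
-- automatically and the log no longer needs regular backups.
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;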

#3. Restoring a full backup to the log shipping target

This is one that hopefully you aren't doing on a daily basis. The first sign that log shipping was set up by someone who doesn't fully understand how the transaction log works is a configuration that restores the full backup to the log shipping target server daily or weekly. This is pointless because the log shipping target already has all the transactions applied to it, and restoring the full backup also wastes bandwidth if your target is in another office or data center.

When you back up the transaction log, everything that has happened to the database since the last log backup is included -- new columns and tables, index rebuilds and so on. By restoring the full backup to catch up on anything that was missed, you are simply dropping the destination database, restoring it to the exact same state, and then applying all the logs that were backed up while the full backup was being restored. All this does is increase the chance that a log backup will be missed.
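
To put that concretely, keeping the log shipping target current is nothing more than restoring each shipped log backup in sequence. A sketch, with a placeholder database name and backup path:

-- On the log shipping target, restore each log backup in order;
-- no full backup is needed once the chain is established.
RESTORE LOG SalesDB
FROM DISK = N'\\backupshare\SalesDB\SalesDB_20091201_0130.trn'
WITH NORECOVERY;   -- or WITH STANDBY to allow read-only access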

#4. Defragging then rebuilding your indexes

As you (hopefully) already know, there are two ways to clean up your database indexes: you can defragment the index with a reorganize operation, or you can do a full rebuild of it. With the new database maintenance plans in SQL Server 2005, however, it becomes very easy to do both.
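
In T-SQL terms, the two operations look like this (the index and table names here are placeholders for illustration):

-- Option 1: online defragmentation of the index pages.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

-- Option 2: a complete rebuild of the index.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;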

While this won't specifically hurt the database, it is a major waste of time (not the database maintenance, but performing both operations against the same index). This is because the end result from both operations is the same thing -- an index which isn't fragmented and has the proper fill factor set for all the database pages.

If you routinely perform an index reorganization followed by an index rebuild, the CPU power and disk I/O you spend on the reorganization are wasted, since the index will be completely rebuilt by the rebuild command anyway. You should do one or the other -- not both. If you aren't sure which one to use, there are plenty of products you can purchase to handle this automatically (Quest's Capacity Manager or Idera's SQL defrag manager, for example), or you can find some of the free scripts available online.
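
If you'd rather roll your own than buy a tool, here's a minimal sketch for SQL Server 2005 and later that applies a common rule of thumb: reorganize indexes with between 5% and 30% fragmentation, and rebuild anything above that. It only generates the commands, so review the output before running it.

-- Generate one maintenance command per fragmented index in the
-- current database, choosing REORGANIZE or REBUILD by threshold.
SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id))
     + '.' + QUOTENAME(OBJECT_NAME(ips.object_id))
     + CASE WHEN ips.avg_fragmentation_in_percent > 30
            THEN ' REBUILD;'
            ELSE ' REORGANIZE;' END AS maintenance_command
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0                           -- skip heaps
  AND ips.avg_fragmentation_in_percent >= 5;     -- leave lightly fragmented indexes alone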

#5. Manually reading through error logs

Many DBAs in smaller shops will take the time to read through error logs daily to look for problems. When you only have one or two servers to deal with, this doesn't take very long. When you start adding in more and more SQL Servers, however, going through these log files manually can start to take a very long time.

You'd be better off coming up with an automated way to read these log files and look for errors. This can save you a lot of time, especially as the log files grow, leaving you free to work on projects that add more to the company's bottom line.

If you have a monitoring solution in place, it probably has a way to read the application log. Any critical error in the ErrorLog file will also be written to the Windows application log. If you don't have any sort of monitoring application, or if it doesn't support reading the error log, you can load the ErrorLog file and/or application log into a table and look for errors.
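
One simple approach is to dump the error log into a temporary table with the sp_readerrorlog system procedure and filter it. A minimal sketch -- the search terms here are just examples, so tune them to your environment:

-- Load the current SQL Server error log into a table and filter it.
-- sp_readerrorlog returns LogDate, ProcessInfo and Text columns.
CREATE TABLE #ErrorLog
(
    LogDate     datetime,
    ProcessInfo nvarchar(64),
    LogText     nvarchar(max)
);

INSERT INTO #ErrorLog
EXEC sp_readerrorlog 0;   -- 0 = current log, 1 = previous, etc.

SELECT LogDate, LogText
FROM #ErrorLog
WHERE LogText LIKE '%error%'
   OR LogText LIKE '%failed%'
ORDER BY LogDate;

DROP TABLE #ErrorLog;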

Remember, while there are lots of daily tasks that add value to your organization, there are others that not only add no value to the business or SQL Server, but may in fact be detracting from the bottom line. It is a good idea to step back, look at each of these tasks to evaluate what it is actually doing, and see whether it provides a real cost benefit (backups, for example) or not (manually reading through log files).

ABOUT THE AUTHOR

Denny Cherry has over a decade of experience managing SQL Server, including MySpace.com's over 175-million-user installation, one of the largest in the world. Denny's areas of expertise include system architecture, performance tuning, replication and troubleshooting. He currently holds several Microsoft certifications related to SQL Server and is a Microsoft MVP.

This was first published in December 2009
