
Monitoring server disk space in SQL Server

Monitoring your SQL Server database files is an essential task for every DBA. Avoid downtime and data loss with the three methods described here for monitoring auto-growth of database files.

DBAs must monitor their SQL Servers carefully to ensure that the system and user databases have sufficient disk space for the lifetime of the applications using them. Failure to do so can result in the following problems:

  • All SQL Server jobs fail: if the msdb database has no room to grow, job information cannot be logged and none of your SQL Server jobs will start.
  • Sorts, aggregates and other operations that make heavy use of tempdb may fail if the tempdb data or log files cannot grow to accommodate them.
  • The transaction log may fill up, causing all DML operations on your database to fail and point-in-time recovery to be lost.
  • The database files themselves may fill up, causing all DML operations on your database to fail.
  • A database whose transaction log or data files are growing out of control may fill the entire disk, causing other databases or applications to fail.
Steps for monitoring disk space in SQL Server

   Auto-growth settings for logs and files
   Monitoring database file sizes
   Using SQL Server trace files
   Performance monitoring in SQL Server

 
Auto-growth settings for logs and files

SQL Server's log and data files grow according to the auto-growth settings that new databases inherit from the model database.

There are three options for auto growth:

  • Enabled/disabled (the default for both log and data files is enabled).
  • Growth in percent/growth in megabytes (the default for log files is 10% and the default for data files is 1 MB).
  • Maximum file size/unrestricted growth (the default for both log and data files is unrestricted growth).
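
On SQL Server 2005 you can check the current settings for every database file through the sys.master_files catalog view; a sketch:

-- Inspect current auto-growth settings for every database file.
-- growth is in 8-KB pages unless is_percent_growth = 1; a max_size
-- of -1 means unrestricted growth.
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       type_desc, -- ROWS or LOG
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting,
       CASE WHEN max_size = -1
            THEN 'unrestricted'
            ELSE CAST(max_size * 8 / 1024 AS varchar(10)) + ' MB'
       END AS max_size_setting
FROM sys.master_files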

Most DBAs accept these defaults. However, relying on auto-growth for your SQL Server database files is not a good idea, because:

  • While the data and log files are growing, transactions are serialized, which causes performance problems for applications using the database.
  • Constant auto-growth fragments your database and log files.
  • A log or data file growing out of control can consume all the available space on a drive and cause space problems for other databases or applications.
  • When you run out of space on a drive, you typically have to add an additional data or log file on another drive, which can cause performance or logistical problems. It can also force a file onto a drive that is not optimized for the I/O activity expected of it. For example, you might be forced to place database files on a RAID 10 drive dedicated to transaction logs, causing unnecessary I/O contention with the logs.

The best practice is to size your data and log files so that the number of auto-growths is minimized. For example, if you expect your database to grow 10% each year, you might size it 30% larger to allow plenty of room to begin with.

For data files, the best practice is not to use growth in percent but growth in megabytes, and to select a value that suits your database. For example, a small database might set its growth to 1 GB, whereas a very large database (VLDB) may set it to 10 GB or more. The reason is that the default 10% growth on a 400 GB database is 40 GB, a large, uncontrolled jump at any one time. Setting growth to 1 GB or 2 GB keeps auto-growths much more controlled.
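
The growth increment can be changed with ALTER DATABASE ... MODIFY FILE. A sketch, where the database and logical file names (SalesDB, SalesDB_Data) and the sizes are placeholders:

-- Switch a data file from percent growth to a fixed 1 GB increment.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, FILEGROWTH = 1024MB)

-- Pre-size the file generously so auto-growth stays rare.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, SIZE = 51200MB)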

To size your transaction log file, you should:

    1. Shrink the log as much as possible.
    2. Back up (dump) the log every 10-20 minutes.
    3. Monitor its growth over a week, noting the maximum size.
    4. Lastly, dump the log, shrink it one more time to the minimum size, and then manually size it to the observed maximum.

Sizing the log manually in this way prevents too many virtual log files, which can lead to performance problems; a sketch of these steps follows.
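
Here is a sketch for a hypothetical SalesDB database with log file SalesDB_Log (the names, path and final size are placeholders):

-- 1. Back up (dump) the log and shrink it as far as possible.
BACKUP LOG SalesDB TO DISK = 'E:\Backups\SalesDB_log.trn'
DBCC SHRINKFILE (SalesDB_Log, 1)

-- 2./3. Schedule the log backup every 10-20 minutes for a week and
--       record the maximum size the log file reaches.

-- 4. Dump the log, shrink it once more, then size it in a single
--    step to the observed maximum (4096 MB is an example value).
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Log, SIZE = 4096MB)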

 
Monitoring database file sizes

There are two parts to proactive monitoring of SQL Server database sizes:

  • Existing size
  • Auto growths

You want to know how full your disks are currently, how full the database and log files are, and which ones are experiencing auto growth.

Xp_fixeddrives will give the amount of free space available on each local fixed drive. To get a more complete picture of your disk sizes, you need something like the procedure linked below.
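
For the quick check, xp_fixeddrives takes no arguments:

-- Returns one row per local fixed drive: drive letter and MB free.
EXEC master.dbo.xp_fixeddrives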

Note: For SQL Server 2005 you will need to enable Ole Automation Procedures (an advanced option) first:

sp_configure 'show advanced options', 1
reconfigure
sp_configure 'Ole Automation Procedures', 1
reconfigure with override

Then create and run this procedure:
Click here to view and/or download the procedure.

I find the query below provides most of the information I need to collect about my servers' databases:
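
As a rough sketch of such a query (this uses the undocumented sp_MSforeachdb procedure and the SQL 2000/2005-era sysfiles catalog, and is an illustration rather than necessarily the exact query):

-- Collect file details for every database; the columns line up with
-- the DatabaseFileSizes table below. sysfiles reports size, maxsize
-- and growth in 8-KB pages (or percent, depending on the status bits).
EXEC sp_MSforeachdb 'USE [?];
SELECT ''?'' AS DatabaseName,
       name AS DatabaseFileName,
       fileid,
       LEFT(filename, 1) AS drive,
       filename,
       FILEGROUP_NAME(groupid) AS [filegroup],
       size, maxsize, growth,
       CASE WHEN status & 0x40 <> 0 THEN ''log only''
            ELSE ''data only'' END AS usage
FROM dbo.sysfiles'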


These results can be piped into a table with a structure like this one:

 CREATE TABLE [dbo].[DatabaseFileSizes] (
     [DatabaseName]     [nvarchar](128),
     [DatabaseFileName] [sysname],
     [fileid]           [smallint] NULL,
     [drive]            [nvarchar](1),
     [filename]         [nvarchar](260),
     [filegroup]        [nvarchar](128),
     [size]             [nvarchar](18),
     [maxsize]          [nvarchar](18),
     [growth]           [nvarchar](18),
     [usage]            [varchar](9)
 ) ON [PRIMARY]

You can sum by drive to determine the total space consumed by all of your database files. Run the script above at a predefined interval to determine when you are getting dangerously low on space.
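
For example, assuming the DatabaseFileSizes table above (sysfiles sizes are 8-KB pages, so multiply by 8 and divide by 1,024 for megabytes):

-- Total space consumed per drive by all database files.
SELECT drive,
       SUM(CAST(size AS int)) * 8 / 1024 AS total_mb
FROM dbo.DatabaseFileSizes
GROUP BY drive
ORDER BY drive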

It is wise to monitor your database files to see which ones are filling up and approaching the point where they will auto-grow. Sp_spaceused is ideal for this. Detecting the auto-growths themselves is more complex.
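
Run in the context of the database being checked, sp_spaceused reports the overall database size and the space not yet allocated:

-- Current database size and unallocated space.
USE SalesDB -- placeholder name
EXEC sp_spaceused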

Detecting auto-growths

There are two tools for doing this:

  • Trace files
  • Performance monitor

 
Using SQL Server trace files

I find trace files the best option, as they can detect data and log file auto-growths in real time. The script linked below illustrates how to set up your trace:
Click here to view and/or download the script.
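
As a sketch of what such a trace setup involves (an illustration, not necessarily the linked script): the server-side trace procedures capture events 92 and 93, Data File Auto Grow and Log File Auto Grow, with the columns the reading query below expects.

-- Create a server-side trace that captures auto-grow events.
DECLARE @traceid int, @maxsize bigint, @on bit
SET @maxsize = 50 -- roll over at 50 MB
SET @on = 1

-- Option 2 = TRACE_FILE_ROLLOVER; SQL Server appends .trc itself.
EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Autogrow', @maxsize, NULL

-- Columns: 3 = DatabaseID, 12 = SPID, 13 = Duration,
-- 14 = StartTime, 26 = ServerName.
EXEC sp_trace_setevent @traceid, 92, 3, @on
EXEC sp_trace_setevent @traceid, 92, 12, @on
EXEC sp_trace_setevent @traceid, 92, 13, @on
EXEC sp_trace_setevent @traceid, 92, 14, @on
EXEC sp_trace_setevent @traceid, 92, 26, @on
EXEC sp_trace_setevent @traceid, 93, 3, @on
EXEC sp_trace_setevent @traceid, 93, 12, @on
EXEC sp_trace_setevent @traceid, 93, 13, @on
EXEC sp_trace_setevent @traceid, 93, 14, @on
EXEC sp_trace_setevent @traceid, 93, 26, @on

-- Start the trace and report its handle.
EXEC sp_trace_setstatus @traceid, 1
SELECT @traceid AS trace_handle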

The following allows you to read your trace interactively.

Note: The setup script displays a number representing a handle to your trace. Enter that handle below; in my case it was 1.

SELECT * FROM fn_trace_getinfo(1)

SELECT databaseid, spid, duration, starttime, servername,
       events = CASE WHEN eventclass = 92
                     THEN 'database ' + DB_NAME(databaseid) + ' data file increased'
                     ELSE 'database ' + DB_NAME(databaseid) + ' log file increased'
                END
FROM fn_trace_gettable('C:\Autogrow.trc', 1)
WHERE eventclass IN (92, 93)

You can run this query once a minute and then handle the log and data file auto-grow events as you see fit, e.g. by email or pager.

 
Performance monitoring in SQL Server

You can only use Performance Monitor to monitor log auto-growths; there is no counter for data file growths. Use it to watch the SQLServer:Databases object's counters: Log Growths, Percent Log Used and Data File(s) Size (KB).

Performance Monitor allows you to watch specific databases or all databases and, if necessary, raise alerts that send net messages or write events to the Windows NT application log, which monitoring software (such as NetIQ) can then pick up and react to.
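
On SQL Server 2005 the same counters can also be read from T-SQL through the sys.dm_os_performance_counters view, which is easy to schedule; a sketch (the object name prefix differs for named instances, e.g. MSSQL$INSTANCE:Databases):

-- Read the auto-growth-related counters for every database.
SELECT instance_name AS database_name,
       counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name IN ('Log Growths', 'Percent Log Used',
                       'Data File(s) Size (KB)')
ORDER BY instance_name, counter_name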

Summary

Monitoring your SQL Server database files is an essential task for every DBA; failure to do so can result in downtime and data loss, so DBAs must be proactive in monitoring their database file sizes. In this article we looked at three ways of monitoring auto-growth of database files; choose the method that works for you. The trace method provides real-time monitoring of auto-growths, but it must be used in conjunction with a method that gathers information on available disk space, such as the T-SQL method discussed above. Performance Monitor allows you to monitor log file auto-growths and database file size, and can raise alerts when available space gets low.

ABOUT THE AUTHOR 
Hilary Cotter has been involved in IT for more than 20 years as a Web and database consultant. Microsoft first awarded Cotter the Microsoft SQL Server MVP award in 2001. Cotter received his bachelor of applied science degree in mechanical engineering from the University of Toronto and studied economics at the University of Calgary and computer science at UC Berkeley. He is the author of a book on SQL Server transactional replication and is currently working on books on merge replication and Microsoft search technologies.
Copyright 2007 TechTarget
