
SQL Server database capacity planning: A head-on approach

IT administrators often don’t give a lot of thought to SQL Server database capacity planning -- until the database fills up. But managing database capacity now can help cure headaches later.

Yawn. What time is it? Three a.m.? Why is your cell phone buzzing? Oh, the database at work filled up. Great; go fix that. You might as well have a cup of coffee while you’re at it, since the alarm will go off in a few hours.

With today’s fast-growing data needs, it’s not unusual for databases to run out of space, requiring some manual attention. Most database administrators just set their databases for automatic growth -- which isn’t optimal for performance. There are better approaches to SQL Server database capacity planning.

Don’t rely on auto-growth. It’s OK to leave auto-growth on, but remember: It’s just a failsafe. In fact, every time it kicks in, kick yourself, because you’ve just accepted a decent performance hit. You’ve also made it easier for database files to become fragmented, since auto-growth allocates hunks of disk space at a time.
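To see how your files are currently set to grow, you can query `sys.database_files` in each database; the sketch below also shows one way to change a percentage-based setting to a fixed increment (the database and file names are hypothetical, and the 512 MB increment is just an illustrative failsafe value):

```sql
-- Check each file's growth setting in the current database.
-- is_percent_growth = 1 means growth is a percentage; otherwise
-- "growth" is in 8 KB pages (growth * 8 / 1024 = MB).
SELECT name,
       physical_name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' percent'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;

-- Hypothetical names (SalesDB, SalesDB_Data): set a fixed 512 MB
-- increment as the failsafe, which is more predictable than a
-- percentage on a large file.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, FILEGROWTH = 512MB);
```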

Instead, manage capacity proactively. Keep an eye on available space and data growth trends and perform the right maintenance steps -- which I’ll get to -- to keep the database from using more space than it needs. Set auto-growth to a decent percentage of space, too, so you won’t need to rely on auto-growth more than once a day or week. That way, if auto-growth starts up, you’ll have time to manually manage the database before further growth is needed.
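One simple way to keep that eye on available space is the `FILEPROPERTY` function, which reports pages actually in use per file; logging this query's output on a schedule gives you the growth trend. A minimal sketch:

```sql
-- Free space inside each file of the current database.
-- FILEPROPERTY returns used space in 8 KB pages.
SELECT name,
       size * 8 / 1024 AS file_size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS space_used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024 AS free_mb
FROM sys.database_files;
```

When free_mb trends toward zero faster than your maintenance schedule, it’s time to grow the files manually.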

Grow in a maintenance window. The best time to grow a database is during an offline maintenance window, so add enough space to last until the next available window. During the maintenance window, allocate new space to the database files, close the database and let Windows defragment the New Technology File System (NTFS) file system on which the database sits.
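Allocating that new space is a one-line operation. In this sketch the database and file names are hypothetical, and 50 GB stands in for whatever size your growth trend says will last until the next window; `SIZE` must be larger than the file's current size:

```sql
-- Pre-grow the data file to 50 GB during the maintenance window
-- so auto-growth isn't needed before the next window.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, SIZE = 51200MB);
```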

Ideally, a database file -- whether it’s an MDF primary file, an NDF secondary, or an LDF log file -- should occupy contiguous disk space in NTFS’ logical map of the storage. When your database is on a storage area network or RAID array, it’s ideal to have the database occupy contiguous physical sectors, because it’s easier for the storage subsystem to move back and forth within the database file structure.

Combat internal fragmentation. Indexes can cause internal fragmentation within the database files. As index pages fill, SQL Server must split them, creating new pages at the end of the database file to store new data. That fragmentation can waste space within the database file, so rebuild your indexes periodically. Rebuilding essentially deletes the old index and builds a fresh one, leaving a configurable amount of space for new data to be written without splitting pages again.
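That configurable amount of free space is the fill factor. A sketch of a periodic rebuild, using hypothetical table and index names, with 10 percent of each leaf page left empty for new rows:

```sql
-- Rebuild one index, leaving each leaf page 10 percent empty
-- (FILLFACTOR = 90) so inserts don't immediately split pages again.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);

-- Or rebuild every index on the table in one statement:
ALTER INDEX ALL ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);
```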

You’ll also want to periodically defragment the internal database structure. This is much like defragmenting a hard disk: All the pages associated with particular database elements, such as indexes or data pages, will be reorganized into contiguous streams. You’ll pick up a decent amount of space, since data can also be reorganized across pages, so fewer pages are used. Fewer pages means more free space for growth inside the database files.
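To decide which indexes need this treatment, SQL Server exposes fragmentation through the `sys.dm_db_index_physical_stats` management view; `REORGANIZE` then compacts pages in place as a lighter-weight, online alternative to a full rebuild. The index name below is hypothetical, and the 10 percent threshold is just a common starting point, not a hard rule:

```sql
-- Find indexes in the current database with noticeable
-- logical fragmentation.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10;

-- Compact and reorder a fragmented index in place (online).
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;
```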

Know when to shrink. There are two approaches to database shrinking. The first is the actual shrink operation: SQL Server reduces the size of database files, returning unused disk space to the operating system. This operation is often accompanied by defragmentation, which moves all free space to the end of the database file structure so that more space can be returned to the operating system. You probably won’t do this very often; it’s typically a reaction to a database that is no longer getting a lot of new data and is consuming disk space that’s needed elsewhere.
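When that rare occasion arrives, the shrink itself is done with `DBCC SHRINKFILE`. A sketch with a hypothetical file name and an illustrative 10 GB target:

```sql
-- Shrink the data file toward a 10 GB target (in MB), returning
-- freed pages to the operating system. Shrinking moves pages
-- around and fragments indexes, so if you must shrink, follow
-- it with an index rebuild.
DBCC SHRINKFILE (N'SalesDB_Data', 10240);
```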

Another kind of shrinking can occur with an index rebuild. When you specify a high fill factor, index pages are rewritten with less free space for new data, meaning the index itself gets crammed onto fewer pages. This usually improves database read performance and frees up space inside the database structure. A high fill factor is a bad idea for databases that are gaining a lot of new data, though, because those mostly full pages will need to be split sooner rather than later, and splitting creates fragmentation and slows read and write performance.

It’s tempting to let your SQL Server databases go it alone, ignoring capacity issues until the server’s physical disks start to fill up -- at which point the problem pretty much requires attention. But by actively managing capacity within the database files themselves, you can improve performance and get better utilization out of your database.

Don Jones is a Microsoft Most Valuable Professional and a senior partner and principal technologist at Concentrated Technology, an IT education, strategic consulting and analysis firm.

This was last published in January 2012
