
Can you shrink your SQL Server database to death?

In an effort to free up disk space, SQL Server DBAs often turn to shrinking the database as part of their SQL Server maintenance plan. When they do, the database often needs to grow again to accommodate newly inserted data. But what are the performance impacts of this growth? In this tip, SQL Server expert Michelle Gutzait examines what really happens when a database file grows as the result of a transaction. Take a look at these test results involving autogrowth and see if the shrink database task is really necessary.

Part 1 | Part 2 | Part 3

Too often I see people creating SQL Server maintenance plans and customized jobs for shrinking the database files (data and/or T-Log) periodically. I always recommend to my clients and colleagues not to shrink database files, especially on a regular basis, before they investigate the cause of the database's growth.

I had this discussion a few times already with people who are more concerned about disk space than about performance, so I decided to test it in order to prove to myself and others what the SQL Server performance impacts are when a database file grows as a result of a transaction.

I am about to test a few growth scenarios:

  1. Database data file growth
  2. Database T-log growth
  3. Tempdb growth

This is the first tip in the series and examines database data file growth in SQL Server. 

TESTING ENVIRONMENT

Hardware and software configuration

The hardware and software configuration of the system where I am performing the tests is as follows:

Figure 1: Hardware and software configurations for test system.

SQL Server 2005 SP2, Enterprise Evaluation Edition.
The computer has one local iSCSI LUN.

The database configuration

I created a database named ShrinkDB with the following database files and properties:

Figure 2: ShrinkDB database files and properties.

The following query:

select size from sysfiles where fileid = 1

Returns: 256
This is the size of the file in 8-KB pages: 256 * 8 KB = 2 MB.
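
For convenience, the same query can report the size in megabytes directly; a small sketch, assuming nothing beyond the 8-KB page size noted above:

```sql
-- sysfiles stores size in 8-KB pages; convert to MB for readability
select name,
       size            as size_in_pages,
       size * 8 / 1024 as size_in_mb
from sysfiles
```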

In this database, I created the following table:

create table ExpandDB (a varchar(8000))

Objective

My goal is to test performance based on the size of the transaction and the data file autogrowth ratio:

  1. Test 1 – small autogrowth, short transactions, X amount of rows inserted
  2. Test 2 – small autogrowth, short transactions, X*Y amount of rows inserted (more rows inserted than in Test 1)
  3. Test 3 – bigger autogrowth, short transactions, X*Y amount of rows inserted (identical to the amount inserted in Test 2)
  4. Test 4 – small autogrowth, X*Y rows inserted in one transaction
  5. Test 5 – bigger autogrowth (like in Test 3), X*Y rows inserted in one transaction
  6. Test 6 – very big autogrowth, short transactions
  7. Test 7 – very big autogrowth, large transaction

I will analyze the differences in the results and try to come up with a definitive conclusion.

Note: Concurrency is excluded from the tests.

Methods and code description

For each test, I determine an initial file size and an autogrowth ratio of the database file. For example: initial size = 2 MB, autogrowth = 1 MB.

I also have a target file size for the database file, so the first loop to be executed is supposed to insert rows until at least the target size for the data file is reached:

while (select size from sysfiles where fileid = 1) <= <X>
insert into ExpandDB select replicate('a',8000)

This loop inserts 8000-byte rows into the ExpandDB table, row by row, until the database file grows beyond X. The ExpandDB table is a heap table, with no index or constraint, to avoid any special optimization or index overhead.

After running the first loop and reaching the target file size, for comparison I need to execute the same amount of inserts but without growing the file. For that reason, I have to truncate the table and clean up the transaction log – to make sure the T-Log does not grow either.

The following query:

select count(*) from ExpandDB

Returns the amount of rows inserted in the first loop, and

select size from sysfiles where fileid = 1

Returns the new database data file's size.

The following will truncate the table and T-Log:

truncate table ExpandDB
go
backup log ShrinkDB with truncate_only
go

The next step is to execute the same insert commands:

declare @i int
set @i = 1
while @i <= <Z>
begin
insert into ExpandDB select replicate ('a',8000)
set @i = @i + 1
end

Where <Z> is the number of rows that the first loop inserted into the table.

Another important step is to clean the cache before executing the first and second loops, in order to have a fair performance comparison and to eliminate inequality related to caching:

DBCC DROPCLEANBUFFERS
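
A common refinement, not shown in the snippet above, is to issue a CHECKPOINT first so that dirty pages are flushed to disk before the clean buffers are dropped; otherwise pages modified by the previous loop may survive in cache:

```sql
-- Flush dirty pages first, then drop the clean buffers,
-- so each loop starts with a genuinely cold buffer cache
checkpoint
dbcc dropcleanbuffers
```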

And last, each test will be executed three times in a row (three iterations) to make sure the results are consistent.

Hence, the first step in each iteration is to bring the data file back to the desired initial size, for example by shrinking the database file:

DBCC SHRINKFILE (N'ShrinkDB' , 0, TRUNCATEONLY)

Performance monitoring tools

The performance is monitored using SQL Profiler.

Results summary

The summary of the results is represented in a tabular comparison of CPU, reads, writes and duration between the first and the second loops – first with file growth and then without file growth.

There are two lines of comparison: percentage improvement ((<first loop> - <second loop>) / <first loop> * 100) and absolute difference (<first loop> - <second loop>). A positive value means the second loop (without file growth) performed better; a negative value means the opposite. At the end of each comparison table, an averages row shows the average of the three iterations.
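
As a worked example of these two comparison lines, take the CPU column of Test 1's first iteration (first loop = 2031, second loop = 750):

```sql
-- The two comparison lines, computed from Test 1 / iteration 1 CPU values
declare @first int, @second int
set @first = 2031   -- first loop (file grows)
set @second = 750   -- second loop (file does not grow)
select (@first - @second) * 100.0 / @first as pct_improvement, -- 63.07...
       @first - @second                    as difference       -- 1281
```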

At the end of this article, a table compares all of the averages of all the tests.

TEST RESULTS:

Test 1

Initial file size = 256,
Target file size = 8448,
Small transactions (1 row at a time),
Total number of rows: 8160,
File growth by 1 MB

The target file size of 8448 pages is 33 times the original size of 256.

I run the following code a few times:

Figure 3: The Test 1 script.
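
Since the script itself does not survive here, the Test 1 harness can be sketched from the fragments shown earlier. The target size (8448 pages), the row count (8160) and the individual statements come from the text; their exact ordering and the GO batching are my assumptions:

```sql
use ShrinkDB
go
-- Reset to the initial size and start with a cold cache
dbcc shrinkfile (N'ShrinkDB', 0, TRUNCATEONLY)
dbcc dropcleanbuffers
go
-- Loop 1: insert row by row until the data file grows past 8448 pages
while (select size from sysfiles where fileid = 1) <= 8448
insert into ExpandDB select replicate('a',8000)
go
select count(*) from ExpandDB   -- returns 8160: the <Z> for loop 2
go
-- Reset: empty the table and the T-Log, but keep the enlarged data file
truncate table ExpandDB
go
backup log ShrinkDB with truncate_only
go
dbcc dropcleanbuffers
go
-- Loop 2: the same number of inserts, this time without any file growth
declare @i int
set @i = 1
while @i <= 8160
begin
insert into ExpandDB select replicate('a',8000)
set @i = @i + 1
end
```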

Here are SQL Profiler results of three consecutive executions of the above code (I removed the "debugging" queries for convenience):

Figure 4: SQL Profiler results from three executions.

Summary of the results:

ITERATION Step CPU Reads Writes Duration (ms)
1 File grows 2031 69850 8222 13045
  File does NOT grow 750 55234 8210 7999
  % improvement 63.07237814 20.92483894 0.145949891 38.68148716
  Difference 1281 14616 12 5046
2 File grows 1859 69853 8218 12735
  File does NOT grow 1078 55533 8210 7230
  % improvement 42.01183432 20.50019326 0.097347286 43.22732627
  Difference 781 14320 8 5505
3 File grows 1985 69853 8221 13150
  File does NOT grow 1360 55316 8210 8324
  % improvement 31.4861461 20.81084563 0.133803674 36.69961977
  Difference 625 14537 11 4826
           
  Average % improvement 45.52345285 20.74529261 0.125700284 39.5361444
  Average difference 895.6666667 14491 10.33333333 5125.666667

Test 2

Initial file size = 256,
Final file size = 34816,
Small transactions (1 row at a time),
Total number of rows: 33373,
File growth by 1 MB
 

Now let's test a higher number of inserts. The code:

Figure 5: SQL code to test a higher number of inserts.


Summary of the results:

ITERATION Step CPU Reads Writes Duration (ms)
1 File grows 6730 285208 33618 56148
  File does NOT grow 4250 225591 33567 35003
  % improvement 36.84992571 20.9029901 0.151704444 37.65940016
  Difference 2480 59617 51 21145
2 File grows 7031 285223 33619 53373
  File does NOT grow 4204 225364 33564 32472
  % improvement 40.20765183 20.98673669 0.163597965 39.16024956
  Difference 2827 59859 55 20901
3 File grows 6453 285278 33618 54189
  File does NOT grow 3844 225362 33564 33530
  % improvement 40.43080738 21.00267108 0.160628235 38.1239735
  Difference 2609 59916 54 20659
           
  Average % improvement 39.16279497 20.96413262 0.158643548 38.31454107
  Average difference 2638.666667 59797.33333 53.33333333 20901.66667

Test 3

Initial file size = 256,
Final file size = 34816,
Small transactions (1 row at a time),
File growth by 10 MB

In this test, the code is the same as in Test 2, but the database data file is now configured to autogrow by 10 MB.

Summary of the results:

ITERATION Step CPU Reads Writes Duration (ms)
1 File grows 8907 283930 33588 41354
  File does NOT grow 6297 225358 33565 37755
  % improvement 29.30279555 20.62902828 0.068476837 8.702906611
  Difference 2610 58572 23 3599
2 File grows 9078 283902 33588 46607
  File does NOT grow 5578 216971 33560 41202
  % improvement 38.55474774 23.57538869 0.083363106 11.59697041
  Difference 3500 66931 28 5405
3 File grows 9016 283909 33587 52515
  File does NOT grow 6015 225358 33565 38807
  % improvement 33.28527063 20.62315742 0.065501533 26.10301819
  Difference 3001 58551 22 13708
           
  Average % improvement 33.71427131 21.60919146 0.072447159 15.46763174
  Average difference 3037 61351.33333 24.33333333 7570.666667

Test 4

Initial file size = 33664,
Final file size = 66944,
Large transaction (33373 rows),
File growth by 1 MB

In this test, I insert the rows of the ExpandDB table back into the table in one large transaction (33373 rows at once). File growth is set to 1 MB, and I had to increase the T-Log file to 200 MB so that it would not grow while the transaction executed. I am testing how file growth affects large transactions.

The code:

Figure 6: SQL code to test how file growth affects large transactions.

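
The shape of the large transaction can be inferred from the before and after file sizes (the data file roughly doubles): a single INSERT ... SELECT that feeds the table back into itself. This is my reading of the test, not the author's published script:

```sql
-- One large transaction: re-insert all 33373 existing rows in a single
-- statement, roughly doubling the data file (33664 -> 66944 pages)
insert into ExpandDB
select a from ExpandDB
```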

Summary of the results:

ITERATION Step CPU Reads Writes Duration (ms)
1 File grows 3969 880471 33381 36097
  File does NOT grow 3720 879426 33380 18104
  % improvement 6.273620559 0.118686476 0.002995716 49.84624761
  Difference 249 1045 1 17993
2 File grows 3750 880473 33381 35962
  File does NOT grow 3578 879435 33380 20006
  % improvement 4.586666667 0.117891179 0.002995716 44.36905623
  Difference 172 1038 1 15956
3 File grows 3657 880471 33381 36544
  File does NOT grow 3999 879422 33380 18802
  % improvement -9.35192781 0.119140778 0.002995716 48.54969352
  Difference -342 1049 1 17742
           
  Average % improvement 0.502786472 0.118572811 0.002995716 47.58833245
  Average difference 26.33333333 1044 1 17230.33333

Test 5

Initial file size = 33664,
Final file size = 66944,
Large transaction (33373 rows),
File growth by 10 MB

In this test, the code is identical to the one running in Test 4, but the file autogrowth is set to 10 MB.

Summary of the results:

ITERATION Step CPU Reads Writes Duration (ms)
1 File grows 4016 879535 33381 23003
  File does NOT grow 4016 879432 33380 21177
  % improvement 0 0.011710734 0.002995716 7.938095031
  Difference 0 103 1 1826
2 File grows 3672 879531 33381 22501
  File does NOT grow 3673 879439 33380 19965
  % improvement -0.027233115 0.01046012 0.002995716 11.2706102
  Difference -1 92 1 2536
3 File grows 3798 879544 33381 22366
  File does NOT grow 3782 879426 33380 18702
  % improvement 0.421274355 0.013416043 0.002995716 16.38200841
  Difference 16 118 1 3664
           
  Average % improvement 0.13134708 0.011862299 0.002995716 11.86357121
  Average difference 5 104.3333333 1 2675.333333

Test 6

Initial file size = 256,
Final file size = 128256,
Small transactions (1 row at a time),
Total number of rows: 33373,
File growth by 1,000 MB

In this test, I insert 33373 rows, once letting the file grow and once without growth.

The code:

Figure 7: Test to allow the SQL data file to grow.


Summary of the results:

ITERATION Step CPU Reads Writes Duration (ms)
1 File grows 3343 217006 33382 19335
  File does NOT grow 3500 216989 33383 18020
  % improvement -4.696380497 0.007833885 -0.002995626 6.801137833
  Difference -157 17 -1 1315
2 File grows 3344 217006 33385 30126
  File does NOT grow 3359 217001 33378 16499
  % improvement -0.448564593 0.002304084 0.0209675 45.23335325
  Difference -15 5 7 13627
3 File grows 3578 217006 33382 16884
  File does NOT grow 3234 216989 33382 21183
  % improvement 9.61430967 0.007833885 0 -25.46197584
  Difference 344 17 0 -4299
           
  Average % improvement 1.489788193 0.005990618 0.005990625 8.857505083
  Average difference 57.33333333 13 2 3547.666667

Test 7

Initial file size = 34816,
Final file size = 1314816,
Large transactions (33373 rows),
File growth by 1,000 MB

The code:

Figure 8: Test to allow no SQL data file growth.


Summary of the results:

ITERATION Step CPU Reads Writes Duration (ms)
1 File grows 3548 867705 45151 24096
  File does NOT grow 3250 878488 34317 20707
  % improvement 8.399098083 -1.242703453 23.99503887 14.06457503
  Difference 298 -10783 10834 3389
2 File grows 3579 879331 33542 21753
  File does NOT grow 3765 872841 39965 22775
  % improvement -5.196982397 0.738061094 -19.14912647 -4.698202547
  Difference -186 6490 -6423 -1022
3 File grows 3781 879411 33452 25014
  File does NOT grow 3422 870858 41918 21235
  % improvement 9.494842634 0.972582786 -25.30790386 15.10753978
  Difference 359 8553 -8466 3779
           
  Average % improvement 4.23231944 0.155980142 -6.82066382 8.157970755
  Average difference 157 1420 -1351.66667 2048.666667

Conclusion:

Analyzing the averages summary:

Test 1 – small tran, small autogrowth
 

  CPU Reads Writes Duration (ms)
Average % improvement 45.52345285 20.74529261 0.125700284 39.5361444
Average difference 895.6666667 14491 10.33333333 5125.666667

Test 2 – small tran, small autogrowth, more rows
 

  CPU Reads Writes Duration (ms)
Average % improvement 39.16279497 20.96413262 0.158643548 38.31454107
Average difference 2638.666667 59797.33333 53.33333333 20901.66667

Test 3 – small tran, bigger autogrowth, # rows like Test 2
 

  CPU Reads Writes Duration (ms)
Average % improvement 33.71427131 21.60919146 0.072447159 15.46763174
Average difference 3037 61351.33333 24.33333333 7570.666667

Test 4 – large trans, small autogrowth, # rows like Test 2
 

  CPU Reads Writes Duration (ms)
Average % improvement 0.502786472 0.118572811 0.002995716 47.58833245
Average difference 26.33333333 1044 1 17230.33333

Test 5 – large trans, bigger autogrowth, # rows like Test 2
 

  CPU Reads Writes Duration (ms)
Average % improvement 0.13134708 0.011862299 0.002995716 11.86357121
Average difference 5 104.3333333 1 2675.333333

Test 6 – small tran, large autogrowth, # rows like Test 2
 

  CPU Reads Writes Duration (ms)
Average % improvement 1.489788193 0.005990618 0.005990625 8.857505083
Average difference 57.33333333 13 2 3547.666667

Test 7 - large trans, large autogrowth, # rows like Test 2
 

  CPU Reads Writes Duration (ms)
Average % improvement 4.23231944 0.155980142 -6.82066382 8.157970755
Average difference 157 1420 -1351.66667 2048.666667

The results show that performance suffers when a data file autogrows during a transaction, especially when transactions are short and the autogrowth increment is small. The larger the autogrowth increment, the better the performance; and the longer the transaction, the less it is affected by autogrowth. So before you shrink your database files, ask yourself whether it is worth it, that is, whether the files will need to autogrow again in the near future. Furthermore, set the autogrowth increment according to the expected database activity and growth.

Even when the gaps in CPU, reads and writes between a transaction that causes autogrowth and one that does not are fairly narrow, the duration of the transaction is still noticeably affected when the file grows.
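
Following that advice, the autogrowth increment is set per file with ALTER DATABASE; the 100 MB figure below is purely illustrative and should come from your own growth monitoring:

```sql
-- Replace the small default increment with a fixed, deliberately chosen one.
-- 100 MB is an example value; derive yours from observed growth trends.
alter database ShrinkDB
modify file (name = N'ShrinkDB', filegrowth = 100MB)
```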

Next: What happens when Database T-log grows?



ABOUT THE AUTHOR
Michelle Gutzait works as a senior database consultant for ITERGY International Inc., an IT consulting firm specializing in the design, implementation, security and support of Microsoft products in the enterprise. Gutzait has been involved in IT for 20 years as a developer, business analyst and database consultant. For the last 10 years, she has worked exclusively with SQL Server. Her skills include SQL Server infrastructure design, database design, performance tuning, security, high availability, VLDBs, replication, T-SQL/packages coding, and more.

MEMBER FEEDBACK TO THIS TIP

Do you have a comment on this tip? Let us know.

Great article! I always wanted to know about the influence of disk fragmentation on performance, but I've never taken the time to do these tests myself. Thanks a lot!
—Volker H.

******************************************

I often see requests on various message boards about "how to shrink," whether it's the database or the log file. From a performance standpoint, don't do it. If it's due to archiving or a rarely occurring job, it is OK, but it should be the exception and not the rule. Remember that the physical disk is going to become fragmented over time due to growth or shrinking, and it will impact performance. Monitor growth metrics and size the DB accordingly; there are plenty of scripts to do that. Growths and shrinks during business hours will impact performance, so be proactive, not reactive. Stop the madness by planning ahead. Your users will appreciate it.
—Terry C.

******************************************

This is a good article. However, my real interest is in the growth of transaction log files in SQL Server. I see that such an article is planned on this subject by the same author. Without specialist tools, it seems to be impossible to tell what causes logs to grow to very large sizes. Shrinking the log file using Management Studio is not reliable, and on some systems, we have resorted to scheduling a procedure that repeatedly attempts to shrink the transaction log files until it is down to size prior to performing backups. I look forward to seeing if the next article addresses these issues.
—Scott B.

******************************************

Interesting article and results. I would be curious to see what effect defragging the database file would have, especially after a large number of small increments of autogrow. Perhaps running an additional test after the file is grown, taking the database offline and running contig on the database file.

I had a site that ran an application that would do fairly frequent (about 20 times/week) operation of deleting/rewriting or adding 10,000 rows in a transaction. Queries on the database would get very slow, that the applications would get timeouts (hardware was not the greatest either). But I found that if I defragged the database files, performance would increase dramatically. I was not using any shrinking procedures, but it was growing steadily.
—Mark H.

This was last published in September 2008
