Performance implications of transaction log autogrowth in SQL Server

Take a closer look at the performance implications of transaction log autogrowth in SQL Server.


Too often I see people create maintenance plans or customized jobs that shrink the database files (data and/or transaction log) periodically.

In my previous article on shrinking your database to death, I tested and proved that there are performance implications when a database file grows during a transaction. I then provided evidence of the unpredictable behavior of transaction log autogrowth during INSERT, UPDATE and DELETE operations.

In this article, I will test the performance implications when a transaction log grows during transactions. I'll be using the same environment and configuration as the one described in my previous articles.

The database

The ShrinkDB database I used in this test is initially configured so that the data file is large enough that it will not grow during the transactions. The transaction log file size is only 2MB, however. The goal is to let it grow with the transaction to measure the implications:

The recovery model of the ShrinkDB database is set to FULL.
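For reference, here is a minimal sketch of how the recovery model can be checked and set (assuming the database is named ShrinkDB, as it is throughout these tests):

--  Check the current recovery model
select databasepropertyex(N'ShrinkDB', 'Recovery')
go
--  Set it to FULL if needed
alter database ShrinkDB set recovery full
go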

So the following query (fileid = 2 → the T-Log file):

select size from sysfiles where fileid = 2

Returns: 256.
This is the size of the transaction log file (in 8 KB pages). 256*8K = 2MB.
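If you are on SQL Server 2005 or later, the sys.database_files catalog view exposes the autogrowth setting alongside the size; a hedged sketch, run in the context of ShrinkDB:

--  Size and autogrowth setting of each file in the current database
--  (size, and growth when is_percent_growth = 0, are in 8 KB pages)
select name, size, growth, is_percent_growth
from sys.database_files
go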

The ExpandDB table described in my previous article is used yet again:

create table ExpandDB (a varchar(8000))

The tests

My goal is to test how performance is affected by INSERT, UPDATE and DELETE commands that autogrow the database transaction log.

Since updates, inserts and deletes may behave differently, this test includes three sections, one per operation:

  • Inserts
  • Updates
  • Deletes

Note: In the previous article, I proved that only large transactions autogrow the transaction log.

Insert commands

In this test, I inserted 10,000 rows into the table in one transaction. The same code was executed in both steps, but the autogrowth increment of the transaction log was set differently (one way to switch the setting is sketched after the list):

  • In the first step, autogrowth was set to 1MB;
  • In the second step, autogrowth was set to 10MB.
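The article does not show how the autogrowth increment was switched between steps; here is a minimal sketch of one way to do it (assuming the log file's logical name is ShrinkDB_Log, as in the DBCC SHRINKFILE commands below):

--  Step 1: let the T-Log grow by 1MB at a time
alter database ShrinkDB modify file (name = N'ShrinkDB_Log', filegrowth = 1MB)
go
--  Step 2: let the T-Log grow by 10MB at a time
alter database ShrinkDB modify file (name = N'ShrinkDB_Log', filegrowth = 10MB)
go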

The two steps were executed three times each and the execution statistics were captured by SQL Profiler. Here is the code:

--  Truncate the table
truncate table ExpandDB
go
--  Truncate the T-Log
backup transaction ShrinkDB with truncate_only
go
--  Shrink T-Log back to 2MB (initial size):
DBCC SHRINKFILE (N'ShrinkDB_Log', 0, TRUNCATEONLY)
go
--  Insert 10000 rows
--  Big transaction
begin tran
declare @i int
set @i = 1
while @i <= 10000
begin
     insert into ExpandDB select replicate ('a',8000)
     set @i = @i + 1
end
commit
go
--  Check size and % free space in T-Log
dbcc sqlperf(logspace)
go
select count(*) from ExpandDB
go

In the first step, the transaction log reaches around 110 MB. In the second it's around 102 MB.
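As a rough sanity check, growing from 2 MB to about 110 MB in 1 MB increments implies on the order of a hundred autogrow events, versus roughly ten events at 10 MB per increment. If you want to count them directly, the default trace records autogrow events; a hedged sketch, assuming the default trace is enabled and that event class 93 corresponds to Log File Auto Grow:

--  Count Log File Auto Grow events captured by the default trace
declare @path nvarchar(260)
select @path = path from sys.traces where is_default = 1
select DatabaseName, count(*) as LogAutoGrowEvents
from fn_trace_gettable(@path, default)
where EventClass = 93   -- 92 = Data File Auto Grow, 93 = Log File Auto Grow
group by DatabaseName
go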

Here is a comparison of the results (inserts only): 

Iteration | Step | CPU | Reads | Writes | Duration (ms)
1 | File grows by 1MB | 985 | 65152 | 10115 | 30479
  | File grows by 10MB | 875 | 65111 | 10113 | 15416
  | % improvement | 11.16751269 | 0.062929764 | 1.008403361 | 49.42091276
  | Difference | 110 | 41 | 102 | 15063
2 | File grows by 1MB | 828 | 65155 | 10115 | 32109
  | File grows by 10MB | 875 | 65104 | 10118 | 13240
  | % improvement | -5.676328502 | 0.078274883 | 0.958971824 | 58.76545517
  | Difference | -47 | 51 | 97 | 18869
3 | File grows by 1MB | 969 | 65218 | 10118 | 37298
  | File grows by 10MB | 1062 | 65104 | 10012 | 15773
  | % improvement | -9.59752322 | 0.174798369 | 1.047637873 | 57.71086922
  | Difference | -93 | 114 | 106 | 21525
  | Average % improvement | -1.36877968 | 0.105334339 | 1.005004353 | 55.29907905
  | Average difference | -10 | 68.66666667 | 101.6666667 | 18485.66667

I conducted a third test where the transaction log file was set to 130 MB and did not grow during the transaction. This time I did not shrink the transaction log, and the code is as follows:

--  Truncate the T-Log
backup transaction ShrinkDB with truncate_only
go
--  Insert 10000 rows
--  Big transaction
begin tran
declare @i int

set @i = 1
while @i <= 10000
begin
      insert into ExpandDB select replicate ('a',8000)
      set @i = @i + 1
end
commit
Go
--  Check size and % free space in T-Log
dbcc sqlperf(logspace)
go
select count(*) from ExpandDB
go
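For reference, here is a minimal sketch of pre-sizing the log so it does not have to grow during the test (assuming the logical file name ShrinkDB_Log and that 130 MB leaves enough headroom, as observed in the earlier runs):

--  Pre-grow the T-Log to 130MB so it will not autogrow during the transaction
alter database ShrinkDB modify file (name = N'ShrinkDB_Log', size = 130MB)
go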

Now we have a comparison of this last execution and the first execution (where the transaction log was set to grow by 1 MB), with insert commands only:

Iteration | Step | CPU | Reads | Writes | Duration (ms)
1 | File grows by 1MB | 985 | 65152 | 10115 | 30479
  | File did not grow | 812 | 65097 | 10003 | 5651
  | % improvement | 17.56345178 | 0.084417976 | 1.107266436 | 81.45936546
  | Difference | 173 | 55 | 112 | 24828
2 | File grows by 1MB | 828 | 65155 | 10115 | 32109
  | File did not grow | 719 | 65093 | 10007 | 8160
  | % improvement | 13.16425121 | 0.095157701 | 1.067721206 | 74.58656451
  | Difference | 109 | 62 | 108 | 23949
3 | File grows by 1MB | 969 | 65218 | 10118 | 37298
  | File did not grow | 985 | 65103 | 10003 | 6741
  | % improvement | -1.651186791 | 0.176331688 | 1.136588259 | 81.926644862
  | Difference | -16 | 115 | 115 | 30557
  | Average % improvement | 9.692172065 | 0.118635788 | 1.103858634 | 79.32419161
  | Average difference | 88.66666667 | 77.33333333 | 111.6666667 | 26444.66667

Insert summary:
The average improvement figures show a clear relationship between performance and the number of times the transaction log grows during a transaction. The fewer autogrow events, the better, especially for total duration (almost 80% improvement between 1 MB growth and no growth).

Update commands

In my previous article, it was revealed that updates grow the transaction log when many rows are changed at a time. For that reason, I executed the following:

--  Truncate the T-Log
backup transaction ShrinkDB with truncate_only
go
--  Shrink T-Log back to 2MB:
DBCC SHRINKFILE (N'ShrinkDB_Log' , 0, TRUNCATEONLY)
Go
--  Update 10000 rows at a time
update ExpandDB set a =   'aaaaaaaaaaaaaaaaaaaaaaaa'
go
--  Check size and % free space in T-Log
dbcc sqlperf(logspace)
go
 

I ran this twice to make sure that the transaction log grows at the same rate, and was puzzled to discover that on the first try the transaction log grew, but on the second it did not grow at all (staying at 2 MB)!

This might mean that if the values of the rows do not change, the transaction log does not grow. To prove my case, I added a select statement to print the number of rows affected by the update:

--  Truncate the T-Log
backup transaction ShrinkDB with truncate_only
go
--  Shrink T-Log back to 2MB:
DBCC SHRINKFILE (N'ShrinkDB_Log' , 0, TRUNCATEONLY)
Go
--  Update 10000 rows at a time
update ExpandDB set a =   'aaaaaaaaaaaaaaaaaaaaaaaa'
select @@rowcount
go
--  Check size and % free space in T-Log
dbcc sqlperf(logspace)
go
 

The rowcount returned was always 10000.

When I modified the updated value (update ExpandDB set a = 'abcde'), the transaction log once again grew the first time, but not after subsequent executions of the same update.

This confirmed my case: when the values in the rows are not actually changed, the transaction log does not grow.
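One way to cross-check this at the log-record level is to count log records before and after the no-op update; if the finding above holds, the two counts should be close. This is a sketch using fn_dblog, which is undocumented, so treat it as a diagnostic aid rather than a supported API:

--  Active log records before the update
select count(*) as LogRecordsBefore from fn_dblog(null, null)
go
--  Repeat the update with an unchanged value
update ExpandDB set a = 'aaaaaaaaaaaaaaaaaaaaaaaa'
go
--  Active log records after the update
select count(*) as LogRecordsAfter from fn_dblog(null, null)
go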

So I executed the following code, first with the transaction log autogrowth set to 1 MB, and then with the transaction log pre-sized to 200 MB so that it did not grow. In the second execution, I removed all the DBCC SHRINKFILE commands so that the transaction log was not shrunk:

--   #1 ITERATION:
--  Truncate the T-Log
backup transaction ShrinkDB with truncate_only
go
--  Shrink T-Log back to 2MB:
DBCC SHRINKFILE (N'ShrinkDB_Log' , 0, TRUNCATEONLY)
Go
--  Update 10000 rows at a time
update ExpandDB set a =   'aaaaaaaaaaaaaaaaaaabc'
go
--  Check size and % free space in T-Log
dbcc sqlperf(logspace)
go
--   #2 ITERATION:
--  Truncate the T-Log
backup transaction ShrinkDB with truncate_only
go
--  Shrink T-Log back to 2MB:
DBCC SHRINKFILE (N'ShrinkDB_Log' , 0, TRUNCATEONLY)
Go
--  Update 10000 rows at a time
update ExpandDB set a =   'AAA'
go
--  Check size and % free space in T-Log
dbcc sqlperf(logspace)
go
--   #3 ITERATION:
--  Truncate the T-Log
backup transaction ShrinkDB with truncate_only
go
--  Shrink T-Log back to 2MB:
DBCC SHRINKFILE (N'ShrinkDB_Log' , 0, TRUNCATEONLY)
Go
--  Update 10000 rows at a time
update ExpandDB set a =   'zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz'
go
--  Check size and % free space in T-Log
dbcc sqlperf(logspace)
go
 

As expected, the transaction log ended up at different sizes when it was set to grow by 1 MB at a time.

Here is a comparison of the results (updates only):

Iteration | Step | CPU | Reads | Writes | Duration (ms)
1 | File grows by 1MB | 203 | 8842 | 50 | 271
  | File did not grow | 187 | 8840 | 94 | 192
  | % improvement | 7.881773399 | 0.022619317 | -88 | 29.15129151
  | Difference | 16 | 2 | -44 | 79
2 | File grows by 1MB | 156 | 13125 | 47 | 203
  | File did not grow | 157 | 13125 | 3 | 267
  | % improvement | -0.641025641 | 0 | 93.61702128 | -31.5270936
  | Difference | -1 | 0 | 44 | -64
3 | File grows by 1MB | 781 | 17919 | 100 | 827
  | File did not grow | 750 | 17918 | 31 | 804
  | % improvement | 3.969270166 | 0.005580669 | 69 | 2.781136638
  | Difference | 31 | 1 | 69 | 23
  | Average % improvement | 3.736672641 | 0.009399995 | 24.87234043 | 0.135111518
  | Average difference | 15.33333333 | 1 | 23 | 12.66666667

Update summary:
It seems that there is an improvement in performance when the transaction log does not grow, especially in the number of writes.

Delete commands

My goal in this test was the same for deletes as it was for the inserts and updates.

I inserted 10,000 rows into the table and ran the following code to delete all rows at once:

--  Truncate the table
truncate table ExpandDB
go
--  Truncate the T-Log
backup transaction ShrinkDB with truncate_only
go
--  Shrink T-Log back to 2MB:
DBCC SHRINKFILE (N'ShrinkDB_Log' , 0, TRUNCATEONLY)
Go
--  Insert 10,000 rows:
declare @i int
set @i = 1
while @i <= 10000
begin
    insert into ExpandDB select replicate ('a',1000)
    set @i = @i + 1
end
go
--  Truncate the T-Log
backup transaction ShrinkDB with truncate_only
go
--  Shrink T-Log back to 2MB:
DBCC SHRINKFILE (N'ShrinkDB_Log' , 0, TRUNCATEONLY)
go
--  Delete 10,000:
delete from ExpandDB
--  Check size and % free space in T-Log
dbcc sqlperf(logspace)
go
 

Again, in the first step, the transaction log was shrunk and the autogrowth was set to 1 MB. In the second step, it was not shrunk (I commented out the DBCC SHRINKFILE part) and the transaction log was sized large enough not to grow.

Here is a comparison of the results (deletes only): 

Iteration | Step | CPU | Reads | Writes | Duration (ms)
1 | File grows by 1MB | 78 | 7224 | 1468 | 7890
  | File did not grow | 79 | 7196 | 1440 | 495
  | % improvement | -1.282051282 | 0.387596899 | 1.907356948 | 93.72623574
  | Difference | -1 | 28 | 28 | 7395
2 | File grows by 1MB | 93 | 7220 | 1468 | 8372
  | File did not grow | 62 | 7194 | 1441 | 435
  | % improvement | 33.33333333 | 0.360110803 | 1.839237057 | 94.80410893
  | Difference | 31 | 26 | 27 | 7937
3 | File grows by 1MB | 46 | 7220 | 1468 | 6657
  | File did not grow | 94 | 7192 | 1440 | 355
  | % improvement | -104.3478261 | 0.387811634 | 1.907356948 | 94.66726754
  | Difference | -48 | 28 | 28 | 6302
  | Average % improvement | -24.098848 | 0.378506446 | 1.884650318 | 94.39920407
  | Average difference | -6 | 27.33333333 | 27.66666667 | 7211.333333

Delete summary:
Once again, performance is much better when the transaction log does not grow, especially in total duration. The CPU differences are erratic (probably because the numbers are very low and therefore not very accurate).

More rows in the transaction

In order to test the relative performance of larger transactions in relation to the transaction log autogrowth, I repeated all the above tests with 50,000 rows (instead of 10,000) and here are the results:

Insert:

Iteration | Step | CPU | Reads | Writes | Duration (ms)
1 | File grows by 1MB | 2219 | 103797 | 7249 | 26237
  | File did not grow | 2110 | 103713 | 7152 | 5666
  | % improvement | 4.912122578 | 0.080927194 | 1.338115602 | 78.4045432
  | Difference | 109 | 84 | 97 | 20571
2 | File grows by 1MB | 2250 | 103797 | 7250 | 24083
  | File did not grow | 2063 | 103704 | 7157 | 3528
  | % improvement | 8.311111111 | 0.089597965 | 1.282758621 | 85.35066229
  | Difference | 187 | 93 | 93 | 20555
3 | File grows by 1MB | 2234 | 103804 | 7250 | 24880
  | File did not grow | 2078 | 103711 | 7157 | 6010
  | % improvement | 0.089591923 | 0.387811634 | 1.282758621 | 75.84405145
  | Difference | 156 | 93 | 93 | 18870
  | Average % improvement | 6.735407947 | 0.086705694 | 1.301210948 | 79.86641898
  | Average difference | 150.6666667 | 90 | 94.33333333 | 19998.66667

 

Update:

Iteration | Step | CPU | Reads | Writes | Duration (ms)
1 | File grows by 1MB | 313 | 50055 | 50023 | 4868
  | File did not grow | 250 | 50032 | 50000 | 386
  | % improvement | 20.12779553 | 0.045949456 | 0.04597885 | 92.07066557
  | Difference | 63 | 23 | 23 | 4482
2 | File grows by 1MB | 218 | 50048 | 50020 | 5273
  | File did not grow | 265 | 50028 | 50000 | 254
  | % improvement | -21.55963303 | 0.039961637 | 0.039984006 | 95.18300778
  | Difference | -47 | 20 | 20 | 5019
3 | File grows by 1MB | 297 | 50048 | 50020 | 4418
  | File did not grow | 266 | 50028 | 50000 | 282
  | % improvement | 10.43771044 | 0.039961637 | 0.039984006 | 93.61702128
  | Difference | 31 | 20 | 20 | 4136
  | Average % improvement | 3.001957646 | 0.041957576 | 0.041982288 | 93.62356487
  | Average difference | 15.66666667 | 21 | 21 | 4545.666667

 

Delete:

Iteration | Step | CPU | Reads | Writes | Duration (ms)
1 | File grows by 1MB | 375 | 43054 | 7299 | 35894
  | File did not grow | 406 | 42911 | 7155 | 3138
  | % improvement | -8.266666667 | 0.332141032 | 1.972872996 | 91.2575918
  | Difference | -31 | 143 | 144 | 32756
2 | File grows by 1MB | 343 | 43053 | 7293 | 31672
  | File did not grow | 312 | 42911 | 7155 | 4933
  | % improvement | 9.037900875 | 0.329826028 | 1.892225422 | 84.42472847
  | Difference | 31 | 142 | 138 | 26739
3 | File grows by 1MB | 359 | 43057 | 7269 | 33549
  | File did not grow | 297 | 42907 | 7155 | 2990
  | % improvement | 17.27019499 | 0.348375409 | 1.568303756 | 91.08766282
  | Difference | 62 | 150 | 114 | 30559
  | Average % improvement | 6.013809731 | 0.336780823 | 1.811134058 | 88.9233277
  | Average difference | 20.66666667 | 145 | 132 | 30018

It seems that the larger the transaction, the greater the performance improvement from not growing the transaction log.

Larger autogrowth

In order to test the relative performance of a larger autogrowth increment, I configured the autogrowth of the transaction log to be 50 MB and ran the following (both insert and delete) three times in a row:

--  Truncate the table
truncate table ExpandDB
go
--  Truncate the T-Log
backup transaction ShrinkDB with truncate_only
go
--  Shrink T-Log back to 2MB:
DBCC SHRINKFILE (N'ShrinkDB_Log' , 0, TRUNCATEONLY)
Go
--   Insert 50,000 rows
begin tran
declare @i int
set @i = 1
while @i <= 50000
begin
     insert into ExpandDB select replicate ('a',1000)
     set @i = @i + 1
end
commit
go
--   Truncate the T-Log
backup transaction ShrinkDB with truncate_only
go
--   Shrink T-Log back to 2MB:
DBCC SHRINKFILE (N'ShrinkDB_Log' , 0, TRUNCATEONLY)
go
--   Delete 50,000:
delete from ExpandDB
--   Check size and % free space in T-Log
dbcc sqlperf(logspace)
Go
 

I then compared the results to the previous two (50,000 rows and different autogrowth):

Insert:

Iteration | Step | CPU | Reads | Writes | Duration (ms)
1 | File grows by 1MB | 2219 | 103797 | 7249 | 26237
  | File grows by 50MB | 2281 | 103713 | 7158 | 8933
  | File did not grow | 2110 | 103713 | 7152 | 5666
  | % improvement 50MB to no growth | 7.496711968 | 0 | 0.083822297 | 36.57226016
  | Difference | 171 | 0 | 6 | 3267
2 | File grows by 1MB | 2250 | 103797 | 7250 | 24083
  | File grows by 50MB | 2203 | 103713 | 7158 | 7940
  | File did not grow | 2063 | 103704 | 7157 | 3528
  | % improvement 50MB to no growth | 6.354970495 | 0.008677794 | 0.013970383 | 55.56675063
  | Difference | 140 | 9 | 1 | 4412
3 | File grows by 1MB | 2234 | 103804 | 7250 | 24880
  | File grows by 50MB | 2312 | 103713 | 7159 | 12823
  | File did not grow | 2078 | 103711 | 7157 | 6010
  | % improvement 50MB to no growth | 10.12110727 | 0.001928399 | 0.027936863 | 53.13109257
  | Difference | 234 | 2 | 2 | 6813
  | Average % improvement | 7.99092991 | 0.003535397 | 0.041909847 | 48.42336779
  | Average difference | 181.6666667 | 3.666666667 | 3 | 4830.666667

 

Delete:

Iteration | Step | CPU | Reads | Writes | Duration (ms)
1 | File grows by 1MB | 375 | 43054 | 7299 | 35894
  | File grows by 50MB | 375 | 42914 | 7158 | 10920
  | File did not grow | 406 | 42911 | 7155 | 3138
  | % improvement 50MB to no growth | -8.266666667 | 0.006990726 | 0.041911148 | 71.26373626
  | Difference | -31 | 3 | 3 | 7782
2 | File grows by 1MB | 343 | 43053 | 7293 | 31672
  | File grows by 50MB | 407 | 42910 | 7152 | 10854
  | File did not grow | 312 | 42911 | 7155 | 4933
  | % improvement 50MB to no growth | 23.34152334 | -0.002330459 | -0.041946309 | 54.55131749
  | Difference | 95 | -1 | -3 | 5921
3 | File grows by 1MB | 359 | 43057 | 7269 | 33549
  | File grows by 50MB | 391 | 42910 | 7126 | 11133
  | File did not grow | 297 | 42907 | 7155 | 2990
  | % improvement 50MB to no growth | 24.04092072 | 0.006991377 | -0.406960427 | 73.142908477
  | Difference | 94 | 3 | -29 | 8143
  | Average % improvement | 13.03859246 | 0.003883881 | -0.1356652 | 66.31932074
  | Average difference | 52.66666667 | 1.666666667 | -9.66666667 | 7282

It seems that the less the transaction log has to grow, the better.

Summary of findings – Phase 2

I was able to prove that when a transaction log grows during a transaction, this affects the performance of the transaction negatively:

  • The larger the transaction, the more performance is affected
  • The wider the updated rows, the bigger the T-Log becomes
  • The smaller the autogrowth, the greater the impact on performance

Note that my tests included a single running transaction. What would happen when the transaction log needs to grow in a multi-user environment? Stay tuned… more tests to come.

Additional considerations

On the same subject, it has also been shown that a large number of virtual log files (VLFs) in the transaction log hurts the overall performance of data modifications for some databases. In that case, it is recommended to grow the transaction log proactively rather than letting it autogrow. For other databases, a larger autogrowth increment is recommended (specified by size, not by percentage).
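To see how fragmented a log already is, the VLF count can be inspected with DBCC LOGINFO (undocumented but widely used; it returns one row per virtual log file). A minimal sketch:

--  One row per virtual log file (VLF) in the current database's T-Log
dbcc loginfo
go

A log that was autogrown in small 1 MB increments will typically show many more VLFs than one that was pre-grown in larger chunks.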



ABOUT THE AUTHOR
Michelle Gutzait works as a senior database consultant for ITERGY International Inc., an IT consulting firm specializing in the design, implementation, security and support of Microsoft products in the enterprise. Gutzait has been involved in IT for 20 years as a developer, business analyst and database consultant. For the last 10 years, she has worked exclusively with SQL Server. Her skills include SQL Server infrastructure design, database design, performance tuning, security, high availability, VLDBs, replication, T-SQL/packages coding, and more.

This was first published in June 2009
