SQL Server allocates memory to two basic types of cache. The first is the one most people are familiar with -- the buffer cache, where SQL Server stores data pages it has loaded from disk so that return trips to the disk are not necessary. The second is the procedure cache, where SQL Server caches the execution plans of queries it has run. The larger the procedure cache, the more execution plans SQL Server can keep in memory, which reduces the time the system spends preparing queries for execution.
A procedure cache that is too small is just as bad as a buffer cache that is too small. If there isn't enough space in the procedure cache to hold all of the execution plans, SQL Server has to recreate them as the queries are run, which drives up the CPU load on the server. With the exception of joining tables on non-indexed columns, generating an execution plan is about the most expensive operation SQL Server can perform.
How much procedure cache is in use is not the easiest metric to measure. Microsoft has provided few performance monitor counters related to the procedure cache. Unlike the buffer cache, which has a Page Life Expectancy counter telling you how long data is expected to remain in memory, the procedure cache offers only hit percentages, which tell you how often SQL Server finds an execution plan already in memory.
SQL Server weighs several factors to determine how much memory goes to the buffer cache and how much to the procedure cache: the version of SQL Server you're running, the amount of memory you've allocated to the SQL Server instance, and the platform you are using (x86, x64 or Itanium).
The 32-bit platform
The 32-bit platform (x86) has the least room for procedure cache. Procedure cache cannot reside within the Address Windowing Extensions (AWE)-controlled memory space; it has to sit in the first 2 Gigs of RAM allocated to the SQL Server instance. On the 32-bit platform, SQL Server 2000 and SQL Server 2005 both use the same base calculation: the procedure cache can use up to 1 GB of memory or 50% of the memory allocated to the instance, whichever is lower.
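Assuming the 1 GB/50% rule above is the whole calculation, the 32-bit limit can be sketched in a few lines of Python (the function name is mine, chosen for illustration):

```python
def proc_cache_max_x86_gb(instance_mem_gb):
    """32-bit SQL Server 2000/2005 rule described above:
    the procedure cache gets up to 1 GB or 50% of the memory
    allocated to the instance, whichever is lower."""
    return min(1.0, instance_mem_gb * 0.50)

# An instance with 1.5 Gigs allocated caps at 0.75 GB of
# procedure cache; 2 Gigs or more hits the 1 GB ceiling.
print(proc_cache_max_x86_gb(1.5))  # 0.75
print(proc_cache_max_x86_gb(4.0))  # 1.0
```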
The 64-bit platform
With the 64-bit platform (x64 or Itanium), things get a bit more complex.
SQL Server 2000 64-bit uses the same base calculation as the x86 edition of SQL Server 2000. With SQL Server 2005 64-bit, the build of SQL Server also factors into how much procedure cache will be used. SQL Server 2005 Service Pack 2 (build 9.00.3042) changed the method SQL Server uses to calculate the amount of procedure cache available. Microsoft made the change because some customers' SQL Servers were allocating too much memory to the procedure cache and not leaving enough for the buffer cache. While this probably affected only a small number of customers, the platform needs to support all of them.
Microsoft, unfortunately, didn't do a very good job of announcing the change, so there was a bit of a surprise when shops installed Service Pack 2 on machines with a very large amount of memory installed and that used a large amount of procedure cache.
In SQL Server 2005 SP1 and earlier, the system uses 75% of the first 8 Gigs of RAM + 50% of the next 56 Gigs of RAM + 25% of the remaining RAM. A SQL Server instance with 64 Gigs of RAM allocated to it could have a maximum of 34 Gigs of procedure cache. A system with 12 Gigs of RAM could have a maximum of 8 Gigs of procedure cache.
In SQL Server 2005 SP2 and later, the system uses 75% of the first 4 Gigs of RAM + 10% of the RAM over 4 Gigs. On the same system with 64 Gigs of RAM allocated to the SQL instance, SQL Server could have a maximum of 9 Gigs of procedure cache. A system with 12 Gigs of RAM could have a maximum of 3.8 Gigs of procedure cache.
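The two formulas can be expressed directly in Python. This is a sketch based only on the percentages and cutoffs stated above (the function names are mine), and it reproduces the worked examples:

```python
def proc_cache_max_sp1_gb(mem_gb):
    """SQL Server 2005 SP1 and earlier: 75% of the first 8 Gigs
    + 50% of the next 56 Gigs + 25% of anything beyond 64 Gigs."""
    return (min(mem_gb, 8) * 0.75
            + min(max(mem_gb - 8, 0), 56) * 0.50
            + max(mem_gb - 64, 0) * 0.25)

def proc_cache_max_sp2_gb(mem_gb):
    """SQL Server 2005 SP2 and later: 75% of the first 4 Gigs
    + 10% of everything over 4 Gigs."""
    return min(mem_gb, 4) * 0.75 + max(mem_gb - 4, 0) * 0.10

# A 64-Gig instance drops from 34 to 9 Gigs of maximum procedure
# cache after SP2; a 12-Gig instance drops from 8 to 3.8 Gigs.
for mem in (64, 12):
    print(mem, proc_cache_max_sp1_gb(mem), proc_cache_max_sp2_gb(mem))
```

Running the loop makes the size of the SP2 cut obvious: the larger the instance, the bigger the share of procedure cache it loses.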
You can see how systems that require large procedure caches would instantly show major performance issues after SQL Server 2005 Service Pack 2 was installed -- the massive drop in procedure cache would force those large systems to perform thousands of additional recompiles that they didn't need to do previously.
Right now, no documentation has been released that shows how SQL Server 2008 will calculate the amount of procedure cache available to the database.
It is extremely important to know how much procedure cache your system needs and how SQL Server determines the maximum amount of procedure cache available. You should also know how this number changes as you install service packs and patches on your production systems. Unfortunately, procedure cache issues -- like the change between service packs -- are hard to test outside production unless you have solid load-testing procedures in place that can simulate a full system load in a non-production environment.
ABOUT THE AUTHOR
Denny Cherry has over a decade of experience managing SQL Server, including MySpace.com's over 175-million-user installation, one of the largest in the world. Denny's areas of expertise include system architecture, performance tuning, replication and troubleshooting. He currently holds several Microsoft certifications related to SQL Server and is a Microsoft MVP.
Check out his blog: SQL Server with Mr. Denny.
This was first published in June 2008