
Clearing the Windows page file and its effect on server performance

Before enabling the Windows page file setting, consider your server size, hard drive speed and the security of your Windows environment.

Back when 2 GB to 4 GB of RAM was standard in a server, system administrators often enabled the Shutdown: Clear virtual memory pagefile setting throughout the Windows domain.

Although this setting is a good idea from a security standpoint, it has several negative practical side effects.

What does the page file setting do?
The page file setting writes zeros to the entire page file after the bulk of the operating system has shut down, but just before Windows powers off the computer (or triggers the reboot). By default, Windows leaves any data in the page file in place and overwrites it as needed when the system comes back online.
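Under the hood, the Group Policy setting maps to a single registry value, ClearPageFileAtShutdown. A .reg fragment that enables it directly looks like this (a sketch for illustration; test the change in your own environment before rolling it out via a domain):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"ClearPageFileAtShutdown"=dword:00000001
```

Setting the value back to 0 (the default) disables the behavior; the change takes effect at the next shutdown.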

The pros of clearing the Windows page file
The page file setting is great for security-conscious systems administrators or environments where servers are physically accessible.

Because the file is left intact by default, if someone rebooted a server and booted the machine from a thumb drive, CD or any other medium that loads an OS other than the installed one, they could copy the pagefile.sys file from the root of the C: drive -- or whichever drive Windows is installed on -- and read the data stored in it.

For database servers, the page file could contain customer information, while for file servers it could hold internal report data and employees' personal information. Likewise, the page file on a mail server could contain internal messages not intended to be shared with the outside world.

The cons of clearing the Windows page file
When servers had just a couple of gigabytes of RAM, their page files were only a few gigabytes as well. Enabling the page file setting therefore added only a minute or two to the shutdown process (or potentially less, depending on the hard drive speed and how much of the page file had been used).

Adding RAM to the server increases the size of the page file, and as a result, increases the time it takes to complete a reboot.

Using the Microsoft rule of thumb of 1.5 GB of page file for every GB of RAM, a machine with 8 GB of RAM would require a 12 GB page file. That much data takes a while to write, especially if the OS sits on a two-disk RAID 1 array.
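The sizing works out as follows (a trivial sketch; the 1.5x multiplier is the classic rule of thumb, not a hard requirement, and the numbers are the article's example):

```python
# Rule-of-thumb page file sizing: 1.5x installed RAM.
ram_gb = 8
pagefile_gb = 1.5 * ram_gb
print(f"{ram_gb} GB RAM -> {pagefile_gb:.0f} GB page file")  # 8 GB RAM -> 12 GB page file
```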

Assuming a maximum transfer rate of 75 MB per second on the boot array (roughly what decent 10K RPM drives sustain), writing zeros to that 12 GB file will take well over two and a half minutes. It can easily take a lot longer, because in reality hard drives can't write at their maximum speed for long: the 8 to 16 MB cache on the disks and the 1 to 2 GB of cache on the controller fill quickly.
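The estimate above is simple division (a back-of-the-envelope sketch; the 75 MB/s figure is the assumed sustained rate, and real throughput will vary once the caches fill):

```python
# Time to zero a 12 GB page file at a sustained 75 MB/s.
pagefile_mb = 12 * 1024          # 12 GB expressed in MB
write_rate_mb_s = 75             # assumed sustained write rate
seconds = pagefile_mb / write_rate_mb_s
print(f"{seconds:.0f} s (~{seconds / 60:.1f} minutes)")  # 164 s (~2.7 minutes)
```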

Before I get flamed by people telling me that they have controllers with 8 to 16 GB of cache in their servers, let me note that performance will be different for everyone.

Many servers -- especially larger ones -- have more than one controller: a lower-end controller for the boot array and a higher-end controller for the data array(s). This is especially true of SAN-attached servers. So why pay for a higher-end controller when the array is used only for booting?

For example, at a finance company I worked at, the page file setting was enabled across the entire domain. Most of the servers were older and had at most 1 GB of RAM, so the setting did not really matter at the time. However, when the servers were upgraded to new boxes with 4 to 8 GB of RAM, reboot times on certain servers shot up to 45 minutes.

A long reboot time may not be an issue if your server is clustered (or you are using some other high-availability technology), because services will remain online. It is a problem for standalone servers, however, since the services they provide are unavailable while Windows writes zeros to the page file.

Overall, if you are in a high-security environment or your physical servers are at risk, enabling the page file setting may be a good idea -- assuming you can afford the longer outages that come with larger page files.

On the other hand, if you are in a smaller shop and don't have high-availability solutions in place to keep services online, this setting probably isn't for you.


Denny Cherry has over a decade of experience managing SQL Server, including an installation with over 175 million users, one of the largest in the world. Denny's areas of expertise include system architecture, performance tuning, replication and troubleshooting. He currently holds several Microsoft certifications related to SQL Server and is a Microsoft MVP.
