Optimizing SQL Server 2000 settings

Several SQL Server configuration settings must be changed from their defaults to maximize your database transactions per minute. Find out what they are and how to change them in this tip.

SQL Server does a good job of configuring itself for optimal performance -- more so than any other RDBMS on the market today.

However, there are several settings you must modify to squeeze extra transactions per minute (TPM) out of your servers. The purpose of this tip is to discuss these settings. We will ignore performance improvements that you can make via hardware configurations or in table or indexing design as they are outside the scope of this tip.

 

Torn page detection

Before we plunge into server configuration switches, let's have a quick look at your model database -- or the template used as a basis to build new databases. You can create stored procedures, defaults, functions and so on in this database, which will then be added to all newly created databases.

To optimize performance, you may want to disable torn page detection in the model database. Torn page detection identifies pages that were not completely written to disk, and, when enabled, it adds a slight performance hit to every write operation. Most modern drive arrays have an onboard battery, which allows the array to complete all pending write operations in the event of an abrupt power outage -- the most frequent cause of torn pages.

The following statement disables torn page detection in the model database:

exec sp_dboption 'model', 'torn page detection', 'false'

This KnowledgeBase resource provides more information about this setting.

 

Server configurations

Most configuration is done through the system stored procedure sp_configure. To display the full list of server settings available for you to customize, first enable advanced options and then run sp_configure with no parameters:

sp_configure 'show advanced options', 1
GO
RECONFIGURE WITH OVERRIDE
GO
sp_configure

The number of options you can configure depends on your SQL Server version, service pack and bit version (64-bit SQL Server has more options than 32-bit). I will go over the options that have the most impact on optimizing SQL Server performance.

Affinity mask: Affinity mask allows you to control which processor(s) SQL Server will run on. For the most part, you should not touch this setting; let the operating system control processor affinity. However, you may want to use it to dedicate specific processors to another process (for example, MSSearch or SQL Server disk I/O) and leave the balance to SQL Server. Refer to this KnowledgeBase resource for more information on this setting.
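As a sketch, the following binds SQL Server to the first two processors -- the option takes a bitmap, so the value 3 shown here is purely illustrative, and the change typically requires a restart of SQL Server to take effect:

sp_configure 'affinity mask', 3
GO
RECONFIGURE WITH OVERRIDE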

AWE enabled: AWE enabled allows SQL Server Enterprise Edition running on Windows 2000 Advanced Server and above, or Windows Server 2003 Enterprise Edition and above, to use more than 4 GB of RAM. Enable this setting if your server fits these criteria.
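For instance, on a qualifying server you might enable AWE as shown below; SQL Server must be restarted afterward, and the SQL Server service account needs the "Lock Pages in Memory" privilege:

sp_configure 'awe enabled', 1
GO
RECONFIGURE WITH OVERRIDE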

Cost threshold for parallelism: The cost threshold for parallelism determines whether a query is a candidate for parallel processing. The default is 5, meaning queries with an estimated cost of five seconds or more are eligible. Lowering this value allows more queries to take advantage of parallel processing, but it could cause CPU bottlenecks. This setting only has an effect on multi-processor machines.
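As an illustration only, raising the threshold from 5 to 10 restricts parallelism to more expensive queries, while lowering it has the opposite effect:

sp_configure 'cost threshold for parallelism', 10
GO
RECONFIGURE WITH OVERRIDE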

Fill factor: Fill factor sets the server-wide default fill factor used when indexes are created without an explicit fill factor. On heavily inserted tables, you may benefit from a lower fill factor than the default of 0, which packs leaf pages completely.
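For example, the following sets the server-wide default to 80% -- the value is only an illustration, the right number depends on your insert patterns, and this option typically requires a restart to take effect:

sp_configure 'fill factor (%)', 80
GO
RECONFIGURE WITH OVERRIDE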

Lightweight pooling: This setting enables fiber mode. Consider it on eight-way and larger servers where CPU utilization is high and context switching is heavy. In fiber mode, SQL Server schedules work on fibers within its worker threads rather than switching between Windows threads, which can reduce context-switch overhead for certain workloads.
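A sketch of enabling fiber mode follows; a restart is required, and you should test carefully, since not all workloads benefit from fibers:

sp_configure 'lightweight pooling', 1
GO
RECONFIGURE WITH OVERRIDE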

Max degree of parallelism: This setting determines whether the server uses parallelism at all and, if so, how many processors a single parallel operation may use: 0 allows all available processors, 1 disables parallelism, and any other value caps the number of processors used. Parallelism is where processing occurs simultaneously on multiple processors -- for instance, different portions of a query plan executing at the same time on different processors.
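For example, the following caps parallel plans at four processors (0 would allow all processors and 1 would disable parallelism); the value 4 is only illustrative:

sp_configure 'max degree of parallelism', 4
GO
RECONFIGURE WITH OVERRIDE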

Max server memory (MB): If you have other processes running on your SQL Server machine and ample RAM, you may want to leave about 512 MB of RAM available to the OS and those processes -- for example, when MSSearch or a large number of merge agents run locally.
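As a sketch, on a hypothetical 4 GB server you might cap SQL Server at roughly 3.5 GB, leaving about 512 MB for the OS and other processes:

sp_configure 'max server memory (MB)', 3584
GO
RECONFIGURE WITH OVERRIDE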

Max worker threads: The max worker threads setting is similar to the connection pool setting in ADO.NET. Once the number of user connections exceeds the threshold (255 by default), additional connections wait in a pool until a worker thread is free to service them, much as connections are shared with connection pooling in ADO.NET. If you have a large number of connections and large amounts of RAM, you may wish to bump up this value.
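For example, a busy server with thousands of connections and plenty of RAM might be raised above the default of 255; the value 512 below is purely illustrative:

sp_configure 'max worker threads', 512
GO
RECONFIGURE WITH OVERRIDE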

Network packet size (B): This setting controls the size of the packets transferred across the network to your clients. On lossy networks (such as phone lines), you may want to set this to a lower value than the default of 4096. On well-connected networks, you may want to bump up this setting, especially for large batch operations involving BLOBs.
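As an illustration, doubling the packet size on a well-connected network that moves large BLOB batches might look like this:

sp_configure 'network packet size (B)', 8192
GO
RECONFIGURE WITH OVERRIDE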

Priority boost: This setting raises the scheduling priority of the SQL Server process. To see the effect, open Task Manager, click the Processes tab, locate the SQL Server process and right-click it. Select "Set Priority"; notice that SQL Server runs at Normal priority. Now issue the following:

sp_configure 'priority boost', 1
GO
RECONFIGURE WITH OVERRIDE

Then restart SQL Server and check its priority in Task Manager again: it will now be running at High priority, above other user processes. Use this setting only on servers dedicated to SQL Server.

 

Summary

This completes our tour of the most common settings for optimizing SQL Server. Before and after making these changes, baseline performance in a representative test environment under a representative load so you can assess their impact on your system.

About the author: Hilary Cotter has been involved in IT for more than 20 years as a Web and database consultant. Microsoft first awarded Cotter the Microsoft SQL Server MVP award in 2001. Cotter received his bachelor of applied science degree in mechanical engineering from the University of Toronto and subsequently studied both economics at the University of Calgary and computer science at UC Berkeley. He is the author of a book on SQL Server transactional replication and is currently working on books on merge replication and Microsoft search technologies. 

 

