Over the last decade, many complex enterprise applications have been developed and deployed using Microsoft SQL Server. Today, SQL Server is a cornerstone of modern business applications and is at the center of the business processes of many leading companies. SQL Server applications range from line-of-business applications in production, to internal customer relationship management and decision support systems, to customer-facing e-commerce and Web self-service applications. Consequently, SQL Server performance and scalability are high on IT priority lists, and delivering optimal SQL Server performance and scalability is one of the key tasks of all SQL Server DBAs.
However, many SQL Server systems suffer from poor performance and scalability, often caused by poor database design, poor index design or a SQL Server configuration that does not match the workload. This happens because the main goal of most large-scale SQL Server development projects is functionality; performance and scalability are frequently treated as an afterthought.
While troubleshooting SQL Server database system performance problems is a difficult task, significant performance improvements can be achieved with a relatively small time investment.
Hardware performance bottlenecks: Memory
Memory affects SQL Server performance more than any other piece of hardware, so it is necessary to monitor memory usage regularly on SQL Server systems to ensure that the percentage of available memory stays above 20%. If users are experiencing performance issues and the percentage of available memory drops below 20%, insufficient memory is the likely cause. Keep an eye on the average page life expectancy performance counter and make sure it stays above 300 seconds (five minutes). Anything less indicates either poor index design, leading to increased disk input/output (I/O) and less effective use of memory, or an actual shortage of memory. Also monitor the paging rates on the SQL Server system, and make sure they stay below 1,000 pages per second; sustained paging above that level points to memory pressure. Check the PerfMon object MSSQL Buffer Manager and the Memory performance counters.
Also, monitor the Memory Grants Pending counter in the PerfMon object SQL Server Memory Manager Counters. This counter indicates the number of processes currently waiting for a workspace memory grant. Typically, small OLTP transactions do not require large memory grants, so anything greater than zero on an OLTP system indicates low memory in the SQL Server system.
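You can spot-check both counters without opening PerfMon by querying the sys.dm_os_performance_counters DMV. A minimal sketch (the LIKE patterns are a convenience so the query matches both default and named instances, whose performance object names carry different prefixes):

SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE (counter_name = 'Page life expectancy' AND [object_name] LIKE '%Buffer Manager%')
   OR (counter_name = 'Memory Grants Pending' AND [object_name] LIKE '%Memory Manager%')

A healthy system shows Page life expectancy above 300 and Memory Grants Pending at zero.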
One way to handle memory bottlenecks is to find memory-intensive processes, which can be used to identify potential application problems such as memory leaks. You can also review queries to optimize performance to consume less memory. Another approach is to scale up the SQL Server environment by adding more physical memory (RAM) to the SQL Server. Scaling up is usually a good approach to address any performance bottleneck related to memory.
Disk I/O use
Compared to other hardware resources, storage I/O is usually the slowest of the system resources in SQL Server. Therefore, it is important to monitor the storage system to see if storage has become a performance bottleneck. If it has, the next step is to investigate whether you can optimize the design and configuration of the storage system to achieve scalability and high performance. Review the PerfMon disk counters for Average Disk Sec/Read and Average Disk Sec/Write, and make sure that reads and writes ideally complete in less than 12 milliseconds on OLTP systems; decision support systems can tolerate somewhat higher latencies.
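As a supplement to PerfMon, you can compute per-file read and write latencies from the sys.dm_io_virtual_file_stats DMV. A minimal sketch; note that these figures are cumulative since the last SQL Server restart, so treat them as long-run averages rather than a live snapshot:

SELECT DB_NAME(vfs.database_id) AS database_name
    ,vfs.[file_id]
    ,vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms
    ,vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_ms DESC

Files whose average read or write latency regularly exceeds the 12-millisecond guideline are candidates for faster disks or better I/O distribution.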
As with memory, the easiest way to solve a disk I/O performance bottleneck is to scale up the SQL Server environment by replacing existing disks with faster disks that can better cope with the I/O load and that distribute the I/O load across multiple spindles. Also, defragment the data disk regularly.
CPU use
CPU performance bottlenecks occur for a variety of reasons, including a non-optimal query plan, a poorly designed application or database, a misconfigured SQL Server or a lack of hardware resources. Review the PerfMon operating system processor counters for Processor Queue Length to verify that the number of threads waiting for CPU cycles is eight or fewer. If this number is consistently greater than 12, the CPU is likely the cause of the performance issue.
Once you have identified a CPU bottleneck, use the sys.dm_exec_query_stats dynamic management view (DMV) to identify the 10 worst-performing queries by CPU time, as shown below.
SELECT TOP 10 (a.total_worker_time / a.execution_count) AS [Avg_CPU_Time]
    ,CONVERT(VARCHAR, a.last_execution_time) AS [Last_Execution_Time]
    ,SUBSTRING(b.[text], a.statement_start_offset / 2, (
            CASE
                WHEN a.statement_end_offset = -1
                    THEN LEN(CONVERT(NVARCHAR(MAX), b.[text])) * 2
                ELSE a.statement_end_offset
                END - a.statement_start_offset
            ) / 2) AS [Query_Text]
    ,dbname = UPPER(DB_NAME(b.dbid))
    ,b.objectid AS [Object_ID]
    ,b.*
FROM sys.dm_exec_query_stats a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
ORDER BY [Avg_CPU_Time] DESC
You can then tune these queries and the underlying indexes to resolve the CPU bottleneck. Also, configure your SQL Server to use all available CPUs on the machine. You can also scale up your SQL Server system by adding CPUs or upgrading to a new server with more and faster CPUs.
Database design issues: Highly normalized database
Poor database design leads to poor database performance. For example, a highly normalized database requires complex relational joins, which produce long-running queries that waste system resources such as CPU, memory and disk I/O. As a result, an overly normalized schema can significantly degrade SQL Server and database performance. As a general rule, consider redesigning the database if frequently executed queries require five or more table joins.
Duplicate and unused indexes
Indexes are the solution to many performance problems, but having too many indexes on frequently updated tables incurs overhead because SQL Server performs extra work to keep indexes up to date during insert, update and delete operations. The more numerous and complex the indexes on a table, the longer the database engine takes to modify its data. Index maintenance can also increase CPU and I/O usage, which is detrimental to performance in a write-intensive system. Remove any duplicate and redundant indexes, as they are a pointless drain on system resources.
SQL Server provides the sys.dm_db_index_usage_stats DMV, which can be used to identify unused indexes. This DMV reports statistics on how each index has been used to resolve queries. Alternatively, you can run the Database Engine Tuning Advisor (DTA) to identify unused indexes. For more information, see Tutorial: Database Engine Tuning Advisor.
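As a sketch of that approach, the following query lists nonclustered indexes in the current database that have recorded no seeks, scans or lookups. Because sys.dm_db_index_usage_stats is reset when SQL Server restarts, run it only after the server has been up long enough to cover a representative workload:

SELECT OBJECT_NAME(i.[object_id]) AS table_name
    ,i.name AS index_name
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
    ON s.[object_id] = i.[object_id]
    AND s.index_id = i.index_id
    AND s.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.[object_id], 'IsUserTable') = 1
    AND i.type_desc = 'NONCLUSTERED'
    AND (s.index_id IS NULL
        OR s.user_seeks + s.user_scans + s.user_lookups = 0)

Review each candidate before dropping it; an index may exist to support a rare but critical report or to enforce a uniqueness constraint.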
To remove duplicate indexes, see my article about detecting and removing duplicate indexes.