Sometimes SQL Server performance tuning best practices do not hit home, but the worst practices do. In part one of this two-part series, we address a number of the worst practices for SQL Server performance found in the field. These relate primarily to system performance over the lifecycle of an application. We also make recommendations for correcting these practices and improving overall system performance.
Worst practice #1: System performance not a component of requirements analysis
Observation: Typically, system performance is not a consideration when the system is originally scoped and the requirements are gathered. Unfortunately, system performance becomes an emergency for IT managers when the system slows and users begin to complain about its responsiveness.
Recommendation: During the early stages of the project, ask probing questions about the number of users, the amount of data growth, response time and so on. In many cases, the users or the project sponsor cannot answer these questions early in the project, since no one knows how much the system is going to grow over the next one, three or five years.
But it is essential, from an IT perspective, to define those numbers based on the information available as a baseline set of expectations for the life of the application. Consider doubling the estimate for resources to support potential user demands. As the application is tested and released to the production environment, remember to revisit the estimates to make sure they are on target.
Worst practice #2: No dedicated development and test environments
Observation: Application development is conducted away from production, in a separate database on a production server or on a developer's workstation. Testing is kept to a minimum, and then the code is promoted to production, where the remainder of the code is inevitably "corrected" as users uncover issues and IT scrambles to address them.
Recommendation: Set up a small server with minimal resources to serve as a dedicated development environment for the entire team, running the same versions of Windows and SQL Server as production. Purchase a similar server to use as a dedicated test server. To limit licensing expenses, use an MSDN version of SQL Server or the Developer Edition of SQL Server 2000. These servers can prove valuable for functional, load and regression testing, and the team can validate performance before releasing the system to users, where poor SQL Server performance carries real consequences.
Worst practice #3: No load testing
Observation: Load testing is rarely, if ever, conducted unless the company has a dedicated QA/QC team with load-testing tools. This is largely because of the amount of time required to load test an application, as well as the cost of a load-testing tool. When a new release of the application goes to production without load testing by a QA/QC team with the appropriate tools, performance issues go undetected. Ultimately, IT must address the problems in emergency mode.
Recommendation: Implementing this recommendation can help you avoid costly downtime. Although a load-testing tool can be expensive and comprehensive load testing can be a daunting task, a reasonable amount of load testing should be performed to prevent issues from impacting users in the production environment. You can conduct basic load testing by using SQL Server Profiler to capture the transactions of a single user performing the major functions in the application. Then customize the T-SQL parameters for use with Database Hammer, a free tool that ships with the SQL Server 2000 Resource Kit.
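One simple way to turn a captured Profiler trace into a repeatable load script is to wrap the captured calls in a loop with varying parameters. A minimal sketch in SQL Server 2000-era T-SQL follows; the procedure name dbo.GetOrderHistory and its @CustomerID parameter are hypothetical stand-ins for whatever your trace captured:

```sql
-- Hypothetical load script: replay a captured call 1,000 times with
-- varying parameters. Run several copies concurrently (for example,
-- via Database Hammer or multiple osql sessions) to simulate users.
DECLARE @i INT
SET @i = 1
WHILE @i <= 1000
BEGIN
    -- dbo.GetOrderHistory is a placeholder for a procedure captured
    -- from the Profiler trace; the parameter is varied per iteration.
    EXEC dbo.GetOrderHistory @CustomerID = @i
    SET @i = @i + 1
END
```

Even a crude script like this, run from several sessions at once, will surface blocking and missing-index problems long before real users do.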
Worst practice #4: No SQL Server maintenance
Observation: Over the last year, I have consulted at a half-dozen companies where I witnessed little or no maintenance done on the SQL Servers. Some of these companies are on SQL Server 6.5 or 7.0 with hardware that is three to 10 years old. Others are leveraging new hardware with Windows 2003 and SQL Server 2000, but IT has never rebuilt an index and there is a large amount of fragmentation reported by DBCC SHOWCONTIG.
Recommendation: Addressing system performance should be an iterative process, but when it has fallen by the wayside, starting that process is a large undertaking. Do not postpone these tasks to the point where they become so large that they are out of control. For instance, it is common knowledge that you should change your car's oil every 3,000 miles. If you ignore your mechanic's recommendation, you can guess what the problem is when your car starts smoking, and then you pay dearly, even though routine maintenance would have prevented the situation easily. SQL Server is no different. The server cannot sit in a rack forever with no maintenance; if it does, then just like your car, there will be problems. Regularly scheduled maintenance reaps huge benefits for overall system performance.
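As a starting point, the fragmentation check and index rebuild described above might look like this on SQL Server 2000. The table, database and index names are placeholders; substitute your own, and test on the development server first:

```sql
-- Report fragmentation for a table (SQL Server 2000 syntax).
-- Watch "Scan Density" and "Logical Scan Fragmentation" in the output.
DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS, ALL_INDEXES

-- Offline rebuild of all indexes on the table with a 90% fill factor.
DBCC DBREINDEX ('dbo.Orders', '', 90)

-- Or, as a lighter-weight online alternative, defragment one index.
DBCC INDEXDEFRAG (MyDatabase, 'dbo.Orders', 'IX_Orders_CustomerID')

-- Refresh optimizer statistics after heavy data changes.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN
```

Scheduling commands like these in a SQL Server Agent job during a maintenance window turns the one-time cleanup into the routine maintenance the analogy calls for.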
What are some of the worst practices you have found? Let me know, then stay tuned for the next installment of SQL Server performance-tuning worst practices when we will offer more observations from the field and simple recommendations that will improve overall system performance.
Want more performance-tuning worst practices? Click for part two.
About the author: Jeremy Kadlec is the principal database engineer at Edgewood Solutions, a technology services company delivering professional services and product solutions for Microsoft SQL Server. He has authored numerous articles and delivers frequent presentations at regional SQL Server users groups and nationally at SQL PASS. Jeremy is the SearchSQLServer.com Performance Tuning expert. Ask him a question here.