We are facing a serious SQL Server timeout problem in our application. We are using VB.NET and ADO.NET to develop the application against a SQL Server 2000 database. SQL Server is installed on a server running Windows Server 2003, and about 50 users connect to it now. How can I troubleshoot the problem?
It is necessary to break the performance issue down by tier. On the SQL Server side, I would recommend leveraging a tool (such as SQL Profiler) to determine which objects require the most resources. From that point forward, determine techniques to improve the most critical issues.
Yes -- in an ideal configuration, it is beneficial to place the data and the indexes in separate filegroups for large databases, on hardware with numerous controller cards and hard drives to split the I/O. I would baseline performance with your existing hardware and database, then make the change and measure the gain. Good luck!
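As a starting point for the timeout investigation, a minimal sketch (SQL Server 2000) to spot blocking and resource-heavy sessions might be:

```sql
-- Sessions that are currently blocked, and by whom
SELECT spid, blocked, waittype, waittime, lastwaittype, cpu, physical_io
FROM master..sysprocesses
WHERE blocked <> 0;

-- Overview of current activity per session
EXEC sp_who2;
```

If timeouts coincide with nonzero `blocked` values, the problem is more likely long-running or blocking transactions than raw hardware.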
I am using SQL Server 2000 and it is working very slowly. There is a table containing 50,842 rows and even a simple select command takes 15 seconds to retrieve all records in the Query Analyzer. In ASP.NET it takes almost one minute to fill the data adapter. Do you have any suggestions?
I would find out if the table has been indexed or if maintenance has been conducted lately. To determine if the table has any indexes, issue sp_help for the table to determine all related objects. I would also contact your DBAs to determine the maintenance schedule and find out if this table has been included in the maintenance procedures.
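As a quick sketch (the table name `Orders` is hypothetical), checking a table's indexes and fragmentation in SQL Server 2000 might look like:

```sql
-- List columns, indexes and constraints for the table
EXEC sp_help 'Orders';

-- Show just the indexes
EXEC sp_helpindex 'Orders';

-- Report fragmentation (SQL Server 2000 syntax)
DBCC SHOWCONTIG ('Orders');
```

A heap (no indexes) or heavy fragmentation in the DBCC SHOWCONTIG output would explain a 15-second scan of ~50,000 rows.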
As a primer, I would recommend the following tips:
- SQL Server performance-tuning worst practices
- Checklist: Load testing -- the magic bullet for SQL Server performance tuning
I think this clustering question comes down to cost and server management more so than raw performance. In the first scenario, one server handles all of the processing and the second is idle until a failover occurs, so you will have the same performance with the second server as you do with the first. In the second scenario, both servers would need to operate at half capacity (i.e., 50% CPU usage) so that if an issue occurs, the surviving server can handle its original 50% of the load plus the failed server's 50%.
If both servers are running at 100% capacity, then when a failure occurs, the surviving server would need to run at 200% capacity, and performance will not be able to support the organization. This would certainly be detrimental. A third option may be two servers with four CPUs each, running at 50% capacity, so that one server can support the full load after a failover. From a performance perspective, plan for the worst-case scenario, which is probably your best bet for high availability/disaster recovery, and select either option one or option three. I hope this all makes sense!
This is a great question. A single clustered index can be created on a table, and this index physically sorts the entire table's data based on the values in the index key. A table can also have up to 249 nonclustered indexes in SQL Server 2000. A nonclustered index maintains its own sorted structure with pointers back to the data rows; it does not physically reorder the table.
A table can have one of the following index configurations:
- No indexes
- A clustered index
- A clustered index and many nonclustered indexes
- A nonclustered index
- Many nonclustered indexes
For more information about clustered indexes visit:
For more information about nonclustered indexes visit:
I hope this is helpful to you!
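As a brief sketch (table and column names are hypothetical), the two index types are created in T-SQL like this:

```sql
-- Physically orders the table by OrderID (only one allowed per table)
CREATE CLUSTERED INDEX IX_Orders_OrderID
    ON Orders (OrderID);

-- Separate sorted structure with pointers back to the rows
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON Orders (CustomerID);
```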
In practice, with server cost in mind, my recommendation would be to group the databases on one RAID array and the transaction logs on a separate RAID array. Properly configured hardware is a core component of SQL Server performance; stay tuned for additional tips on hardware configurations.
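As a sketch of that separation (database name, file sizes and drive letters are hypothetical; assume D: is the data array and L: is the log array):

```sql
-- Data file on the data array, log file on the log array
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_Data,
     FILENAME = 'D:\SQLData\Sales_Data.mdf',
     SIZE = 500MB, FILEGROWTH = 100MB)
LOG ON
    (NAME = Sales_Log,
     FILENAME = 'L:\SQLLogs\Sales_Log.ldf',
     SIZE = 100MB, FILEGROWTH = 50MB);
```

Separating the sequential log writes from the random data-file I/O is what yields the performance benefit.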
In Enterprise Manager you can right click on the job and select Stop Job.
From a T-SQL perspective, the command would be sp_stop_job. Another option for troubleshooting the issue would be a third-party product named SQL Sentry. I have used the product, recommend it and think it could help in troubleshooting the issue.
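As a minimal sketch (the job name is hypothetical), stopping a running SQL Server Agent job from T-SQL looks like:

```sql
-- sp_stop_job lives in msdb; stop a job by its name
EXEC msdb.dbo.sp_stop_job @job_name = 'Nightly Maintenance';
```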
I assume the optimizations you are mentioning are from the SQL Server Database Maintenance Plan Wizard. I would recommend issuing T-SQL commands rather than letting SQL Server handle this as a 'black box,' so to speak. Analyze your indexes to ensure that they are properly designed and maintained. I would recommend researching the following commands from a SQL Server maintenance perspective:
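The original command list did not survive here; as an assumption, the index maintenance commands usually cited for SQL Server 2000 are sketched below (database, table and index names are hypothetical):

```sql
-- Rebuild all indexes on a table (offline, holds locks)
DBCC DBREINDEX ('Orders');

-- Defragment one index while the table stays available
DBCC INDEXDEFRAG (Sales, 'Orders', 'IX_Orders_CustomerID');

-- Refresh the optimizer's statistics for the table
UPDATE STATISTICS Orders;
```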
We are trying to install a storage area network (SAN) for our database. The total database size (MDF) could grow to 200 GB in the next two years, while the total log file size (LDF) would not exceed 50 GB. The tempdb may be around 10 to 15 GB.
I could use your advice in determining the following information from a performance point of view:
- Location of the databases or files
- RAID level
- Number of files to use for the tempdb
- How to configure the SAN for the above requirements
At a high level, I would recommend having this discussion with your DBAs, network engineers, storage administrators and SAN vendor. Depending on the SAN, you can have a variety of options. I would physically split your databases from your transaction logs on the SAN, and split your SQL Server data from other processing. Plan how the SAN will be used in the long term so you do not run into I/O contention. Next, the RAID level is SAN dependent; the fastest and most costly would be RAID 1, but see what other options your vendor can offer. For a tempdb sized at 10 to 15 GB, I would recommend one MDF and one LDF. Finally, make sure you have redundant paths from your SQL Server to the SAN, so there are no single points of failure.
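As a quick sketch, you can verify where the tempdb files currently live, and their sizes, once the SAN layout is in place:

```sql
USE tempdb;
-- List tempdb's data and log files with their sizes and physical locations
EXEC sp_helpfile;
```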
ABOUT OUR EXPERT:
Jeremy Kadlec is the Principal Database Engineer at Edgewood Solutions, a technology services company delivering full-spectrum Microsoft SQL Server services in North America. He has authored numerous articles and white papers on various SQL Server topics and delivers frequent presentations at regional SQL Server Users Groups and nationally at SQL PASS. Jeremy also serves as a co-leader of the Northern Virginia SQL Server User Group. He is the author of The Rational Guide to IT Project Management and holds a Master's Degree in Information Systems from The University of Maryland, Baltimore County. Contact Jeremy at (410) 591-4683 or firstname.lastname@example.org.