VB.NET timing out when backup changed to hourly

After changing transaction log backups from daily to hourly, one ITKnowledge Exchange member noticed that VB.NET was timing out. Find out what advice his peers have to offer.

The following question was posed to ITKnowledge Exchange members. Read the problem then peruse some of the responses. Click here for the complete discussion.

Question

Member "PaulCHsu" writes: "We recently changed our backup schedule for transaction logs from daily to hourly. Now my VB.NET 2005 program has started getting a 'fail to read table' error (maybe caused by a timeout) while the backup is executing. Is there any way we can prevent this?"

Answer #1

Member "Schoenbaum" writes: "Rather than doing a direct backup of the transaction log while the database is in use, you should consider using log shipping to replicate the transaction log to a standby server."

Answer #2

Member "DaveInAZ" writes: "I'd say your guess about a timeout is probably correct. Depending on what database and connection method you're using, you should be able to increase a setting or argument called something like 'ConnectionTimeout', either in your application or on the server (or both!).
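In ADO.NET's SqlClient, there are actually two separate knobs, and the query-level one is usually the one that fires when a read is blocked by a backup. A minimal sketch (the server, database, and table names are placeholders, and a reachable SQL Server is assumed):

```vbnet
Imports System
Imports System.Data.SqlClient

Module TimeoutSettings
    Sub Main()
        ' 'Connect Timeout' (seconds) only governs opening the connection.
        Dim connStr As String = "Server=myServer;Database=myDb;Integrated Security=True;Connect Timeout=30"

        Using conn As New SqlConnection(connStr)
            conn.Open()
            Using cmd As New SqlCommand("SELECT COUNT(*) FROM dbo.MyTable", conn)
                ' CommandTimeout (seconds) is what expires when a running
                ' query is blocked; the SqlClient default is 30.
                cmd.CommandTimeout = 120
                Console.WriteLine(cmd.ExecuteScalar())
            End Using
        End Using
    End Sub
End Module
```

Raising CommandTimeout on the long-running reads is typically enough; Connect Timeout only matters if the error occurs while opening the connection itself.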

But the best way to prevent this would be to upgrade either the database server or its administrator. Unless your system is prehistoric, or so overstressed and underpowered that it is totally unreliable, it's unnecessary to back up the log that often."

Answer #3

Member "BeerMaker" writes: "The only way to prevent this is to change the connection timeout in several places. The timeout is triggered on the client side when a response has not been returned before the timeout expires. Set the timeout on (a) the database connection object or (b) the recordset (overriding the connection timeout). Getting a timeout on a read sounds curious. Have you checked to see if you have some other problem? For example, what kind of record locks are you requesting (optimistic?). To speed up your reads, verify you are using client-side cursors and make sure you're doing an indexed read."
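One way to confirm whether the 'fail to read table' error really is a client timeout is to inspect the SqlException: SqlClient reports client-side timeouts with error number -2. A sketch that also retries once on the theory that the backup window has passed (the connection string and wait time are illustrative assumptions):

```vbnet
Imports System
Imports System.Data.SqlClient
Imports System.Threading

Module TimeoutCheck
    ' Runs a scalar query; on a client-side timeout (SqlException.Number = -2)
    ' it waits and retries once, then rethrows anything else.
    Sub ReadWithOneRetry(ByVal connStr As String, ByVal sql As String)
        Dim attempt As Integer = 0
        While True
            attempt += 1
            Try
                Using conn As New SqlConnection(connStr)
                    conn.Open()
                    Using cmd As New SqlCommand(sql, conn)
                        Console.WriteLine(cmd.ExecuteScalar())
                    End Using
                End Using
                Return
            Catch ex As SqlException
                If ex.Number = -2 AndAlso attempt = 1 Then
                    Thread.Sleep(10000) ' let the hourly log backup finish
                Else
                    Throw ' a different error, or second timeout: surface it
                End If
            End Try
        End While
    End Sub
End Module
```

If the caught exception's number is not -2, the problem is something other than a timeout (locking, permissions, a dropped connection), which changes the diagnosis entirely.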

Answer #4

Member "mortree" writes: "Log shipping is better. Or, if you wrote your desktop application to use stored procedures on the SQL Server, you can have those central procedures manually write database files out for backup (watch for race conditions and expect desktops to pause during file writes). But SQL Agent is what most people use, with, say, 3-10 disk files for transaction logs to hit in rotation.

But as for an explanation, you might want to check how that new backup is defined. Errors sometimes creep in when making silly little changes. My biggest suspicion is accidental SINGLE_USER mode, assuming you didn't program a ridiculously short timeout by mistake. SINGLE_USER mode would bounce you immediately, without even waiting.

I assume that backup is going to another server's hard drive (fast) rather than to tape (minutes) or over a slow dial-up network connection. It may be doing a full backup, which means dropping into SINGLE_USER mode, or flushing transaction logs to the database, which can be similar in impact. The latter has to do with there being two ways to back up transaction logs. Read carefully.
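A quick way to check whether the database has been left in a restricted access mode is to query its UserAccess property; this is a sketch with a placeholder connection string and database name:

```vbnet
Imports System
Imports System.Data.SqlClient

Module AccessModeCheck
    Sub Main()
        ' Connect to master so the check works even if the target
        ' database is in SINGLE_USER mode and its one slot is taken.
        Dim connStr As String = "Server=myServer;Database=master;Integrated Security=True"

        Using conn As New SqlConnection(connStr)
            conn.Open()
            ' Returns MULTI_USER, SINGLE_USER, or RESTRICTED_USER.
            Dim sql As String = "SELECT DATABASEPROPERTYEX('myDb', 'UserAccess')"
            Using cmd As New SqlCommand(sql, conn)
                Console.WriteLine(cmd.ExecuteScalar())
            End Using
        End Using
    End Sub
End Module
```

Anything other than MULTI_USER would explain the application being bounced during the backup window.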

I seem to remember there is also a way to programmatically tell SQL Server to start another transaction log file while you select an old one to back up. But programming SQL wasn't my forte."

Answer #5

Member "rlreid" writes: "If your transaction dumps are killing your apps, they are taking too long and need to be done more frequently or on faster hardware. What, precisely, is the error returned by the database library you are using (and what library is it? OLE DB, MS dblib, FreeTDS, etc.)? What hardware holds the translog and the dump file, and are they sharing it with anything else? What's your checkpoint interval?

Assuming the hardware is appropriate, it sounds like you should be dumping logs a lot more often. Dumping more often keeps the log short, so each dump finishes faster.
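Scheduling the dump itself is normally a SQL Agent job, but for illustration the log backup rlreid describes boils down to a single statement issued against the server. A sketch (database name, backup path, and connection string are all hypothetical):

```vbnet
Imports System
Imports System.Data.SqlClient

Module LogDumpDemo
    Sub Main()
        Dim connStr As String = "Server=myServer;Database=master;Integrated Security=True"

        Using conn As New SqlConnection(connStr)
            conn.Open()
            ' Dump the transaction log to disk; a short log backs up quickly,
            ' keeping the window in which readers can be blocked small.
            Dim sql As String = "BACKUP LOG myDb TO DISK = N'D:\dumps\myDb_log.trn'"
            Using cmd As New SqlCommand(sql, conn)
                cmd.CommandTimeout = 0 ' 0 = no client timeout on the backup itself
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Module
```

Dumping to a local disk, then copying the .trn file to tape, is the pattern rlreid recommends below.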

The transaction logs should be on their own, very fast device (i.e., NOT on SATA, USB, etc.!). The dumps should be to disk, not tape (you can then copy the dump file to tape).

SATA is OK for dumps; USB is not - too slow. SATA is NOT OK for trans logs - unless it's part of a high-end iSCSI array.

Most of our databases dump logs every 10 minutes, and I have never had an issue with the trans dump blocking other work. The only time I've had issues was when I didn't realize the tempdb translog was on a vanilla SATA disk array. Increasing the timeout in the application is doable, but it's frankly a hack, not a solution."

This was first published in April 2006