
Get SQL Server log shipping functionality without Enterprise Edition

Log shipping provides great high availability and disaster recovery, but only after paying the hefty licensing costs for SQL Server Enterprise Edition.

Log shipping in SQL Server is an excellent technique for high availability or disaster recovery. However, it is an extremely expensive one. To use native log shipping, you must purchase the Enterprise Edition of Microsoft SQL Server. With per-CPU licensing on a dual-chip machine, this can easily mean an increase in price of about $35,000 per server. While there are other reasons to purchase SQL Server Enterprise Edition, if log shipping is the driving force, an additional $70,000 is a very pricey solution. Unlike SQL Server clustering -- which has an Active node and a Passive node -- log shipping requires two active nodes, meaning both hosts must be licensed.

While the log shipping solution I'll discuss in this tip does not remove the need for two active nodes, it does not require Enterprise Edition. In fact, if you're using SQL Server Express, this solution will work provided you use the Task Scheduler to schedule your jobs instead of the SQL Server Agent. I talked a little about using the Task Scheduler in Tips for scheduling and testing SQL Server backups.
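For the SQL Server Express case, the backup can be driven by a Windows scheduled task instead of a SQL Server Agent job. The sketch below assumes the T-SQL from the backup job steps has been saved to a script file; the instance name, script path and five-minute schedule are examples, not part of the original tip:

```
REM Create a scheduled task that runs the log backup script every five minutes.
REM Adjust the instance name (.\SQLEXPRESS) and script path for your environment.
schtasks /Create /TN "Backup Northwind Log" /SC MINUTE /MO 5 ^
 /TR "sqlcmd -S .\SQLEXPRESS -E -i C:\Scripts\BackupNorthwindLog.sql"
```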


A few terms used throughout this tip:

  • Active node -- The node running the database in read-write mode.
  • Primary machine -- The Active node.
  • Passive node -- The node running the database in read-only mode. This is the machine that is receiving the log backups.
  • Backup machine -- The Passive node.

Pros and cons

There are some differences to note between this solution and the native solution. Before you select this solution over the native solution, be sure you understand the differences so you can make an informed decision.

The native solution is very robust and allows for immediate failback upon failing over to the backup machine. However, it is expensive and its logging ability is lacking to say the least. Depending on the problem, it could simply fail with an error number and provide no additional information to lead you to the cause of the problem.

Log shipping without SQL Server Enterprise Edition is much less expensive, but it does have a couple of drawbacks. It currently does not include a way to fail back to the original host without setting up the entire solution in reverse. While this is normally not an issue, it is something to be aware of. Its logging, while not fantastic, is usually better. But because it uses the native BACKUP LOG command and DOS commands for moving log backups between machines, the error data can be lacking on occasion.

Both solutions are designed to recover gracefully from a downed passive node, allowing you to take that passive node down for maintenance. Once you bring it back online, it will take time before the database servers have caught up and both systems are in sync.

Getting started

You'll need two machines and it's recommended that they have the same drive letter and folder layout. While this is not required, it does make the setup and management of this system much easier. It's also better if the drives are the same size -- especially the drive holding the transaction log. When restoring log files, we'll also be restoring database maintenance commands, such as index rebuilds. These commands grow the log on the machine where they normally run. They will also grow the transaction log on the passive node, because file growths are logged commands, which are then sent to the backup machine when the log is shipped.

I create a network share on the drive for the database backups called BackupFolder, which makes working with the network share easier. The domain account that the SQL Server is running under will need access to this network share. I highly recommend running your SQL Server under a domain account.
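Creating the share and granting the service account rights can be done from the command line. This is a sketch only -- the drive letter, domain and account name are examples, not part of the original tip:

```
REM Share the backup folder and give the SQL Server service account full access.
REM Replace MYDOMAIN\sqlservice with the domain account your SQL Server runs under.
net share BackupFolder=N:\BackupFolder /GRANT:MYDOMAIN\sqlservice,FULL
```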

In addition to the regular drives needed for the databases to function, you should add an additional hard drive to the backup machine, which will be used to hold the log backups. This drive should be at least twice the size of the largest log backup you're anticipating. On most systems, this will be the log backup taken directly after the database maintenance completes. Local storage or SAN storage is best. You can use network storage, but it may impact your backup and restore performance -- especially if the network machine doesn't have enough memory for file sharing operations. In addition, using a network share introduces a single point of failure into the system.

In addition to the technical items, you'll need to know -- from a business perspective -- what the acceptable amount of data loss is for the system. Any log shipping solution provides a good solid system backup solution, but in the event of a primary system failure, data that has not shipped to the backup system will be lost. As the DBA, you should make a recommendation based on your knowledge of the database, but senior management should ultimately decide how much data loss is acceptable. If less than one to two minutes or zero data loss is the answer, then log shipping is not the correct solution for your needs. In that case, you need a solution like clustering, which is very different. I have personally used this log shipping solution on a very busy database (tens of thousands of transactions per second) with the job on the primary server running every five minutes without an issue. However, your mileage may vary.

Setting up the log shipping solution

Setting up log shipping is fairly easy, and it functions essentially the same way from SQL Server 7.0 through SQL Server 2005. There are some minor differences between SQL Server 7.0 and SQL Server 2000/2005, which are noted in the code below.

The sample code here uses the Northwind database. To use your own database, simply replace Northwind with the name of your database. The sample code uses the machine name of "backupsql" for the backup machine. This should also be changed to fit the name of your backup machine.

This entire solution is run from two SQL Server Agent jobs -- one on the primary machine and one on the backup machine. I call the job on the primary machine "Backup Northwind Log" and the job on the backup machine "Restore Northwind Log." Job names can be changed to fit within your naming convention. Simply change the last step of the job on the primary machine to match your backup machine's name and the restore job's name. While looking through the job step code, you will notice that the xp_cmdshell procedure is not used, in keeping with security best practices.

The only additional piece of setup information is that the user who runs the jobs needs access to run job steps of the type "Operating System Command." The account that runs the primary server also must be able to log into the backup server's database instance and start a job. In SQL Server 2005 and above, this can be done with a proxy account, as needed.
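In SQL Server 2005, the proxy setup mentioned above might look something like the following sketch; the credential, proxy name, account and password are examples, not part of the original tip:

```sql
-- Create a credential for the Windows account that will run the job steps,
-- then a SQL Server Agent proxy allowed to run Operating System (CmdExec) steps.
USE master
CREATE CREDENTIAL LogShipCredential
    WITH IDENTITY = 'MYDOMAIN\sqlservice', SECRET = 'StrongPasswordHere'
USE msdb
EXEC dbo.sp_add_proxy @proxy_name = 'LogShipProxy',
    @credential_name = 'LogShipCredential'
EXEC dbo.sp_grant_proxy_to_subsystem @proxy_name = 'LogShipProxy',
    @subsystem_name = 'CmdExec'
```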

If you are already taking transaction log backups of your SQL Server, disable those prior to beginning the next section.
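Disabling an existing log backup job can be done with sp_update_job; the job name below is a placeholder for whatever your existing job is called:

```sql
-- Disable (but keep) the existing transaction log backup job
EXEC msdb.dbo.sp_update_job
    @job_name = 'Existing Northwind Log Backup',
    @enabled = 0
```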

The code

Before you set up the jobs, first put the database on the primary server into full recovery mode (bulk logged will also work, but full is recommended). Then perform a full backup of the database.
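The recovery model change itself is a one-line command on SQL Server 2000 and later (on SQL Server 7.0, the equivalent is turning off the 'trunc. log on chkpt.' database option with sp_dboption):

```sql
-- Put the database into full recovery mode so log backups capture all activity
ALTER DATABASE Northwind SET RECOVERY FULL
```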

BACKUP DATABASE Northwind TO DISK='\\backupsql\BackupFolder\northwind.bak'

Once the backup is complete, restore the database to the backup server and place the database into standby mode.

RESTORE DATABASE Northwind FROM DISK='n:\BackupFolder\northwind.bak' WITH
STANDBY = 'n:\MSSQL\MSSQL.1\MSSQL\Backup\Northwind.undo'

Once the backup and restore are complete, it's time to set up the jobs. All of the T-SQL job steps should be run from the master database. Begin with the job on the primary server.

Step 1: Check to see if a backup is already running. This job step will be a T-SQL command and it should be modified so that if the job step fails, success is reported. Do this on the advanced tab of the job step by changing the On Failure Action to "Quit the job reporting success." Set this job to report success so that if the full backup or differential backup is running, you can stop the job without sending out an alert. SQL Server can only run one backup of any one database at a time. If you don't use this job step, the next step of the job will fail.

IF EXISTS (select *
from sysprocesses
where cmd = 'BACKUP DATABASE'
and dbid = DB_ID('Northwind'))
raiserror ('Database is currently being backed up', 16, 1)

Step 2: Back up the transaction log to the backup machine. This job step will be a T-SQL command.

BACKUP LOG Northwind TO DISK='\\backupsql\BackupFolder\Northwind.log' WITH NOINIT, NOSKIP

Step 3: Start the restore job on the backup server. This job step will be an Operating System Command running osql or sqlcmd, depending on your SQL Server version. This job step should be modified so if the job step fails, success is reported. We set this job to report success upon failure because this job may fail on occasion if the restore job takes longer to process than the backup job does. This happens when restoring the logs from the database maintenance or if large updates or deletes are done.

sqlcmd -S BackupSQL -E -Q "exec msdb.dbo.sp_start_job 'Restore Northwind Log'"

Setting up the job on the restore server

This job has six steps, and the code is a bit more complex.

If you are using this solution on SQL Server 7.0, an additional step is required before setting up the job: a stored procedure that returns the number of log backups contained within the backup file. Create this stored procedure in the master database. The only job step that differs between versions is Step 5; the other five steps are the same regardless of the SQL Server version.

-- The original listing of this procedure was truncated; this is a reconstructed
-- sketch that simply returns the RESTORE HEADERONLY result set for the file
CREATE PROCEDURE sp_GetRestoreCount
@FileName AS varchar(1000)
AS
RESTORE HEADERONLY FROM DISK = @FileName

Step 1: Check to see if a failure has occurred. This job step is a T-SQL command. If this step fails, then something went wrong in the past and the job has not run since then. Manually try to restore the current file. If the file restores correctly, then delete the .failure file and allow the job to run normally. Do not delete the .failure file until the issue has been resolved.

DECLARE @Exists int
EXEC master.dbo.xp_fileexists 'n:\MSSQL\MSSQL.1\MSSQL\Backup\Northwind.failure', @Exists OUTPUT
IF @Exists = 1
RAISERROR ('There was a prior failure restoring the log to this database', 16, 1)

Step 2: Kill any sessions that happen to be using the database you're restoring to. This job step is a T-SQL command.

DECLARE @spid as varchar(10)
DECLARE @CMD as varchar(1000)
DECLARE cur CURSOR FOR SELECT spid FROM master.dbo.sysprocesses WHERE dbid =
(SELECT dbid FROM sysdatabases WHERE name = 'Northwind')
OPEN cur
FETCH NEXT FROM cur INTO @spid
WHILE @@FETCH_STATUS = 0
BEGIN
SET @CMD = 'kill ' + @spid
EXEC (@CMD)
FETCH NEXT FROM cur INTO @spid
END
CLOSE cur
DEALLOCATE cur

Step 3: Remove the file you processed in the last loop. This is an operating system command.

DEL n:\RestoreFolder\Northwind.2.log

Step 4: Move the file that the primary server has been backing up to into place for restore. This is an operating system command. We do this so the primary server can continue to back up the log without error while the restore command is running.

MOVE n:\RestoreFolder\Northwind.log n:\RestoreFolder\Northwind.2.log

Step 5: (SQL 2000 and later): Restore the transaction log into the database. This is a T-SQL command. The RESTORE HEADERONLY command tells you how many backups there are within the backup file. In SQL Server 2000 and above, this command returns a record set whose size is reported through @@ROWCOUNT. In SQL Server 7.0 it does not, so use the stored procedure to run this command and load the data into a temporary table (see below). Then run through a loop and process all of the log backups in order until you get to the last one. Keep the database in standby mode while looping through the logs. This step needs to be modified so that on success, the job completes reporting success, and on failure, the job continues on to Step 6.

DECLARE @i int
DECLARE @j int
SET @j = 1
SET @i = 0
RESTORE HEADERONLY FROM DISK='n:\RestoreFolder\Northwind.2.log'
SET @i = @@ROWCOUNT
WHILE @i+1 > (@j)
BEGIN
RESTORE LOG Northwind FROM DISK='n:\RestoreFolder\Northwind.2.log'
WITH FILE = @j,
STANDBY = 'n:\MSSQL\MSSQL.1\MSSQL\Backup\Northwind.undo'
SET @j = @j + 1
END

Step 5: (SQL 7.0): Restore the transaction log into the database. This is a T-SQL command. Because you need to use a stored procedure to get the number of backups in the file, you must create a temporary table with the same format as the output from the RESTORE HEADERONLY command, which you can then insert the data into. This step needs to be modified so that on success, the job completes reporting success, and on failure, the job continues on to Step 6.

DECLARE @i int
DECLARE @j int
SET @j = 1
SET @i = 0
CREATE TABLE #HeaderTable
(BackupName varchar(255),
BackupDescription varchar(255),
BackupType int,
ExpirationDate datetime,
Compressed int,
Position int,
DeviceType int,
UserName VarChar(255),
ServerName varchar(255),
DatabaseName varchar(255),
DatabaseVersion int,
DatabaseCreationDate datetime,
BackupSize numeric(24,0),
FirstLsn numeric(24,0),
LastLsn numeric(24,0),
CheckpointLsn numeric(24,0),
DatabaseBackupLsn numeric(24,0),
BackupStartDate datetime,
BackupFinishDate datetime,
SortOrder int,
CodePage int,
UnicodeLocaleID int,
UnicodeComparisonStyle int,
CompatibilityLevel int,
SoftwareVendorID int,
SoftwareVersionMajor int,
SoftwareVersionMinor int,
SoftwareVersionBuild int,
MachineName varchar(255))
INSERT INTO #HeaderTable
EXEC master.dbo.sp_GetRestoreCount 'n:\RestoreFolder\Northwind.2.log'
SET @i = (SELECT COUNT(*) FROM #HeaderTable)
DROP TABLE #HeaderTable
WHILE @i+1 > (@j)
BEGIN
RESTORE LOG Northwind FROM DISK='n:\RestoreFolder\Northwind.2.log'
WITH FILE = @j,
STANDBY = 'n:\MSSQL\Backup\Northwind.undo'
SET @j = @j + 1
END

Step 6: Write a file to stop the restore job from running until the failure is fixed. This is an operating system command.

echo test > n:\MSSQL\MSSQL.1\MSSQL\Backup\Northwind.failure

After you have created these jobs, manually run the BACKUP LOG command once from the primary server. Then manually start the job on the restore server at Step 3 (after first making sure there are no active spids running on the backup server). After this, schedule the job on the primary server to run.

I hope you find this solution to use log shipping without SQL Server Enterprise Edition as scalable and flexible as I have. I've personally used this solution -- or a slight variant of it -- at three companies without issue, and I've given it to several others on other forums who have found it very useful in their shops.


Denny Cherry has over a decade of experience managing SQL Server, including an over-175-million-user installation, one of the largest in the world. Denny's areas of expertise include system architecture, performance tuning, replication and troubleshooting. He currently holds several Microsoft certifications related to SQL Server and is a Microsoft MVP.

Check out his blog: SQL Server with Mr. Denny.


Do you have a comment on this tip? Let us know.

One thing that is missing from this tip is the suggestion to clean up your backup/restore history, particularly the restore history on the secondary/backup/passive server. I recently inherited several systems that had roll-your-own log shipping solutions. I had msdb database sizes of 2GB because the solution did not clear out the restore history tables. The more frequent your transaction log backups/restores, the quicker msdb will fill up.

If you do end up with a large msdb because of this, some indexes added to a couple of the tables will speed up the execution of sp_delete_backuphistory:

CREATE NONCLUSTERED INDEX ix_mediasetid ON backupset(media_set_id)
CREATE NONCLUSTERED INDEX ix_restorehistoryid ON restorefile(restore_history_id)
CREATE NONCLUSTERED INDEX ix_restorehistoryid2 ON restorefilegroup(restore_history_id)

Run DBCC UPDATEUSAGE(0) WITH COUNT_ROWS to update table usage by counting every row.

Finally, run sp_updatestats 'resample' when done.
—Scott D.


This tip is great advice. I have used it in a test environment for approximately one month and it works very well. This weekend, I'll put it in production; thanks for sharing your knowledge.
—Angel M.

