More on log shipping

In this post I will follow up on my previous post on log shipping. I did some tests and will give you a detailed explanation of how to set it up for SQL 2005, including the preparations and what to do to switch from the primary to the backup server. I also looked into SQL 2000 compatibility, and that is bad news: you cannot log ship from SQL 2000 to SQL 2005. It would have been such a nice way to migrate a database from one version to another, but it just won't work.

Preparing for log shipping
There are not a lot of preparations to make:

  • both primary and backup server must be part of the same domain or there must be a trust between the domains
  • SQL Server and SQL Server agent should run using a domain account
  • all logins which are used to access the database should be present on the backup database server
  • a share must be available which this domain account has write access to
  • the share must have enough space to store the transaction logs (and optionally a full backup)
  • create a directory per database you want to use in log shipping
  • the database should not yet exist on the backup server
  • check that the primary database is not part of any other backup / maintenance plan (another plan taking log backups would break the log chain)

The requirement that the database does not yet exist on the backup server is not a hard one, but if it does exist, it must be based on a very recent full dump of the primary database. In practice, it is easier to remove the database and let the setup create the full dump and restore it on the backup machine. In that case you do need additional space for the full dump on your share.
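If you take care of the initial database yourself (the third option further down in the setup dialog), it boils down to a plain full dump restored without recovery. A minimal sketch, with [dbname] and the share path as placeholders:

    -- on the primary: full dump to the prepared share
    backup database [dbname]
        to disk = N'\\server\share\dbname\dbname_full.bak'
        with init

    -- on the backup server: restore it so it can accept transaction logs
    restore database [dbname]
        from disk = N'\\server\share\dbname\dbname_full.bak'
        with norecovery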
Preparing the logins up front keeps the amount of work to a minimum when you have to fail over. If possible, use AD groups as logins, because then you do not have to worry about finding out which user accounts are needed when things go wrong. You are also advised to use database roles in your database to grant permissions.
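To illustrate the login preparation: pre-creating an AD group as a login on the backup server is a one-liner. DOMAIN\MyAppUsers is a made-up example group:

    -- on the backup server: same Windows group as used on the primary
    create login [DOMAIN\MyAppUsers] from windows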

Setting it up
To set up log shipping for a database, execute the following steps. You can do everything from a single instance (or, in fact, from any system that has SQL Server Management Studio installed):

  • select the source or primary database
  • from the context menu, select tasks | ship transaction logs ...
  • enable this as a primary database in a log shipping configuration
  • click 'backup settings'
  • specify the location for the transaction logs
    • use the share and directory you prepared earlier (preferred)
    • or use a local path (in which case the backup server has to pick the files up there, so this location must be shared as well)
  • specify the time frame during which logs are kept online (default: 72h)
  • specify the time frame to send an alert if no backup was made (default: 1h)
  • specify a job name and a schedule (default: LSBackup_[dbname] / every 15 minutes)
  • review and confirm the backup settings

  • click add on the primary log shipping dialog to add at least one backup server
  • use the 'connect' button to connect with the backup server
  • specify the name of the backup database (default: same as primary)
  • specify how to create the backup database
    • let SQL Server generate a full dump and restore that one (preferred)
    • let SQL Server restore the database from an existing full backup
    • or you take care of setting up the initial database
  • go to the 'copy files' tab
  • specify a temporary working location (preferred: local directory)
  • specify the time frame to keep files in the temporary working location (default: 72h)
  • specify a job name and a schedule (default: LSCopy_[source]_[dbname] / every 15 minutes)
  • go to the 'restore transaction logs' tab
  • select database state of the backup database
    • no recovery mode (default)
    • standby mode (preferred; see below)
  • select 'disconnect users to restore log'
  • delay restoring backup at least ... (default: 0 minutes)
  • alert if no restore occurs within ... (default: 45 minutes)
  • review and confirm the backup server settings
  • repeat above for additional servers if necessary
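For reference: the restore settings on this tab map to the sp_add_log_shipping_secondary_database procedure, which the wizard executes on the backup server. A rough sketch of that fragment (the parameter list is abbreviated and from memory, so verify it against the script the wizard can generate for you; times are in minutes):

    -- on the backup server
    exec master.dbo.sp_add_log_shipping_secondary_database
        @secondary_database = N'[dbname]',
        @primary_server = N'PRIMARYSERVER',
        @primary_database = N'[dbname]',
        @restore_delay = 0,        -- delay restoring backup at least ...
        @restore_mode = 1,         -- 0 = no recovery, 1 = standby
        @disconnect_users = 1,     -- disconnect users to restore log
        @restore_threshold = 45    -- alert if no restore occurs within ...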

  • (optionally) check 'use a monitor server instance'
  • click 'settings' to specify the monitor server
  • click 'connect' to specify the name of the monitor server
  • specify how to connect to the monitor server
    • use the proxy account (default; will use the SQL Server agent service account)
    • use a specific SQL account (specify name and password)
  • specify how long history information should be retained (default: 96h)
  • specify name of the alert job (default: LSAlert_[monitorserver])
  • review and confirm the monitor server settings

  • (optionally) script and save the complete settings as T-SQL script
  • click 'OK' to start setting up the log shipping configuration
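The script option is worth using: it documents the configuration and makes it easy to repeat for other databases. As an impression, the primary side of the generated script revolves around sp_add_log_shipping_primary_database; a heavily abbreviated sketch (values match the defaults mentioned above, in minutes; verify the details against your own generated script):

    -- on the primary server (4320 min = 72h, 60 min = 1h)
    exec master.dbo.sp_add_log_shipping_primary_database
        @database = N'[dbname]',
        @backup_directory = N'\\server\share\dbname',
        @backup_share = N'\\server\share\dbname',
        @backup_job_name = N'LSBackup_[dbname]',
        @backup_retention_period = 4320,
        @backup_threshold = 60,
        @threshold_alert_enabled = 1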

This might seem like a lot of work, but once you have followed this procedure a few times you will see the logic in the various settings. A lot of the defaults are very reasonable, so you only have to review them. The amount of work done by the monitor server is minimal, but if you have an additional server available, it makes sense to set it up: it provides a central place to keep information about the status of the log shipping.
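When a monitor is configured, you can ask it for the status of all primary / backup pairs with sp_help_log_shipping_monitor; it returns, among other things, the time since the last backup, copy and restore, and whether a threshold was exceeded:

    -- on the monitor server: one result row per primary / backup database
    exec master.dbo.sp_help_log_shipping_monitor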
With regards to the database state: by default it will be set to 'no recovery', which means you cannot look inside the backup database. Normally you do not need to: the backup is there to take over in case the primary fails. However, 'standby mode' makes a lot of sense too: you can use the backup system for reporting, or use it as a DBA to verify that the process is working correctly. Trusting SQL Server is all good and well, but I like to see that changes have arrived in the target database, and standby mode allows just that.
Last but not least: the schedules. By default the three jobs will all run every 15 minutes at the exact same time. This might give problems, because they should run in sequence. By changing the start times to 0:00:00 / 0:05:00 / 0:10:00 (backup / copy / restore) you achieve two things: the jobs will not interfere with each other, and the lag will be shorter. If you have large backup files, you may want to increase the delay before restoring to give the copy process more time to complete.
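The start times can be changed from the job properties in SQL Server Agent, or with msdb.dbo.sp_update_schedule. A sketch, assuming wizard-generated schedule names (check msdb.dbo.sysschedules for the actual names on your servers; times are hhmmss integers):

    -- backup at :00 on the primary
    exec msdb.dbo.sp_update_schedule
        @name = N'LSBackupSchedule_PRIMARYSERVER1', @active_start_time = 000000
    -- copy at :05 and restore at :10 on the backup server
    exec msdb.dbo.sp_update_schedule
        @name = N'LSCopySchedule_BACKUPSERVER1', @active_start_time = 000500
    exec msdb.dbo.sp_update_schedule
        @name = N'LSRestoreSchedule_BACKUPSERVER1', @active_start_time = 001000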

Failing over
If, at a certain point, the primary server becomes unavailable, you have this nice backup server to replace it. However, there are a number of things you have to do before you can really use this backup server.

  • try to make a last transaction dump from the primary server (if possible)
  • restore any missing transaction dumps (see the sketch after this list)
  • either in the last restore or all by itself issue a 'with recovery' command:

    restore database [dbname] with recovery 

  • check / update security
  • change connection strings to point to the new database
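Putting the restore steps together: apply the missing transaction dumps 'with norecovery' and finish with the 'with recovery' command. A sketch with placeholder file names:

    -- on the backup server: apply the remaining transaction dumps in order
    restore log [dbname]
        from disk = N'\\server\share\dbname\dbname_tlog_0001.trn'
        with norecovery
    restore log [dbname]
        from disk = N'\\server\share\dbname\dbname_tlog_0002.trn'
        with norecovery

    -- bring the database online; after this no more logs can be restored
    restore database [dbname] with recovery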

If you used AD groups only and assigned them permissions through database roles, there should be no security updates needed. If you used SQL accounts, remember that the internal IDs of those accounts will differ between the primary and backup server, leaving orphaned users in the restored database. You can either remap them with sp_change_users_login, or simply remove the SQL account from [dbname] | security | users and grant the proper database roles to the login from security | logins again. Usually, the latter is easier.
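For the sp_change_users_login route, run it in the restored database on the backup server; 'Report' lists the orphaned users and 'Update_One' remaps a single user to an existing login (the user name here is a placeholder):

    -- list database users whose SID no longer matches a server login
    exec sp_change_users_login 'Report'

    -- remap one orphaned user to the login with the same name
    exec sp_change_users_login 'Update_One', 'someuser', 'someuser'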
The connection string is a bit of a problem. If you have a web application, it is not too bad: you probably only have to fix one or two places. For client / server applications it pays off to have a central location from which the applications pick up their connection string info.

Conclusion
My first tests were not an immediate and complete success. In the end, I found out I had violated one of my own requirements. I still have to run the whole test once more, but I'm pretty confident that solves the problem. I was pleasantly surprised by the ease of use that resulted from using AD groups. Still, this is not an ideal solution, so stay tuned for an alternative approach.
