
Pros and cons of the DACPAC with SQL Server 2008 R2

Data-tier applications are among SQL Server 2008 R2’s most talked-about new features, though version-one limitations could minimize their impact in the short term.

Microsoft has combined SQL Server 2008 R2 and Visual Studio 2010 to give us a new feature called data-tier applications, or DACs. Deployment of DACs from Visual Studio into SQL Server is done through a data-tier application package, also known as a DACPAC.

The good

DACPACs are great for small shops that are deploying minor application changes to SQL Server. They let the code be kept within existing source control and give developers an easy way to edit SQL Server objects from within the already familiar Visual Studio environment. This means that developers can work against their development database for all their coding, then package all the changes into a single DACPAC for release to production. The DACPAC can also be handed off to a DBA for release into a production or quality assurance (QA) environment.

The DACPAC also handles versioning of the database, giving the developer an easy way to track database changes from release to release. From within Visual Studio, the .NET developer can edit the tables, procedures, views and functions of the database.

The bad

Data-tier applications have a few issues in this first release. The first problem is that not all features of the SQL Server engine are supported by DACPACs. This includes the SQL Server Service Broker, CLR Objects, and most importantly, SQL Server security.

All of these features can still be supported by using a post-release script. This is not an ideal solution, however, as developers need to know the appropriate T-SQL commands in order to create and manage those objects and the related security.
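As a rough illustration, a post-release script is just a batch of T-SQL that runs after the DACPAC deployment finishes. The sketch below assembles such a batch to recreate the security objects a DACPAC can't carry; the login, user and procedure names are hypothetical, and a real script would depend entirely on your environment.

```python
# Minimal sketch: assemble a post-release T-SQL batch that recreates
# security objects a DACPAC cannot deploy. All names are hypothetical.

def build_post_release_script(login, user, procs):
    """Return a T-SQL batch creating a login, its database user,
    and EXECUTE grants on the procedures the application calls."""
    statements = [
        f"CREATE LOGIN [{login}] FROM WINDOWS;",       # Windows domain login
        f"CREATE USER [{user}] FOR LOGIN [{login}];",  # map it to a db user
    ]
    # Grant EXECUTE on each stored procedure the application uses.
    statements += [f"GRANT EXECUTE ON [{p}] TO [{user}];" for p in procs]
    return "\n".join(statements)

script = build_post_release_script(
    "CONTOSO\\AppService", "AppService", ["usp_GetOrders", "usp_SaveOrder"]
)
print(script)
```

A DBA would then run the generated batch through SSMS or sqlcmd against the deployed database; the point is simply that everything the DACPAC skips has to be scripted by hand.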

Currently DACPACs can only be pushed to SQL Server 2008 R2, and they must be developed using Visual Studio 2010.

The ugly

The biggest problem with DACPACs has to do with the way a data-tier application is released to push version changes from the DAC into SQL Server. This is done by creating a new database with a temporary name, generating the new objects in the database, and then moving all the data from the existing database to the new one. After all the data has been transferred and the post-release scripts run, the existing database is dropped and the new database is given the correct name.
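The side-by-side swap described above can be modeled with a toy sketch. Here a dict stands in for the server's databases; the real process involves moving every row of data, which this simplification only hints at.

```python
# Toy model of the DACPAC release technique: build a new database under a
# temporary name, copy the data across, drop the original, then rename.

def dacpac_style_release(server, db_name, new_schema):
    """Upgrade db_name by copying it into a fresh database and swapping names."""
    temp_name = db_name + "_tmp"
    server[temp_name] = {"schema": new_schema, "rows": []}     # 1. create new db
    server[temp_name]["rows"] = list(server[db_name]["rows"])  # 2. copy all data
    # Anything written to the old database after step 2 is discarded here:
    del server[db_name]                                        # 3. drop old db
    server[db_name] = server.pop(temp_name)                    # 4. rename new db

server = {"Sales": {"schema": "v1", "rows": [1, 2, 3]}}
dacpac_style_release(server, "Sales", "v2")
print(server)
```

Steps 3 and 4 are also why the problems described below exist: the database that survives the upgrade is a brand-new database, not the one you were backing up.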

This release technique requires at least twice the database's size in free data space, plus enough log space in the destination database's transaction log to hold at least the largest single object. For example, if your database is 5 GB in size with the largest table being 500 MB, you will need room on disk to hold both 5 GB databases, with enough space in the transaction log to hold the entire 500 MB table.
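The example above works out as a simple calculation -- a rough rule of thumb based on the behavior described in this article, not an official sizing formula:

```python
# Rough sizing for a DACPAC release: room for both copies of the database,
# plus transaction log space for the largest single object being moved.

def dacpac_release_space_gb(db_size_gb, largest_object_gb):
    data_space = 2 * db_size_gb       # old and new databases side by side
    log_space = largest_object_gb     # log must hold the largest object
    return data_space + log_space

# The article's example: a 5 GB database whose largest table is 500 MB.
print(dacpac_release_space_gb(5, 0.5))  # 10.5 (GB)
```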

There are a few problems with this technique. First of all, your transaction log backups become useless: because the upgrade creates and renames an entirely new database, you can't restore log backups across the point of the database upgrade.

Another problem arises if you are using SQL Server Service Broker within your database, as any messages that were queued during the upgrade process will be lost. The same goes for any data changes made to tables after the release begins but before it completes; they will be lost as well.

You can also create a DACPAC from an existing database, so the feature isn't limited to new projects -- but not all databases can be successfully turned into DACPACs. As of this writing, if you try to turn the sample database pubs (a database which has been around since the days of SQL Server 6.x) into a DACPAC, you'll get an error from Visual Studio when it tries to create the package.

OK, so why use data-tier applications?

After reading all of this, the first question is probably going to be, why use DACPACs at all? The answer here is simple -- SQL Azure. As you may have noticed, the features that are supported by DACPACs line up with the features that SQL Azure Database supports in its current version. Because of the small amount of data that fits in a SQL Azure database (1 GB or 10 GB as of this writing, depending on the database size you have purchased), this release technique will probably work OK. And with SQL Azure, you don't have to worry about backups, since they are handled through the redundancy of the service.

While data-tier applications were designed for SQL Azure Database, they are perfectly capable of being used against your local in-house databases -- provided you are able to work within the limits of the DACPAC platform.

Because of the release technique used by the DACPAC system, it is recommended that you not use DACPACs for your Tier 1 applications or for apps larger than the 10 GB database size supported by SQL Azure. Larger databases will require longer downtime during upgrades as the data is moved between the old and new databases.
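To see why size matters so much here, consider a back-of-the-envelope downtime estimate. The 50 MB/s sustained copy rate below is an assumed figure for illustration only, not a measured or documented number:

```python
# Rough downtime estimate: during a DACPAC upgrade every row is copied to
# the new database, so downtime grows linearly with data size.

def copy_minutes(db_size_gb, throughput_mb_per_s=50.0):
    """Minutes to copy db_size_gb at an assumed sustained throughput."""
    return db_size_gb * 1024 / throughput_mb_per_s / 60

print(round(copy_minutes(10), 1))   # a 10 GB database at the assumed rate
print(round(copy_minutes(100), 1))  # ten times the data, ten times the wait
```

Whatever the real throughput turns out to be on your hardware, the copy-everything approach means the outage window scales with the database, which is exactly why large Tier 1 systems are a poor fit.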

Microsoft hasn't yet made any announcements about DACPAC version 2, which will hopefully support the rest of the SQL Server 2008 R2 feature set and allow DACPACs to be released to older versions of Microsoft SQL Server.

More new R2 features

  • PowerPivot for Excel -- Self-service BI for users
  • Master Data Services -- Data quality management
  • Multi-server management features -- New dashboards in SSMS
  • Looking for more info? Check out our list of the 10 most important changes to SQL Server 2008 R2.

Denny Cherry has over a decade of experience managing SQL Server, including MySpace.com's over 175-million-user installation, one of the largest in the world. Denny's areas of expertise include system architecture, performance tuning, replication and troubleshooting. Check out his blog at SQL Server with Mr. Denny.
