SQL Server Data Tools: Setup Continuous Integration for your Database Projects

In Visual Studio 2013, the SQL Server Data Tools are available out of the box. They give you a new project template, the Database Project, which allows you to manage your database objects and scripts in a structured way.

In this post I want to explain how you can enable continuous integration and automatically deploy your database changes each time you check in some code.

Preparation

Before we start, make sure that you have created a database project and added some database objects. The SQL Server Data Tools should also be installed on your TFS build agent. If you are using Visual Studio Online (as I will do) together with a hosted Build Controller, the Data Tools are already installed. (Here is the full list of installed components: http://listofsoftwareontfshostedbuildserver.azurewebsites.net/)
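
If your project is still empty, a simple table script is enough to follow along. The script below is only an illustration (the table name and columns are hypothetical); add it to the database project as a regular .sql item:

-- Example table script (Customers.sql) in the database project
CREATE TABLE [dbo].[Customers]
(
    [CustomerId] INT NOT NULL PRIMARY KEY,
    [Name]       NVARCHAR(100) NOT NULL
);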

Add a Publish Profile

Open Visual Studio and load your Database Project. Once Visual Studio is ready, right-click on your Database Project and click on Publish…

The Publish Database window opens. Click Edit… to specify the connection string of the database you want to publish the database project to. Afterwards, click Save Profile As… to save the Publish profile and add it to your Database Project.

As we don’t want to deploy immediately, you can click on Cancel to close the window after the Publish profile is saved.
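
For reference, the saved .publish.xml file is a small MSBuild file that lives next to your project. Roughly, it looks like the sketch below; the database name, script file name and connection string are placeholders, your own profile will contain the values you entered in the dialog:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- placeholder values; your profile contains what you entered in the Publish dialog -->
    <IncludeCompositeObjects>True</IncludeCompositeObjects>
    <TargetDatabaseName>Northwind</TargetDatabaseName>
    <DeployScriptFileName>NorthwindDBProject.publish.sql</DeployScriptFileName>
    <TargetConnectionString>Data Source=.\SQLEXPRESS;Initial Catalog=Northwind;Integrated Security=True</TargetConnectionString>
    <ProfileVersionNumber>1</ProfileVersionNumber>
  </PropertyGroup>
</Project>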

Creating a new Build Definition

Now that everything is configured correctly, it's time to open up Team Explorer (View –> Team Explorer).

Go to the Builds tab to add a new Build Definition.

Specify a name for your Build and move on to the Trigger tab.

Set the Trigger type to Continuous Integration and jump directly to the Build Defaults tab.

On the Build Defaults tab, select the Build Controller where the SQL Server Data Tools are installed. In my case I’m using Visual Studio Online, so I choose the Hosted Build Controller. I also choose to copy the build results to a Drops folder inside source control. Let’s move on to the Process tab.

On the Process tab, open up the Advanced section and paste the following line in the MSBuild Arguments field:

/t:build /t:publish /p:SqlPublishProfilePath=NorthwindDBProject.publish.xml

Replace the SqlPublishProfilePath value with the name you chose for your Publish profile in the Database Project.
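
If you want to verify these arguments before setting up the build, you can run the same targets locally from a Visual Studio developer command prompt against your .sqlproj; the project name below is just an example:

msbuild NorthwindDBProject.sqlproj /t:build /t:publish /p:SqlPublishProfilePath=NorthwindDBProject.publish.xml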

Save the Build Definition and you are done! There are a lot of other things you can configure, but the defaults should be enough.

Check in and see what happens…

As a last step, make some changes inside your database project, check them in to TFS, and watch how your database is rolled out for you…
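
Because database projects are declarative, a change is nothing more than editing the CREATE script; the publish step works out the ALTER statements for you. As a hypothetical example, adding a column to the table from earlier:

-- Add a nullable Email column to the existing table script;
-- publishing generates the corresponding ALTER TABLE for you.
CREATE TABLE [dbo].[Customers]
(
    [CustomerId] INT NOT NULL PRIMARY KEY,
    [Name]       NVARCHAR(100) NOT NULL,
    [Email]      NVARCHAR(256) NULL
);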
