Shared databases or private databases?

Traditionally, database developers have shared a single database, and while this certainly made sense when everyone had a limited amount of RAM and a few hundred megabytes of hard disk space, it has been a while since we have had development machines without enough resources to easily run a few copies of Visual Studio and SQL Server - if that isn't the case for you, then you really need to re-evaluate the cost of a new machine versus the cost of developers' time.

Shared databases make many things easier: you do not need to run the SQL Server installer on everyone's machine, and let's be honest, the SQL Server installer is one of the worst (the worst?) installers in a Microsoft product that I have seen in a long time - you need to install the setup files before using those to install the product, which is odd.

Shared databases also mean that developers who do not want to know about SQL Server can develop their app code, just point it at the shared database, and develop to their hearts' content.

Shared all the way?

No, that was the past and we are now in the future. While shared databases do make some things easier, they also cause problems and promote bad practices. The process that I recommend is this:

  1. Each developer has their own instance, ideally on their own machine
  2. When a developer wants the latest version of the database, they check out the code
  3. They then use sqlpackage.exe or SQL Compare (or whatever tool they use in the other environments) to deploy the changes to their local database
  4. They develop their code however they want (hint: use SSDT)
  5. When happy, the developer checks in their changes, which kicks off the normal CI process and merges their changes with all the other changes made by other developers
  6. When the other developers want the latest version of the database, they start again from step 2, rinse and repeat
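Steps 2 and 3 above might look something like this sketch, assuming an SSDT project that builds to a dacpac and a private LocalDB instance (the file, instance and database names are all placeholders):

```shell
#!/bin/sh
# Step 2: get the latest version of the database code.
git pull origin main

# Step 3: deploy the compiled model to the private database.
sqlpackage /Action:Publish \
  /SourceFile:"bin/Debug/MyDatabase.dacpac" \
  /TargetServerName:"(localdb)\\MSSQLLocalDB" \
  /TargetDatabaseName:"MyDatabase_Dev"
```

The same Publish action, pointed at a different server and database, is what the other environments would use.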


This forces everyone to use source control: if it isn't in source control, then other developers won't get your changes. The most important part of this process is that when you check your code in, you get to merge your changes with everyone else's and you can decide what to do about conflicts. Without source control you get into the situation where changes can be lost or overwritten while you are developing - this is the main reason that Redgate's source control gives you the ability to lock objects; however, in my opinion that is the wrong way to fix the problem. Just stop using shared instances.

What else?

What we should all be aiming for as SQL developers is to build an automated deployment process including CI and CD, because of the well-documented benefits of releasing little and often. If we are to do that, then we need to trust the tools, and to trust the tools we need to use them. When the developers check out the latest version of the code, create a script to deploy the changes to their private database, and use that same script when deploying to the other environments. It will quickly become the most well-tested part of the deployment, which incidentally is often seen as the riskiest part of building an automated deployment pipeline. Now that is a real win.
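One way to generate such a reusable deployment script is sqlpackage's Script action - a sketch, with placeholder file, server and database names:

```shell
# Generate the upgrade script from the checked-in model; the same
# script can then be reviewed and run (e.g. via sqlcmd) in every
# environment, so it gets exercised over and over before production.
sqlpackage /Action:Script \
  /SourceFile:"bin/Debug/MyDatabase.dacpac" \
  /TargetServerName:"ci-sql01" \
  /TargetDatabaseName:"MyDatabase" \
  /OutputPath:"upgrade.sql"
```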

Anything else?

I often see on Stack Overflow and other forums people asking how to do things like create a report of everything that is in a shared database but not in source control, so they can speak to the developers and tell them to check their code in. There are indeed ways to do this, BUT - and this is a big BUT - you shouldn't have to tell developers to check their code in. If they don't check code in, then it doesn't exist, and giving them a shared instance and letting them be lazy is not the right thing to do. Just move them to private databases; there will be an initial bit of pain, but in the long run it will make their development process much simpler.


A single shared development database also makes feature branching hard. With private instances you check out the branch you want, deploy that specific code to your private database, and you are ready to go; with a shared database, if more than one person wants to modify the same object in different ways, you will keep tripping over each other.

Non-arguments

If everyone has private instances it makes it hard to back up the databases - the simple answer is: just do not back up your development databases. Check in everything you need to be able to rebuild a development database from scratch, and if you lose a database, throw it away. Get into this habit and you will be a winner.
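Rebuilding a throwaway development database from source control can be as simple as this sketch (the instance and database names are hypothetical; DROP DATABASE IF EXISTS needs SQL Server 2016 or later):

```shell
# Drop the old copy...
sqlcmd -S "(localdb)\\MSSQLLocalDB" \
  -Q "DROP DATABASE IF EXISTS MyDatabase_Dev"

# ...then redeploy everything from the checked-in model; the Publish
# action creates the database if it does not exist.
sqlpackage /Action:Publish \
  /SourceFile:"bin/Debug/MyDatabase.dacpac" \
  /TargetServerName:"(localdb)\\MSSQLLocalDB" \
  /TargetDatabaseName:"MyDatabase_Dev"
```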

The cost of private instances is prohibitive - you can use LocalDB or SQL Server Express, which are free. If you have a developer using a development instance of SQL Server, then they are licensed to use as many instances as they want, so go wild installing as many development instances as you like (just don't forget each developer needs a pretty cheap license).
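For LocalDB, creating a private instance is a couple of commands with the sqllocaldb utility (the instance name here is made up):

```shell
# Create and start a private LocalDB instance named "DevInstance".
sqllocaldb create DevInstance
sqllocaldb start DevInstance

# Show the instance details, including the named pipe to connect to;
# from client tools, connect to the server name (localdb)\DevInstance.
sqllocaldb info DevInstance
```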

The developers don't want to do it - it is a bit of a pain to get started with, but if you raise it as a possibility every time someone deletes someone else's change or the development database has a problem (someone deletes all the records, etc.), then they might see the benefits. If you are more of a dictator than a team member, then just force it on them and live the dream.


Great article, Ed. I like the clarity of using dedicated for dev, and having the changes arrive at a shared CI environment via source control.

I've come across a couple of other reasons for using a shared database: sometimes performance concerns require devs to use a large database while developing, and they can't handle that on their local machines. For others the database is one part of a large and coupled dev environment, and they don't have the boundaries to develop and test it independently.

The people in those situations often realise the downsides of using a shared database, but don't know how to get around the problems. I've more sympathy for them (and it's for them that we added the locking feature to Redgate's SQL Source Control), but I'd be interested to know if you've come across ways for them to move towards the process you describe?

Hi, thanks!

For me, performance comes down to two main things: firstly, you need to know how SQL fundamentally works and how your schema is designed; secondly, testing. You don't need to develop against a database with a realistic data set - you can use a stats-only database (I wish there was a better UI for that!) to get execution plans.
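One way to get a stats-only copy is DBCC CLONEDATABASE (available from SQL Server 2014 SP2 onwards), which copies the schema and statistics but none of the data; the database names here are made up:

```shell
# Create a schema-and-statistics-only clone of BigProdDb; query the
# clone to see the execution plans the optimizer would choose against
# production-sized statistics, without copying any data.
sqlcmd -S "." \
  -Q "DBCC CLONEDATABASE (N'BigProdDb', N'BigProdDb_StatsOnly')"
```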

There should also be a step in the CI process for performance testing with realistic data, but you don't necessarily need to have developed the code against actual data. In fact, I worked on one system where we developed purely using tSQLt mocking in our local databases, and that worked well - but everyone knew SQL and how to write set-based T-SQL, which helped a lot.

It is definitely a subject that is important and I will be writing about it properly soon!


How's this for the best of both worlds:
We have two physical locations (Europe, Asia), each of which has one shared database per TFS branch, all of which are deployed to via SSDT check-ins to a single TFS server (using release management for CI). So everyone always has the latest checked in schema, and management of dev data is greatly simplified.
When we had just one location, the same setup also worked well.
We do have the option to script directly to a shared database for testing (though we must consider the broader effects first), with the knowledge that our changes may be overwritten by the next check-in. Therefore, for extensive changes, we take a backup of the database to work on locally until we're ready to check in.

Nice, and it really shows how each environment is unique, so you need to figure out what works best for you.

It is also really good to see that you understand what effects a shared database can have!

