Ed Elliott's blog

You have your database in source control now what?

  • Posted on: 16 November 2017
  • By: Ed Elliott

This post is for a specific type of person. If you:

  • Are new to source control
  • Are getting started on your path to continuous delivery nirvana
  • Have been able to get your database into some sort of source control system
  • Have more than one person checking in code to the database source
  • Are unsure what to do next

Then this post is for you!

Choosing a source control system and tools to manage your database code is a great hurdle to have got past, well done!

Actually getting your database into that source control system is another difficult hurdle to get past, double well done!

Now you should be starting to think about the next steps; the sort of things on the horizon, not immediately but not too far away (1-6 months), are:

  • Generating deployment scripts
  • Provisioning test databases to deploy to
  • Writing some tests
  • Automating the running of those tests
  • Using a build server to do all this for you
  • Sacking your DBAs (joking, ha ha)

But what about now?

The things that I would do now are:

  • Determine your branching and release strategy
  • Start checking in and merging changes
  • Generate/gather deploy scripts
  • Nothing else

These things are so important to get right early on, yet it is hard to choose the right process until you are where you are now: until you have chosen a process and started using it, it is hard to know whether it works for you.

Determine your branching and release strategy

How are you going to manage your code while changes are going on? Typically you will want to be able to:

  • Check new features into the code repository
  • Check in hotfixes to the current production code and not lose those changes when the latest features are checked in
  • Allow multiple developers to check code in at the same time

There are a few different approaches. My favourite is the git approach of creating a branch for each feature and merging back to master when the feature is ready to go to production, but this favours getting changes out to production pretty quickly, so you may not be ready for it yet.
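The branch-per-feature flow described above looks roughly like this (a minimal sketch in a throwaway repo; the branch and file names are made up):

```shell
set -e
# throwaway repo just to demonstrate the flow
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master
git config user.email demo@example.com
git config user.name demo
echo "CREATE TABLE dbo.Customer (Id INT NOT NULL);" > Customer.sql
git add . && git commit -q -m "initial schema"

git checkout -q -b feature/add-email           # one branch per feature
echo "ALTER TABLE dbo.Customer ADD Email NVARCHAR(256);" > AddEmail.sql
git add . && git commit -q -m "add email column"

git checkout -q master                          # merge back when ready for production
git merge -q feature/add-email
git log --oneline
```

Hotfixes work the same way: branch from master, fix, merge back, and the feature branches pick the fix up the next time they merge from master.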

Have a look at different branching strategies that your source control system uses and decide on one that you and your team can understand and work with.

When you have decided on your branching strategy, stick to it and make sure everyone follows the system. If you have had no source control and suddenly have source control, it takes quite a lot of discipline as a development team to ensure you follow the process. Someone will forget to check something in, and someone will skip the process. Keep an eye on check-ins and make sure everyone follows the process - it is a big change in how SQL developers work, so understand that this in and of itself is a major change for your team.

Start checking in and merging changes

Checking in code is the next step, and having multiple developers means that you will have to start merging each other's changes. Let the developers check in code and merge their changes, and try out different tools for merging. If you have TFS, you probably have Visual Studio, which has a decent merge tool built into it. If you are using git, look at SourceTree or GitKraken.

You will get problems when you merge your changes. Do the first few merges together, and see how things like reformatting long stored procedures cause extra changes that are more challenging to deal with when other, smaller changes to the same procedures are merged.

Generate/gather deploy scripts

The next thing you will want to do is start to see some value from all this work, and I'm not suggesting that you start pushing all your changes to production yet (you haven't even got any tests yet!). Whatever type of tool you have chosen (automatically generated scripts, managed migration scripts, etc.), find a way to generate those scripts or gather them together, so you no longer have to spend x hours every release "gathering the scripts". This starts to show value and has the benefit that people can't easily bypass the source control system.
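As one example, if your tool of choice is SSDT and dacpacs, the deploy script can be generated rather than hand-gathered, using SqlPackage.exe (a sketch; the file, server, and database names here are made up):

```powershell
# generates deploy.sql for review; it does not touch the target database
SqlPackage.exe /Action:Script `
    /SourceFile:"bin\Release\MyDatabase.dacpac" `
    /TargetServerName:"test-sql-01" `
    /TargetDatabaseName:"MyDatabase" `
    /OutputPath:"deploy.sql"
```

Migration-script tools have their own equivalents; the point is that the script is produced by the tool from what is in source control, not assembled by hand.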


Nothing else

I don't mean literally nothing - get on and do some actual work (you lazy little...)! What I mean is nothing more on your CI/CD process for now. You have a good 1-3 months of work ahead to perfect what you have and get this bit nailed :)

Running any AzureRM powershell command tells you to "Run Login-AzureRmAccount to login." even though you are logged in!

  • Posted on: 15 November 2017
  • By: Ed Elliott

Every AzureRM command I ran gave me an error message telling me to login. I did a login, checked that I had the right subscription, and was still getting the error message - after logging in 7 times I figured that maybe there was something else wrong :)

It turns out that I had originally installed the AzureRM cmdlets via the Azure SDK and had somehow also managed to install the AzureRM.profile module from the PowerShell gallery. Having different versions of AzureRM.profile and the rest of the cmdlets caused hilarity (well, not exactly hilarity, but hey ho).

The fix was simple: I deleted AzureRM.profile from the shared Windows modules folder, restarted PowerShell, and I only had to login the once.

I found out what was wrong by running Get-Command on one of the commands, which gave me the name of the module it came from; then a quick Get-Module for that module and a ".Path" on the output showed that I had a different set of modules for the commands compared to AzureRM.profile, which contains Login-AzureRMAccount. Something like:

(Get-Module (Get-Command Get-AzureRMVM).Source).Path

(Get-Module (Get-Command Login-AzureRMAccount).Source).Path

Watching for powershell changes and running Invoke-Pester

  • Posted on: 14 November 2017
  • By: Ed Elliott

It seems like more and more recently I have been writing PowerShell and TypeScript rather than C# and T-SQL, and there are quite a few things to like about the tools for both. One thing I really like with TypeScript, and JavaScript in general, is that seemingly everything has a file system watcher, so you can have your code in an IDE plus a couple of terminals and have all your tests run in the background.

I missed this from PowerShell, so I wrote a quick module (it literally took about 30 minutes, so it is hopefully pretty simple). If you want Invoke-Pester to run in a terminal whenever your code or tests change, you can now do:

Install-Module PestWatch
Import-Module PestWatch

Nice and simple hey :) If you have any arguments that you want pester to see just add them to the call to Invoke-PesterWatcher and this silently forwards them on for you.
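For example, something like this (a sketch; -Script and -Tag here are ordinary Invoke-Pester parameters, and the path and tag are made up):

```powershell
Install-Module PestWatch
Import-Module PestWatch

# re-runs the tests whenever a watched file changes; the extra
# arguments are forwarded, untouched, to Invoke-Pester
Invoke-PesterWatcher -Script '.\tests' -Tag 'unit'
```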

The source code is available:


The module:


Still not sure? Watch the full glory here:


SSDT How To Fix Error SQL71502

  • Posted on: 6 November 2017
  • By: Ed Elliott


If you build an SSDT project you can get an error which says:

"SQL71502: Function: [XXX].[XXX] has an unresolved reference to object [XXX].[XXX]."

If the code that is failing is trying to use something in the "sys" schema or the "INFORMATION_SCHEMA" schema then you need to add a database reference to the master dacpac:

Add a database reference to master:

  • Under the project, right-click References.
  • Select Add database reference....
  • Select System database.
  • Ensure master is selected.
  • Press OK.

Note that it might take a while for VS to update.

(Note this was copied verbatim from the stack overflow question with my screenshots added: https://stackoverflow.com/questions/18096029/unresolved-reference-to-obj... - I will explain more if you get past the tldr but it is quite exciting! )


I like this question on stack overflow as it covers a common issue that anyone who imports a database project into SSDT has faced. It might not affect everyone, but a high percentage of databases will have some piece of code that references something that doesn't exist.

The question has a few little gems in it that I would like to explore in a little more detail because I don't feel that a comment on stack overflow really does them justice.

If we look at the question it starts like this:

The first thing is the error:

unresolved reference to object

What does this mean? When you write some code in SSDT (a stored procedure, function, or foreign key; admit it, you weren't expecting foreign key, were you!), SSDT validates that when the code references another object, the other object exists and is usable in that scenario. So if you have a table called "dbo.abc" and you made a mistake and typed "select a from dbo.ab3", then SSDT will notice this and give a warning or an error (more on this later).
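Using that same made-up example, a project like this triggers the behaviour:

```sql
CREATE TABLE dbo.abc
(
    a INT NOT NULL
);
GO
CREATE PROCEDURE dbo.get_a
AS
    -- typo: dbo.ab3 does not exist in the project, so SSDT reports
    -- SQL71502, an unresolved reference, on this line
    SELECT a FROM dbo.ab3;
```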

After the unresolved reference we have



What is the INFORMATION_SCHEMA? Well, one of the most exciting things about modern RDBMSs (I say modern, but basically forever) is that the SQL language specification dictates that the language used by an RDBMS is capable of querying and interacting with the environment as well as the data. Think of an RDBMS as being like a DevOps dream decades before Gene Kim et al. sat down at a typewriter and spilled The Phoenix Project onto paper. The INFORMATION_SCHEMA is a way to query things about the database environment. In the example in the question, they are using it to find the number of tables that exist with a specific name (tip: it will either be 1 or 0).
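That sort of check looks something like this (a sketch; the table name is made up):

```sql
-- returns 1 if the table exists, 0 if it does not
SELECT COUNT(*)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'abc';
```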

In SQL Server, the INFORMATION_SCHEMA is in the master database and is special in that you can type "SELECT * FROM INFORMATION_SCHEMA.SOMETHING" without having to type the name of the master database first, like "SELECT * FROM master.INFORMATION_SCHEMA.SOMETHING". This is nice as it saves our little typing fingers, but it is also a little hard for SSDT as it means it has something else to handle.

Moving on we have:

Within Visual Studio, the schema and name for the view are both marked with a warning.

There are two possible outcomes in SSDT when it can't find a referenced object. The default is what has happened here: there is a warning, and you get a squiggly line under the code:

The last piece of the question is interesting,

I also tried changing the implementation to use sys.objects instead of this view, but I was given the same warning for this view as well.

The sys schema is similar to the INFORMATION_SCHEMA schema except it is SQL Server (and Sybase ha ha) specific rather than cross-platform as INFORMATION_SCHEMA is supposed to be.

The final thing about the question is it is by someone called Sam with (at today's count) 17 thousand stack overflow points, that is some serious stack overflowing!

The Answers

If we look at the first answer we have:

The answer is spot on and fixes the error for the INFORMATION_SCHEMA view Sam is talking about. I like that stack overflow lets someone ask and then answer their own question. He was stuck, he asked, and instead of leaving a page without an answer like DenverCoder9, he found the answer and left it for the world:


Answer 1 Comments

The first comment is by the great Peter Schott (B | T):

To give a bit of background on this, Peter has a team where some people have installed Visual Studio to the C drive and some to the E drive, and they were getting errors when referencing the master.dacpac.

What happens with database references is that you get a block of XML that looks something like this added to your .sqlproj file:
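Something along these lines (a sketch from memory; the exact paths, and the version number inside them, vary by install, so check your own .sqlproj):

```xml
<ItemGroup>
  <ArtifactReference Include="$(DacPacRootPath)\Extensions\Microsoft\SQLDB\Extensions\SqlServer\140\SqlSchemas\master.dacpac">
    <HintPath>$(DacPacRootPath)\Extensions\Microsoft\SQLDB\Extensions\SqlServer\140\SqlSchemas\master.dacpac</HintPath>
    <SuppressMissingDependenciesErrors>False</SuppressMissingDependenciesErrors>
    <DatabaseVariableLiteralValue>master</DatabaseVariableLiteralValue>
  </ArtifactReference>
</ItemGroup>
```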

When SSDT tries to resolve a reference, it looks at the "Include" attribute on the "ArtifactReference", and if it doesn't find that then it looks in the "HintPath" and failing that I think it looks in the build bin directory, but we won't rely on that working.

This reference is from a Visual Studio 2017 project, and you can see that the "HintPath" is not hardcoded to the C drive but instead uses a variable to get to the dacpac. This was not always the case; a while back, this was hardcoded using ".." relative paths (WTF!). A full discussion of this pain is here:


So this fixes referencing the system dacpacs from different install locations, but what if the code we are referencing is in a user dacpac? How do we set the reference and allow people to map their source code folders or build servers to the same projects?

There are two approaches. The first is to put the dacpacs in a known location ("c:\dacpacs", "\\dacpacserver\dacpacs", etc.) and always use that path. The second is to check the dacpacs into source control with the solution and reference them using relative paths.

Both ways work, do what is best for you.

The next two comments are:

Veysel chose to copy the dacpacs into the solution folder, yay to Veysel.

Now Orad seems to have got stuck, as they have already referenced the master dacpac. As a guess, I would say Orad's problem is either that they are referencing an object in another schema and getting the same error, but not for the sys or INFORMATION_SCHEMA schemas, or that they have mistyped a referenced object and the warning or error is actually valid.

They should re-check that the code is correct. It is possible that they are referencing an object in the INFORMATION_SCHEMA that isn't in the dacpac. If this were the case, then Orad could declare the missing object themselves, and it would probably just work, but we would need to keep the reference to master, otherwise Orad would lose the "create schema INFORMATION_SCHEMA" statement in the master.dacpac.

After Orad's comment we have:

Martin is happy :)

NYCdotnet is less happy, but they do mention something interesting: there is a setting under "Online Editing". SSDT has a couple of different usage scenarios. The first, and in my opinion the main, reason for using SSDT is the offline editing, where you edit T-SQL code offline and build and validate it before you get anywhere near a database.

The second scenario is online, where you connect to a database and change the code like an old school DBA with a live production database in SSMS. The setting in this comment is about the online version and doesn't fix the warning or error in the offline scenario, which is what is being discussed in this question (if this isn't about the offline version then I am going to hang up my technology boots).

The final comment on the first answer is by Scarl:

Scarl, I guess, either doesn't have an SSDT database project open or the installation has failed; try re-installing SSDT and creating a new database project from scratch.

Answer 2

techfield has suggested this:

In Sam's original question, they mention that they get a warning and the project still deploys, but techfield points out that this can be shown as an error instead, and if you get an error you can't deploy the project as it won't build. This can result in much frustration.

The difference between this being displayed as a warning, which can be ignored (unless you have "treat all warnings as errors" on, which of course you should, but most don't), and an error, which stops the dacpac from being built, is a setting. You can either edit the .sqlproj file and change the XML, or you can go to the properties of the project and tick the box:
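If you go the XML route, the property is, to the best of my recollection (verify the exact name against your own project file), this one:

```xml
<PropertyGroup>
  <TreatTSqlWarningsAsErrors>True</TreatTSqlWarningsAsErrors>
</PropertyGroup>
```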

Jowen then correctly said you shouldn't turn off errors willy-nilly. Personally, I always tick this box and enforce it, unless I am working with an "inherited" database where it isn't possible, in which case over time I make it possible and then tick it. When it is on, I then work towards clearing the warnings, if practical.

Final Answer

The final answer is by Russell:

What has happened in Russell's case is that he has a table in the project in a .sql file, but the file's "Build Action" property is not set to Build, so although the file exists and the table is valid, it is invisible to SSDT. Set the "Build Action" in the file's properties in Visual Studio to "Build" and the file will be included, and the reference can be validated.

Try for yourself

I have set up a GitHub repo to show this error in all of its glory, in case you would like to play along at home.

If you would like to, get yourself a command prompt that can run git and do:

git clone https://github.com/GoEddie/SQL71502.git

I know, I called the project after the error number; this is like inception in real life. If you go ahead and open the SQL71502.sln solution, when you build you should get a warning, and the reference to INFORMATION_SCHEMA should show a blue squiggly line:

A note about the warning: warnings are only displayed when a file is compiled. If you build, then build again without changing a file, the file won't be re-compiled, so all your warnings will disappear! The warning is still there; add a new line or something to the file to cause a rebuild and the warning will be re-displayed. Because of this behaviour, I sometimes do a Visual Studio clean and rebuild when using SSDT to get a full overview of the issues.
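From a command prompt (assuming msbuild is on your path, e.g. a developer command prompt), the equivalent full rebuild, which forces every file to be re-compiled and every warning re-emitted, is:

```powershell
# Rebuild = clean + build, so nothing is skipped as up-to-date
msbuild SQL71502.sln /t:Rebuild
```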

Let's go hardcore and set the project to treat T-SQL warnings as errors. Go and run:

git checkout 1-errors-as-warnings

You will probably need to reload the solution in Visual Studio; then, when you build, you will get an error instead of a warning, and the dacpac won't build.

To fix the error, we now need to add the database reference to the master database. Follow the advice from the stack overflow question above to add the reference, and when you build you should get a successful build. If you don't want to add it yourself, you can run:

git checkout 2-fix-error

When running git checkout, if you have changed any files you will get an error; running "git checkout -- ." will reset git and let you switch branches (it throws away any changes, so don't do it unless you want exactly that).

With the database reference added, we can rebuild and there should be no error or warning:

Now, if instead of a system reference we had a missing user reference, we would get a similar thing with the same error number. Switch to the next branch:

git checkout 3-broken-user-reference

You will see this error:

We fix it by creating the object that it was expecting:
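The shape of the fix is a declaration along these lines (a sketch; the actual object name and columns are whichever ones the broken reference in the branch expects):

```sql
-- declaring the referenced object lets SSDT resolve the reference at build time
CREATE TABLE dbo.ReferencedTable
(
    Id INT NOT NULL
);
```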

and then it will build successfully:

Happy days.

I hope no one minds me copying and pasting from stack overflow; I guess it is the modern way :)

A virtualized CPU forced me to eat my lunch early, every day, for weeks

  • Posted on: 21 October 2017
  • By: Ed Elliott

I worked on one particular contract where I was forced to take my lunch at 11:35 every day, and it was all virtualisation's fault!

To set the scene, it was a company that wasn't really used to having developers; they had a load of SQL analysts and some mainframe developers, but SQL developers writing T-SQL, C#, and SSIS code were new to them. The IT management had decided that buying actual computers wasn't necessary for development: we could use the standard PCs and RDP onto a Windows 2008 R2 server with Visual Studio installed and work on that.

I know it doesn't sound ideal, but it was OK; the project was exciting, and it even involved putting in a continuous deployment pipeline by accident, but we will leave that story for another day.

The setup worked pretty well, but for some reason, every day at 11:35, the CPU of my machine went crazy for about twenty minutes. This happened every single day, so after a while I asked around, and other people were seeing it too; but, being honest, I think they were probably used to taking an early lunchtime, or maybe it was a late elevensies :)

When something like this happens, I normally break out task manager or "task mangler" as it is affectionately known in certain circles. The thing using the CPU was always the same it was the anti-virus updater. I don't mean an anti-virus scan but just the check for an updated definition.

I don't like this sort of thing, so I spent some time trying to work out what the updater did, and when I ran it by myself it was fast; it took me a little while to realise that it wasn't the updater process that was the problem. Getting to this point may or may not have involved me disabling the anti-virus :)

With the updater disabled, I found other things that should not use lots of CPU, like notepad without any text in it, suddenly using 100% CPU for a few minutes. So every day, the virtual machine I was using, and everyone else's, was slow enough to cause us to start our lunchtimes early. What the hell was going on?

The next logical place to investigate was the physical hardware: were our virtual machines on the same hardware, and did we have a noisy neighbour? Who was it?

Our virtual machines were on different hardware, no cigar :(

In the end, it was the updater, but only because the schedule for the updater was the same on every server and the time syncing between every machine was perfect. Every day at the same time, normally fast virtual machines, which were happy to share some decent CPUs, all wanted to use a little bit of CPU at exactly the same time, which pushed every physical CPU over capacity, and everything ended up taking ages.

The IT team staggered the schedule for checking for anti-virus updates, and the problem went away; they even inadvertently managed to re-enable my anti-virus, and I went back to taking my lunch at a more reasonable 12:00 :)

So if you virtualize a CPU, don't forget that you are sharing it, and if everyone wants to use it at once, one person might win, but you'll probably all lose.


Where do you install tSQLt?

  • Posted on: 19 October 2017
  • By: Ed Elliott

The question of where to install tSQLt is probably the most common question I get when I talk about unit testing T-SQL, so much so that I thought that it would be a good topic for a blog. I mention tSQLt in the title, but this covers all unit test code for T-SQL.

So to be more specific:

Where do you install unit tests and unit tests frameworks for SQL Server, which databases should have that code in them?

If we look at what databases might exist in a development process:

  • Local developer database
  • CI database on a build server
  • Test database for testers
  • QA database for user acceptance testing
  • Prod mirror or Pre-Production database for a production standby database
  • Production database

If we have tests written in tSQLt, and the tSQLt framework itself, then where do you install the framework and the tests, and where do you execute them?

TLDR: The answer is quite simple, in these two database types:

  • Local developer database
  • CI database on a build server


The first thing to say is, what are unit tests?

There are two top goals for unit tests:

The first is that unit tests are small pieces of code that validate that other small pieces of code work successfully.

The second goal is that unit tests guard against other developers breaking the actual pieces of code covered by the tests - you might well be the other developer in this case so I would always recommend not getting too worked up about this :).
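To make this concrete, a tSQLt unit test looks something like this (a minimal sketch: the test class, the dbo.Customer table, and the dbo.GetCustomerCount function are made-up names, not from a real project):

```sql
EXEC tSQLt.NewTestClass 'CustomerTests';
GO
CREATE PROCEDURE CustomerTests.[test GetCustomerCount returns 0 for an empty table]
AS
BEGIN
    -- swap the real table for an empty fake so the test is isolated and repeatable
    EXEC tSQLt.FakeTable 'dbo.Customer';

    DECLARE @actual INT = dbo.GetCustomerCount();

    EXEC tSQLt.AssertEquals @Expected = 0, @Actual = @actual;
END;
GO
EXEC tSQLt.Run 'CustomerTests';
```

tSQLt runs each test inside a transaction that it rolls back afterwards, so this is exactly the sort of destructive-but-safe code that belongs only in those private databases.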

In the list of databases above, these two are the only types that are private, and the only types that can typically be thrown away on a whim.

A local developer database should sit on the developer's machine and allow a developer to deploy and test their code locally without affecting anyone else. I have worked on projects with this setup, and it is easily the best to work with as a developer.

If you aren't able to have your database locally but have to use a remote database or, even worse, a shared database for testing, then you will likely get hit by other people's breaking changes, or by other people debugging their code and in some way blocking you from working.

When you have a shared database you have to coordinate all sorts of things like the best time to deploy and that itself is a massive time sink.

The second type of database, one for a CI build, is typically only used for a single build at a time. Because a SQL Server database is, mostly, free to create on an existing server, there is no need to share CI databases. When you think about the fact that you can use LocalDB to quickly and cheaply spin up a new clean, empty, fresh, happy database, why wouldn't you, unless there was an excellent reason not to?

It is in these private databases that we can do things like delete all the data in a table, set all the dates of birth to a developer's favourite date, or create a long-running transaction that blocks out all of the other users.

Once we move further right in our list of environments, we are less likely to want to do things such as deleting all the data in a table, and once we hit production, it is critical we don't leave test code like that in the database to be accidentally called.

Tests for a production database should be a series of smoke tests which can operate in and around the existing live production calls and data, so they do not take out any more transactions than the database normally does, and they certainly don't modify data other than their own.

These smoke tests are also likely to be carried out via the application, so a deploy happens and a set of application tests is typically executed to validate that the deploy has been a success. This is a long way from using unit tests to validate and guard code against breaking changes.

Anyway, enjoy!

tSQLt Test Adapter for Visual Studio 2017

  • Posted on: 3 October 2017
  • By: Ed Elliott

So, Visual Studio 2017 has been out for a while now, and I had created a version of the test adapter (which lets you run tSQLt tests from within Visual Studio with SSDT, ReadyRoll, or just plain SQL scripts using the test window), but I held off on pushing it to the marketplace as it didn't work with Visual Studio 2015.

I spent quite a lot of time trying to work out how to build a single vsix that worked with both VS 2015 and 2017 and in the end gave up, so now there are two versions, one for 2015 and one for 2017. I am not putting any fixes into the 2015 version, so I would upgrade to 2017 if you want any updates.

I also fixed a couple of issues, including one nice little problem with my .NET tSQLt client where, if you passed in the name of a non-existent schema or test, the tests would return success:


These now report a failure, and the client also handles extra selects in the output from tests. Because the test adapter uses the tSQLt client, it gets the benefit of any fixes I make there (isn't hosting individual repos on GitHub awesome for sharing even your own work!).

If you want to use the Visual Studio test window to run your tSQLt tests and have the awesomeness of this:

Then grab it from inside Visual Studio: "Extensions and Updates" on the Tools menu.

Learn how to unit test SQL Server T-SQL code

  • Posted on: 5 September 2017
  • By: Ed Elliott

A free email course on how to use tSQLt including the technical aspects of writing unit tests AND the art of writing repeatable, useful unit tests for even the most complicated T-SQL code

UPDATE: I thought that if I got a certain number of sign-ups by October 1st I would run the course, but in two days three times as many people subscribed as my initial target, so I have closed the first course. Sign up if you want to join the wait list for the next course!

Unit testing helps us to write better code and make rapid changes to our code, and it has been generally seen as a good idea for about 10 years. Writing tests for T-SQL code is made much easier by using tSQLt, but there is quite a high barrier to entry, both in terms of the technical skills needed to get tSQLt running and in how to approach large code bases of, sometimes, unfriendly T-SQL code and tame that code with unit tests.

I have successfully unit tested T-SQL code in a number of different environments, including clean greenfield environments as well as legacy projects, and I have written this course to help people get started with unit testing and to help them turn unit testing into a part of their development process that they can use every day to improve the quality of their work and the speed at which deployments can be made.

Are you any of these people?

  • An application developer experienced with other testing frameworks for testing application code?
  • A T-SQL developer with no testing experience?
  • A T-SQL developer with testing experience in other languages or frameworks?

If you are then you should sign up (https://www.getdrip.com/forms/317903840/submissions/new) and let me help you learn tSQLt unit testing for SQL Server.

Why an email course?

I thought it would be an interesting way to provide actionable information regularly and to allow a level of assistance and feedback that I don't think is possible with blogging or writing articles.

How do I sign up?

Run over to: https://www.getdrip.com/forms/317903840/submissions/new and pop in your details.

The course is going to start on the 1st of October, and as it is the first one, I am limiting the number of people who can start it. If the first one is a success then I will run it again, but it won't be until at least 2018.

What will be the format?

The course will be one email a week, which will include an overview of the week's topic, some detail on the parts that need it, and an exercise for the week, which can be done on a demo database or on any SQL Server database code you have.

and it is free?

Yep, gratis. I am not open sourcing it yet - maybe in the future - but the course itself is free, aka "no service charge".

SSIS ForEach Enumerator File Order

  • Posted on: 4 September 2017
  • By: Ed Elliott

I saw on Slack recently a question about how the SSIS file enumerator orders (or, more specifically, doesn't order) files. I have been thinking about SSIS quite a lot lately, and while I am in no hurry to start using it day to day, it is quite an interesting tool.

So anyway, I saw this question that went like:

"does anyone know in what order files are processed in 'Foreach File Enumerator'?
I used to think it was alphabetically but after some testing this is not always the case?
Second part is there anyway to specify the order by say size or date?"

So how does SSIS order files or doesn't order files?

The answer to this is pretty simple, and I thought I knew it but wanted to confirm. In my mind I thought, "how do they get a directory listing?", and my mind responded "probably using the Win32 FindFirstFile APIs"; my mind then wandered somewhere else before writing a quick package that:

  • Has a ForEach loop with a breakpoint set at pre-execute
  • Has a single task inside the ForEach loop with a breakpoint set at pre-execute
  • Has a variable to hold the file name

Pretty simple; the SSIS package looked like:

Pretty simple hey :)

I set the file path for the enumerator to c:\ssisSearch and put a load of files and directories in it (because the Win32 find functions have a buffer and you need to call them multiple times, I wanted to make sure we covered cases where there were multiple find calls). Then I reached for my favourite tool of all, procmon.exe (I say favourite; it used to be, then I had a job where I used it literally every single day for hours and hated it, so I stopped using it, but now I'm back with it!). In procmon I set a filter on the c:\ssisSearch folder and also on DtsDebugHost.exe and ran my package - the files were returned in alphabetical order.

I then went into procmon, to the properties of the "QueryDirectory" operation on that folder, and (when the symbols had loaded) I could see that the call SSIS was making came from ForEachFileEnumerator.dll (native, not .NET, so we can't grab Reflector), and that it calls "FindFirstFileW".

A quick hop, skip and jump to MSDN, and the FindFirstFile documentation discusses the ordering of files here:

The FindFirstFile function opens a search handle and returns information about the first file that the file system finds with a name that matches the specified pattern. This may or may not be the first file or directory that appears in a directory-listing application (such as the dir command) when given the same file name string pattern. This is because FindFirstFile does no sorting of the search results. For additional information, see FindNextFile.

FindNextFile has this:

The order in which the search returns the files, such as alphabetical order, is not guaranteed, and is dependent on the file system. If the data must be sorted, the application must do the ordering after obtaining all the results.

So basically, NTFS returns files in alphabetical order and FAT in date order, but don't rely on either.
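To answer the second part of the question (ordering by size or date): per the documentation above, the reliable approach is to do the sorting yourself, for example in a script task or a wrapper, rather than relying on the enumerator. A PowerShell sketch (the folder name comes from the demo above):

```powershell
# enumerate, then sort explicitly; FindFirstFile/FindNextFile promise no order
Get-ChildItem -Path 'C:\ssisSearch' -File |
    Sort-Object -Property Name |   # or LastWriteTime / Length for date or size order
    ForEach-Object { $_.FullName }
```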

Just a final thought: SSIS runs on Linux too, so no idea what the order is there :)

TSQL Tuesday - Databases and DevOps

  • Posted on: 13 June 2017
  • By: Ed Elliott

DevOps isn't running SQL Server in a container and pushing code to it from Jenkins

When we talk about DevOps, we envision that we have the ability to check in code, spin up a new environment, deploy, test, push that code to production, and be in the pub by 4.

We know that by having the right tooling in place we can make releases more reliable and more frequent, enabling us to deploy the changes that the business wants when they want them, rather than every x days/weeks/months/years/decades. This outcome is best for everyone; no one loses, and the path to fun and profit is just that, fun and profitable.

So what do we need to do, run SQL Server in containers and write and deploy our code using SSDT? Yes, do it, but you don't need to: you can do DevOps and work on frequent releases with a standard SQL Server instance and manually written deploy scripts that are emailed around.

So what is DevOps if you can do it without source control?

DevOps is about enabling frequent releases - that is the core element of it and to enable frequent releases we need:

  • A way to deploy code (a DBA wearing out the F5 key in SSMS is a way to deploy code)
  • A way to be confident about the changes we are about to make (hint tests, lots of them)
  • A way to know when there is a problem with production (monitoring and alerting)
  • The ability to identify bottlenecks, work together and make improvements to the process

The last point is the most important; for me it stems from kanban and the kaizen approach of identifying bottlenecks and working together to remove them.

If you look at your existing approach to making changes, what are your bottlenecks? How can they be improved? When you deploy changes and they go wrong, what stopped you finding out about those problems earlier? When you look at the different stages of a change, from business analysis to troubleshooting issues reported by customers, how many of those issues, and how much time and money, could have been saved by not having the issue at all, or by identifying it in developer tests or at rollout rather than when the user complained about it?

If you truly start looking at bottlenecks in your entire release process, it will more than likely lead you to a DevOps culture and practices, including the tools required to do it. But without the underlying kaizen approach of continually removing bottlenecks from your processes, you will simply pay for tooling you don't need and cover your laptop with stickers, without delivering the value that the business needs.

Which one of these are you?