
SSDT DevPack + Visual Studio 2017

  • Posted on: 8 May 2017
  • By: Ed Elliott

I have upgraded the SSDT Dev Pack to support Visual Studio 2017, fixed a couple of minor annoyances, and started to think about how to improve it going forward.

The first feature, of sorts, is the ability to clear the connection used by the quick deployer without having to restart Visual Studio.

Secondly, I am pretty convinced that the features people mainly use are the quick deploy and the tSQLt test builder (I also use the keyword formatters), so I have moved everything else under a menu item labelled "deprecated". If anyone really wants one of those features then let me know and I will keep it, but I will likely remove them at some point.

I am planning on bringing SQLCover into it at some point. I haven't really been using the MergeUi part; I think the better option is to use sp_generate_merge to generate a merge statement, as it is much more reliable. If you have a simple table then MergeUi might still be useful to you.

I will publish it to the marketplace so updates happen automatically; if you want a copy of it go to:

Before I published it I realised that when I wrote the code (a couple of years ago!) I had taken a couple of shortcuts; one was to always use the version 120 parsers instead of whatever version the project was set to, so I decided to fix that first and then publish. Looking over old code is never good, ha ha.

It is now published, so you should be able to install it for VS 2017, and existing installs on 2015 will be upgraded. (I hope it doesn't break for anyone; there are lots more users than when I last published it!)

T-SQL Code Coverage in SSDT using the SSDT Dev Pack

  • Posted on: 13 January 2016
  • By: Ed Elliott

Code Coverage

What is code coverage?

When you write some code and then test it, how sure are you that you have tested the whole thing? Code coverage gives you an idea of how well tested a bit of code is.

If you have lots of branches in your code (not something I am advocating) it is important to make sure you test them all, so we can use code coverage to get an idea of how much of a particular piece of code has been tested (or not).

Code coverage with SQL Server (i.e. T-SQL) works because SQL Server provides a pretty good interface for tracking which statements have been executed (extended events), so we can tell which statements have been called. If we take this example stored procedure:

create procedure a_procedure(@a_value int)
as
begin
    if 1 = 1
        select 1;

    if @a_value = 9
        select a, case when @a_value = 1 then 'a' else 'b' end
        from table_name;
end


If we execute this stored procedure we can monitor and show a) how many statements there are in it and b) which statements have been called, but we can't see which branches of the case expression were actually taken. If it were a compiled language like C#, where a profiler can alter the assembly, we could find out exactly what was called, but I personally think knowing which statements are called is way better than having no knowledge of what level of code coverage we have.
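The statement-level tracking described above can be sketched with an extended events session like the following. This is my own illustration of the technique, not the exact session the dev pack creates; the session name, database name and file name are all placeholders:

```sql
-- Sketch: capture every completed statement inside stored procedures
-- for one database; names and paths are illustrative only.
CREATE EVENT SESSION [CoverageDemo] ON SERVER
ADD EVENT sqlserver.sp_statement_completed (
    ACTION (sqlserver.database_id)
    WHERE sqlserver.database_name = N'MyDatabase')
ADD TARGET package0.event_file (SET filename = N'CoverageDemo.xel');
GO

ALTER EVENT SESSION [CoverageDemo] ON SERVER STATE = START;
-- ... run your tests here, then:
ALTER EVENT SESSION [CoverageDemo] ON SERVER STATE = STOP;
```

The sp_statement_completed event includes the statement offsets, which is what lets a tool map captured events back to individual statements in the source.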

How do you use code coverage?

I use it as a way to explore what parts of the code have low test coverage as it means that I need to be more careful about making changes in those areas.

I mostly work with legacy applications rather than new ones (my favourite thing is debugging), so I often end up looking at big pieces of T-SQL without any tests which have an air of spaghetti code about them (not always, just mostly!). Code coverage is a good way to make sure that, as I write tests, I cover all the different things the code is supposed to be doing.

How do you not use code coverage?

Don't set a requirement that the code must have X % of covered code.


Each piece of code is unique; sometimes you don't really need to test it, and sometimes you need to make sure a statement is called in at least 20 different ways. Code coverage is a guidance and advice tool, not a cut-and-dried thing.

How do you measure code coverage in SQL Server?

This brings me neatly to the point of all this - you can either do it manually or you can use my new version of the SSDT Dev Pack (groan):

If you grab it from:

In SSDT, if you do Tools-->SSDT Dev Pack-->Code Coverage you end up with this lovely little window:

the ssdt code coverage window

If you click on "Start Capture" it will ask you to connect to SQL Server; go ahead and put in the server details, then choose the database you want to monitor.

When you have done that, go and run your test code - this could be a manual script, a tSQLt.Run, or you could even run your application - just do whatever you want to see how much of your database is covered.

When you have finished, click "Stop Capture". The dev pack will then enumerate your entire solution for procedures and functions to find out how many statements there are, then parse the results of the extended events trace to find which of the statements in the SSDT projects were covered. You then get this nice little tree view which shows the cumulative amount of code coverage (statements that were executed as part of the test):

the ssdt code coverage window

In this case we can see that there is one project and one procedure and we have around 62% code coverage (pretty good for a SQL Server project!).

You get the usual things, like double-clicking on an item taking you to the document it exists in, but more interestingly we can show the code coverage in the code itself. If you enable Tools-->SSDT Dev Pack-->Display code coverage in Documents and then click on the procedure in a document window you get:

procedure in ssdt showing covered statements

which is pretty cool if you ask me! This shows the statements that have been covered, and we can see here that the statement wrapped in the "if 1 = 2" was not called, which is lucky (otherwise it would be a pretty poor demo).

Things to note

This uses extended events so you need to have the right permissions to create them (you should probably be using localdb or a local dev instance anyway).
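For reference, the standard permission for creating a server-scoped extended events session is ALTER ANY EVENT SESSION; the login name below is just a placeholder for whatever account runs the capture:

```sql
-- Placeholder login name; grant to the account that starts the capture
GRANT ALTER ANY EVENT SESSION TO [dev_login];
```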

You will need to manually delete the extended events trace files periodically; they are written to the SQL log directory and are called CodeCoverage*.xel (plus .xem files on SQL 2008 R2).

If you change the document then the offsets may no longer match up, so I stop showing the covered statements until you re-run the trace.

If you need any help with any of this give me a shout :)


T-SQL Clippy

  • Posted on: 10 December 2015
  • By: Ed Elliott

I love writing C# code using Visual Studio and ReSharper; it is a really nice experience, and ReSharper does a lot of the boilerplate and offers re-writes for you, which means you can concentrate on writing the useful bits of code. T-SQL Clippy is nowhere near as great as ReSharper but hopefully starts to show what we could actually do with T-SQL and SSDT :).

What is it?

There are some things we can do using the ScriptDom and the rest of the DacFx to help people write better T-SQL code. The things it does today are:

  • Find non-sargable isnull predicates and re-write them so that they are sargable
  • Find the old-style inequality operator != and re-write it as <>
  • Find order by clauses that use ordinals rather than column names and replace the ordinals with the column names
  • If you use the query cost highlighter, show the actual cost rather than just highlighting the text
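As a sketch of what the first three re-writes do (the table and column names here are made up for illustration):

```sql
-- Before: non-sargable isnull, old-style inequality, ordinal order by
select first_name, last_name
from dbo.customer
where isnull(country, 'UK') = 'UK'
  and status != 'closed'
order by 2;

-- After the suggested re-writes
select first_name, last_name
from dbo.customer
where (country is null or country = 'UK')
  and status <> 'closed'
order by last_name;
```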

What does it look like?

the clippy awesomeish action

What you see here is that T-SQL Clippy has examined each statement, come up with a list of things it could do to automatically re-write the code, and put a little circle in the margin with the number of things it can do to each statement. If you right-click on the circle you see a little menu that lets you apply each individual change, or all of the changes in a category. You can see in this example that it can change some != into <>'s, re-write some isnulls so that they are sargable, and replace ordinal column numbers in an order by with the column names.

How to enable it?

To enable it there is a menu item Tools-->SSDT Dev Pack-->Enable T-SQL Clippy; it has a check box that shows whether or not it is enabled. It is also possible to enable it via a config file so that it is always on - ping me if you want the details.


I am basically paranoid about slowing down Visual Studio, so I think I have struck a good balance between showing suggestions in a timely manner and not killing performance. The long and the short of it (this has been through lots of cycles in my brain) is that it shouldn't slow down Visual Studio, but as a trade-off, if you do not see the suggestions you need to scroll the text out of view and back again, or press page up (this isn't my idea; it is basically how the add-in for Visual Studio works). If people find it useful then I will likely work on making it more responsive without sapping performance, but it isn't straightforward.

If you have a super fast box or a super slow box, you can control how reactive it is using the config file %UserProfile%\SSDTDevPack\Config.xml, adding something like:

<?xml version="1.0" encoding="utf-16"?>
<Settings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <CallDelayMilliSeconds>500</CallDelayMilliSeconds> <!-- example value -->
</Settings>

Change the CallDelayMilliSeconds to something low if you have a fast box and something high if you have a slow box.

There are some basic things it does to help, such as ignoring all file types except SSDT code files, so if you do something like add a text file it probably won't show you any suggestions.

How to get it?

It is now published on the Visual Studio Gallery, so head on over to the gallery and grab it from there.

Any questions please shout!

Automatically re-write non-sargable isnulls into sargable code in SSDT

  • Posted on: 10 December 2015
  • By: Ed Elliott

"where ISNULL(column, 'DontMatch') = 'Match'" - is kind of a bad thing to do in SQL as it means that any index on column can't be used. It is a really simple thing that is a waste of time, fixing it is pretty simple but I see it a lot and so I thought it would be a perfect candidate for some SSDT Dev Pack t-sql re-writing :).

So the dev pack now has the ability to re-write any non-sargable isnull predicates into ones that are sargable (if you don't know what sargable means and you are a SQL dev, look it up - it is really important!).

So if you have:

select * from dbo.tableaaa
where isnull(a.a_column, 'abc') = 'abc'

it will be re-written to:

select * from dbo.tableaaa
where (a.a_column is null or a.a_column = 'abc')

In fact it turns out that there are 4 variations on the same theme:

Equals To the Same Literal:

select * from dbo.tableaaa
where isnull(a.a_column, 'abc') = 'abc'

becomes:

select * from dbo.tableaaa
where (a.a_column is null or a.a_column = 'abc')

NOT Equals To the Same Literal:

select * from dbo.tableaaa
where isnull(a.a_column, 'abc') != 'abc'

becomes:

select * from dbo.tableaaa
where (a.a_column is not null and a.a_column <> 'abc')

NOT Equal To a Different Literal:

select * from dbo.tableaaa
where isnull(a.a_column, 'abc') != 'DEF'

becomes:

select * from dbo.tableaaa
where (a_column is null or a_column <> 'DEF')

Equals To a Different Literal:

select * from dbo.tableaaa
where isnull(a.a_column, 'abc') = 'DEF'

becomes:

select * from dbo.tableaaa
where (a.a_column = 'DEF')

The last one is the worst case of isnull misuse: converting the column to a value that doesn't match the literal you are comparing it to is just a real waste of time.

What happens?

So if you do Tools-->SSDT Dev Pack-->Re-write Non Sargable IsNulls, then firstly you will be amazed by the catchy name, and secondly it will take all the isnulls that are in where clauses in the open document and, if they:

  • have a column reference in the first parameter
  • have a literal in the second parameter
  • are compared to a literal

then they are re-written following one of the four variations above, which boil down to:

  • Do they have an = or a != after the function?
  • Is the literal inside the function the same as, or different from, the literal they are compared to?

Context Sensitive

This is only interested in isnulls which are used as predicates, since those are what stop SQL Server using indexes, so if you had this:

select isnull(column, 'default value') from table_name
where isnull(column, 'default value') = 'abc';

then this would be re-written as:

select isnull(column, 'default value') from table_name
where column = 'abc';

The isnull in the select list will be ignored (however, if you have a scalar subquery with a re-writable isnull in its select list, that will also be re-written).

What does it not change?

It matters what is inside the isnull - a column reference and a literal. If the first parameter is not a plain column reference but is instead something like:

isnull(upper(column), 'BLAH') = 'SOMETHING'

then I ignore it.

How do I know what has happened?

If you look at the output window it will show you the before T-SQL and the T-SQL that replaced it; if you don't like what you see then just press Ctrl+Z to undo.

Protecting yourself

This is a fairly blunt tool to help in one specific case. If you start refactoring code like this then you really need a suite of unit and integration tests to validate that what it has done is correct - no matter whether you make a change manually or with a tool, you need tests to validate what you have done.

You also need source control so you can go back and see what changes were made and when.

If you have neither tests nor source control then I would get those working before you start refactoring code at all.

What will this look like?

I am really interested in exploring how we can use the ScriptDom to understand the meaning of different parts of T-SQL code, and I would love to get to the point where certain things, like non-sargable queries, are pointed out in the document and potential improvements are offered and then performed. I know this is easily possible; it is just finding the time to figure out the best way to do it and then implement it.

How do I get it?

I have now pushed it to the Visual Studio Gallery and will use this for all future updates, so feel free to grab it from:

Find duplicate indexes in SSDT

  • Posted on: 27 November 2015
  • By: Ed Elliott

Update new download location is

Duplicate indexes - one of the most pointless things you can do to a table, adding pure overhead. No one likes them, so SQL developers, do your DBAs a favour and stop them from ever being deployed :)

This new tool for the SSDT Dev Pack adds a menu item (Tools-->SSDT Dev Pack-->Find Duplicate Indexes). What it does is scan all the projects in the solution (it doesn't follow "this database" references to examine them - maybe in a future version) and then print to the output window a list of all the duplicate indexes, based on the table, the columns and the included columns. I don't check anything else, so you might actually want a duplicate in some circumstances, but those should be very few and far between.
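As an example of what counts as a duplicate here (same table, same key columns, same included columns; the names are made up):

```sql
create index ix_customer_email
    on dbo.customer (email) include (first_name);

-- A duplicate: different name, but identical key and included columns
create index ix_customer_email_2
    on dbo.customer (email) include (first_name);
```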

If you double click on the index it will take you to the location in the code where it actually is so you can delete it :)

If you have no duplicate indexes then you should see "No Duplicate Indexes Found.". Check the output window if you can't see anything.

Anyway, grab the latest version from

Any problems please feel free to contact me and I will help.

Convert select statements in scripts into Inline-TVF's automatically

  • Posted on: 25 November 2015
  • By: Ed Elliott

Update new download location is

Inline TVFs are awesome; they are a great way to reuse T-SQL code without the performance penalties of multi-statement TVFs or, horror of horrors, scalar functions. Taking select statements and turning them into inline TVFs is actually really, really simple using the TSql ScriptDom, so here is another tool for the SSDT Dev Pack that lets you highlight a select statement in SSDT, generate a TVF and replace the statement with a call to the TVF.

It isn't perfect: I totally ignore parameters, and if you have a join and only want to pull out one side of it you will need to do that manually, but this will help in other situations and is a good start, creating a TVF using the correct form (returns table as return select ...).
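As a sketch of the transformation (the object names here are made up, not what the tool generates for you):

```sql
-- Before: a select statement highlighted in the editor
select first_name, last_name from dbo.customer where is_active = 1;
go

-- After: the generated inline TVF in the correct form...
create function dbo.get_active_customers()
returns table
as
return
    select first_name, last_name from dbo.customer where is_active = 1;
go

-- ...and the original statement replaced with a call to it
select first_name, last_name from dbo.get_active_customers();
```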

So for example if we have this code:

highlight select statement in SSDT

If you choose Tools-->SSDT Dev Pack-->Extract into TVF (or do the right thing and map it to Ctrl+K, Ctrl+E), you are given the following dialog, which lets you give the function a name and choose the location in SSDT where you want it to be created:

choose a name and location for the new tvf

Finally, the TVF is created and the original statement is replaced with a call to the function (whoop whoop):

the tvf is created and the original statement changed to call the tvf

Hopefully someone finds this useful, I know it is good for me :)

To grab the devpack get it from:

Changing case of T-SQL Keywords

  • Posted on: 24 November 2015
  • By: Ed Elliott

Update new download location is

Bert likes his T-SQL keywords in UPPERCASE, Ernie likes his in lowercase, which causes some internal struggle as it is actually one person, "Mr Bert Ernie". Anyway, I digress; whether you are working by yourself or in a team, you sometimes want to change the case of all the keywords in some code.

If you are using SSDT you can use my new SSDT-DevPack tool to do it for you:

If you grab the dev pack, any T-SQL keywords in the active document will be made UPPERCASE when you do Tools-->SSDT Dev Pack-->Upper Case Keywords, and I will leave it up to your imagination as to what happens when you do Tools-->SSDT Dev Pack-->Lower Case Keywords.

As always, you can map these to keyboard shortcuts; I use Ctrl+K, L for lowercase and Ctrl+K, U for UPPERCASE.

Any problems please shout, grab the installer from:

Enjoy :)

Quickly deploying procedure/functions from SSDT (Bypass Publish)

  • Posted on: 24 November 2015
  • By: Ed Elliott

SSDT is great, but one thing that is not so great is the performance of the publish. I am not complaining - I understand it does a lot and is well worth the wait when publishing - however, when you are writing and testing code, this process really kills productivity:

  • 1. Write code
  • 2. Build projects
  • 3. Publish projects
  • 4. Wait for publish to complete
  • 5. Check publish messages for errors
  • 6. Test your changes

If you develop like that you will be very slow, so instead I have added a "Quick Deploy" to SSDT. It is nothing new - I had one before and Dave Ballantyne had one - but I wanted to do a couple of things with mine: firstly, add it to the SSDT-DevPack (whoop whoop), and secondly, make it so you do not have to save the file to deploy any changes, so it uses the Visual Studio APIs to get the contents of the current document and deploy that to a database.

What this is not

It is not a deployment system for dacpacs or SSDT projects; it is purely there to deploy table valued functions and procedures quickly by bypassing the SSDT build system. If you use this, when you are happy with your changes you should absolutely, 100 percent, build, publish and test the published work, because this is a really simple way to bypass the build system and there is no error validation etc.

How do I use it?

Open a document with a table valued function or procedure, then click Tools-->SSDT Dev Pack-->QuickDeploy (or use the keyboard options in SSDT to map Tools.QuickDeploy to a key combination - I use Ctrl+Q, Ctrl+D but it is a little awkward) and your code will be deployed; any messages are added to the standard output window.

If the active document contains anything other than stored procedures or table valued functions then nothing will be deployed; also, if you have more than one procedure or function, all of them in the active document will be deployed.

Where do I get it?

As always see for how to download it

SSDT DevPack - Highlight expensive queries

  • Posted on: 22 November 2015
  • By: Ed Elliott

When developing stored procedures in SSDT it is important to know when you have written a query that is potentially slow or has a high IO or CPU cost, but there is no real way of telling, other than examining the tables you are querying and understanding the data volumes and indexes, whether they could be a problem.

The idea of this new tool is to highlight, in either yellow or red (yellow is quite expensive, red is really expensive), any statements that have a high cost. This isn't an actual profiler, so it is important to understand what it does and its limitations.

What does it do?

When you enable the query cost for a document (I map it to Ctrl+K, Ctrl+Q), the tool connects to a SQL Server instance and runs the stored procedure with "SET SHOWPLAN_XML ON", so it isn't actually executed; the estimated query plan is returned and the cost of each statement is checked to see how high it is.
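You can see the same estimated plan the tool works from by running the procedure yourself in SSMS (the procedure name and parameter here are just examples):

```sql
SET SHOWPLAN_XML ON;
GO
-- Not actually executed: SQL Server returns the estimated plan XML,
-- which contains an estimated cost for each statement.
EXEC dbo.get_customer @customer_id = 1;
GO
SET SHOWPLAN_XML OFF;
GO
```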

By default, statements must have a cost over 1.0 to be marked as high, and anything over 0.2 is marked as a warning - you can override these with this in your "%UserProfile%\SSDTDevPack\config.xml":


To turn it off again, just toggle the costs off (either map a keyboard shortcut to Tools.ToggleQueryCosts or do Tools-->SSDT Dev Pack-->Toggle Query Costs). To stop it constantly going back to run the query against the server, I check the checksum of the file, so to get a new set of costs make sure you save the file. If you get any issues there is a menu item "Clear Query Cost Cache" which clears all the caches and you can start again merrily.

Two things to bear in mind:

  • If you don't deploy the code I can't get an estimated plan, so if it isn't doing what you expect then make sure you do a publish
  • It isn't an actual profiler, so if you do something like use cursors, the statements may have low individual costs but actually be slow and expensive in real life

What does it look like?

Here is a screen shot, feel the awesome:

highlight expensive queries in stored procedures in ssdt

Finally, I find it useful so hopefully someone else will as well :). This will probably be the last tool I add to this for now as I want to tidy up the ones that I have already done.

Download it from:

Create stub tSQLt tests in SSDT from stored procedures

  • Posted on: 17 November 2015
  • By: Ed Elliott

I have released another tool as part of the SSDT Dev Pack; what this does is create tSQLt classes and stub tests. If you open a stored procedure in SSDT and then do Tools-->SSDT Dev Pack-->Create tSQLt Class (or create a keyboard mapping in Tools-->Options-->Keyboard - on my machine I map Tools.CreatetSQLtClass to Ctrl+K, Ctrl+S), it will create a schema with the name of your stored procedure and the correct extended property for tSQLt to register it as a test class.
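A tSQLt test class is just a schema with a marker extended property, so for a procedure called get_customer the generated class will look something like this (the exact formatting the tool emits may differ):

```sql
CREATE SCHEMA [get_customer]
    AUTHORIZATION dbo;
GO
-- This extended property is what tells tSQLt the schema is a test class
EXECUTE sp_addextendedproperty
    @name = N'tSQLt.TestClass',
    @value = 1,
    @level0type = N'SCHEMA',
    @level0name = N'get_customer';
```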

If you then either do Tools-->SSDT Dev Pack-->Create tSQLt Test (or, as I have, map it to Ctrl+K, Ctrl+T), it will create a tSQLt test with any tables referenced in the procedure added as fake tables, any parameters the procedure requires declared, and an exec of the procedure.

This will build test stubs for stored procedures and table valued functions today; there is no reason not to support other things like scalar functions, I just needed those two first.

If you have the old version installed you can just install the latest vsix and it will upgrade it. You will need to restart Visual Studio though :(

This doesn't add the reference to tSQLt so you will need to continue to do that as you do it today.

To build the tests I need the compiled dacpac, so if you add a procedure or change one, make sure you build the project before trying to add the test, otherwise you won't get the correct fake tables (or anything at all).


If you have this code:

create procedure [dbo].[get_customer](@customer_id int)
as
select * from customer where Id = @customer_id

and this project structure:

Note: you do need a folder to put the tests into; they can't go in the root of the project.

Choosing "Create tSQLt Schema" with the procedure open causes this dialog to pop up:

If you choose the target folder - in this case, if I choose "Tests" - then it will create a sub-folder under that called "get_customer" where the schema will be created. You can change the name of the schema, but if you do, the tests will not be created in the correct place - I will fix this in a later version, or feel free to do it yourself via a pull request. To work around it manually, just rename the schema of the tests once they have been created.

If you click on "OK" the new folder and schema will be created:

This is the way I like to structure my tests, I have a folder where I put the schema (class) and all the tests, it seems a logical way for me to group the tests - if someone has a different way then let me know and I will see how easy it is to incorporate it.

If you then go back to the procedure and choose "Create tSQLt Test" (or use the shortcut you mapped), you will again get a dialog similar to the schema one; choose the new folder that was created and the test will be created there for you. You can override the default test name, then click OK and the new test should pop up in Visual Studio:

The test proc will look something like:

CREATE PROCEDURE [get_customer].[test get_customer does something]
AS
BEGIN
    EXECUTE tSQLt.FakeTable 'dbo', 'customer';

    DECLARE @customer_id AS INT = 0;
    EXECUTE [dbo].[get_customer] @customer_id;

    EXECUTE tSQLt.AssertEquals 'TRUE', 'FALSE', N'Error Not Implemented';
END;

Every table referenced will be faked; there are probably other things we should fake, but this feels like a good start to help with some of the overhead that tSQLt tests have.

I hope this is useful and I intend to keep adding to this so if you have any ideas or bugs let me know!

If anyone wants any help with testing or SSDT in general feel free to give me a shout!


I have released a new version of the dev pack; grab the latest one from:

For full details see and


