Ed Elliott's blog

ARMED - Where is the source?

  • Posted on: 9 December 2017
  • By: Ed Elliott

For quite a few years I've been putting everything I write on GitHub and making most of it open source with a permissive license like MIT or Apache 2.0. If anyone asked, I was happy to change the license to whatever they wanted, but this new extension I haven't put on GitHub.

Basically it is freeware. I wrote it in TypeScript, which compiles to JavaScript, so if you want the JavaScript you can just read it; I prefer reading and debugging TypeScript, but as it isn't compiled to machine code the extension effectively ships with its source.

I'm not putting the repository on GitHub for two reasons. The first is that it didn't actually take very long to write: I took the JSON outliner sample and added to it by writing my own implementations of the functions ARM templates support, like if and equals, which are just thin wrappers around the normal if statement and == operator.

If anyone wants to write their own version it won't take long; I wrote the graphing stuff in about three hours one evening.
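
To give a feel for how thin those wrappers are, here is a purely illustrative sketch in PowerShell (the extension itself is TypeScript and these function names are made up, so treat this as a sketch rather than anything from the real extension):

function Invoke-ArmEquals($Left, $Right) { $Left -eq $Right }        # roughly what ARM's equals() does
function Invoke-ArmIf($Condition, $TrueValue, $FalseValue) {         # roughly what ARM's if() does
    if ($Condition) { $TrueValue } else { $FalseValue }
}

Invoke-ArmIf (Invoke-ArmEquals 'prod' 'prod') 'big-vm' 'small-vm'    # returns 'big-vm'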

If you need to debug it, you have the JavaScript source, and VS Code lets you attach the Chrome debugger and debug like a boss.

The second reason is that I've had PRs on projects before when I was busy and not focusing on them, and I ended up merging things I now wish I hadn't.

I write these tools because I want a tool to do something; if 1 person or 100 people use them then great. In fact, in many ways, I would prefer that not many people used them. My SSDT Dev Tools add-in for Visual Studio/SSDT has over 10k downloads; if even 10% of those are active then I could potentially break 1,000 Visual Studio instances with every release.

I don't have the time or resources to test all the different possible combinations of Visual Studio, Windows, etc. to have a great deal of certainty about a release, so releases slow down to only the necessary ones.

This isn't about writing tests; it is about testing whether Windows 7 SP1 with multiple combinations of the C runtimes, the .NET Framework and DacFx all work with the DLLs I write, and it would take a lot of work to build and test all of that.

So, use the tools or don't. If you do and you get stuck, reach out to me and ask for help. Sometimes I can help, but sometimes I'll be busy and can't.

I have created a GitHub repo where people can raise issues and I can put some docs, but there is no code there:

https://github.com/GoEddie/armed-docs

If you want the source, just ask and I'll send it to you; you can copy it and create your own projects, but I don't think I'll accept any changes, not right now anyway.

If you want better tooling, write it, share it :)

ARMED - ARM resource dependency graph inside VSCode

  • Posted on: 8 December 2017
  • By: Ed Elliott

I think the title sums it up: I have published a new version of ARMED, the ARM helper extension for VS Code.

This version includes the ability to see a graph of the resources and how they relate to each other:

There are a couple of limitations at the moment: it knows about child resources and resources referenced via dependsOn, but if you have an implicit reference using "resourceId" then I don't pick it up (yet).

ARMED - Azure ARM VS Code extension

  • Posted on: 6 December 2017
  • By: Ed Elliott

I have been working with ARM templates for a little while and have found it really annoying to test the functions inside the templates. An ARM template is a JSON document made up of lots of key/value pairs like:

{"name": "a_name"}

but it is slightly more complicated than that, because there is a set of functions in a JavaScript-like language that you can use in the values, so you might have:

{"name": "[concat('a_', 'name')]"}

which effectively does the same thing. Because ARM templates make heavy use of these template functions, it is hard to write them and check that they are correct before deploying, so I wrote an extension for VS Code (https://marketplace.visualstudio.com/items?itemName=ed-elliott.azure-arm...) which does two things (at the moment). The first is an outliner similar to the one in Visual Studio (the fat version); more excitingly, it also includes a tool to test these functions, so when you have something complicated like:

"name": "[concat(parameters('servicesNames')[copyIndex()], 'pip')]",

and you are hoping you have the right parameters, you can use the VS Code add-in to see what the value is actually going to be before you send it off to Azure.
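
Just to illustrate what an expression like that evaluates to, here is a rough PowerShell sketch; the parameter values and the helper name are made up, and the real evaluation happens inside the extension's TypeScript:

function Invoke-ArmConcat([string[]]$Parts) { $Parts -join '' }      # concat() just joins its arguments

$servicesNames = @('web', 'sql')                                     # pretend parameters('servicesNames')
$copyIndex = 1                                                       # pretend copyIndex()
Invoke-ArmConcat $servicesNames[$copyIndex], 'pip'                   # evaluates to "sqlpip"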

If you want to see it in action, check this:

Any issues, shout or raise an issue: https://github.com/GoEddie/armed-docs/issues

Azure Virtual Machine + Premium Disk Throughput Calculator

  • Posted on: 2 December 2017
  • By: Ed Elliott

I keep having to refer to the virtual machine size page and the disk pricing page to work out how best to stripe disks for whichever type of virtual machine in Azure we are using, and what sort of throughput we can get, so I thought I would save myself some ink and automate it; hopefully someone else finds this useful.

To get to the calculator see:

http://the.agilesql.club/WhatTheIO/index.html

What does it do?

You choose how many volumes you want, whether you want caching or not, and then the number and type of disks; the calculator will tell you what your maximum throughput is and also when you have exceeded the maximum throughput of the virtual machine.

What does it look like?

Like this:

You can switch between VM types; in this example we can see that a single P20 exceeds the total throttle limit of the E2s_v3, so striping disks or using a faster disk is pointless (unless we are doing it just to get more space rather than more performance).
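
The arithmetic behind that is roughly the following; the MB/s figures are illustrative only (from memory of the docs at the time), so check the current Azure limits pages for real numbers:

$p20DiskMBps   = 150   # per-disk throughput limit for a P20 (illustrative figure)
$vmLimitMBps   = 48    # disk throughput limit for an E2s_v3 (illustrative figure)
$diskCount     = 1

$stripeMBps    = $p20DiskMBps * $diskCount
$effectiveMBps = [Math]::Min($stripeMBps, $vmLimitMBps)
"The disks can deliver $stripeMBps MB/s but the VM caps you at $effectiveMBps MB/s"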

Let me know if you get any issues. I've only tested it on Chrome and will probably add to it in the future, maybe adding non-premium disks, but for the moment it is just premium disks and the VMs that support premium disks.

The calculator itself is available:

http://the.agilesql.club/WhatTheIO/calculator.html

The instructions:

http://the.agilesql.club/WhatTheIO/instructions.html

ed

Fixing SQL AG Routing "Cannot find host" using packet analysis (and a spell checker)

  • Posted on: 30 November 2017
  • By: Ed Elliott

I was setting up an availability group listener recently, and when I tried to connect to the listener I got an error message saying "The host cannot be found" after the usual 15-second connection delay.

I checked the usual things, like TCP being enabled on the replicas and routing being configured, but every time I tried I kept getting the "The host cannot be found" error. Now, I don't know about you, but I find error messages like this completely infuriating, especially when I knew the hostname of the listener resolved because I could ping the damn thing (and then connect straight to the IP address).

Now with a listener, you connect and then you are either connected or you are routed to one of the other hosts. I knew I could connect, so I guessed that I was being redirected somewhere else, but I didn't know where. Of course, if the error had just said "The host (hostname) cannot be found" I wouldn't be writing this today :)

So I did what any self-respecting lover of network packets would do: I opened a command prompt, googled "netsh trace start command line not rubbish thingy" to get to this blog: https://blogs.msdn.microsoft.com/canberrapfe/2012/03/30/capture-a-networ... and ran "netsh trace start capture=yes tracefile=.\path\to\trace.etl".

With the trace running, I quickly reproduced the error, ran "netsh trace stop" and copied the .etl file to my laptop, where I had Microsoft Message Analyzer installed. A quick hop, skip and a jump over to the .etl file in the analyzer and I wanted to see whether I could spot a failed DNS request, so I went to the "Filter" window, added the word "dns" and clicked "Apply":
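
For reference, the whole capture boils down to:

netsh trace start capture=yes tracefile=.\path\to\trace.etl   # start capturing
# ...reproduce the failing connection...
netsh trace stop                                              # stop and finish writing the .etl file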

This did show a request for the wrong machine in the wrong domain, but how did it get there? When I saw this I went to SQL Server and looked at the read-only routing URL in sys.availability_read_only_routing_lists, and of course it was totally wrong. That was the fix, but not the end of this story, dear reader.
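
If you want to check your own configuration, something like this (a sketch using Invoke-Sqlcmd; the listener name is a placeholder) shows the routing URL configured for each replica:

Invoke-Sqlcmd -ServerInstance 'my-listener' -Query 'SELECT replica_server_name, read_only_routing_url FROM sys.availability_replicas;'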

Now that I had a network trace of routing in action, I thought to myself, as I am sure everyone who has ever worked with SQL Server has thought at one stage or another: "I wonder if the routing is handled via a TDS ENVCHANGE token?"

I changed my filter from "dns" to "tds" and then I could see all the TDS traffic:

I also changed the columns to include source and destination ports so I could see which packets were from the client and which were from the server. Traffic to the server will have a destination port of 1433 (unless you are mental and have changed it, or aren't mental but are using a named instance); the source port will be different for every connection.

Looking at the source/destination pairs I could see there was more than one set of connections intermingled, so I guessed mine was the one with the source port 52264, as 62084 was sending SQLBatch TDS packets and I wasn't getting that far. A change of the filter from "tds" to "tcp.port == 52264" and we have a trace with just our traffic:

So what is all this? Well, a SQL Server connection goes like this (forget about silly things like local connections and named pipes, this is just TCP connections obviously):

1 - Create a TCP connection:

Client - Hi Server, I want to create a TCP connection to port (probably) 1433
Server - OK, go on then
Client - Thanks, I will then

The "S" in the "Flags" part of the summary for each packet is short for SYNCHRONIZE and it is TCP's way of saying Hello, the client sends a S, the server responds to the S with an A for ACKNOWLEDGEMENT and sends its own S for SYNCHRONIZE, the client then hilariously replies to the server's S with its own A. To put it another way:

"The SYN, SYN ACK, ACK" TCP handshake.

2 - Create a TDS connection

Now this is where life jumps into the fast lane: go back to Google, grab the [MS-TDS] spec (https://msdn.microsoft.com/en-us/library/dd304523.aspx) and read it :). Done? Good, so you know the sequence to create a TDS connection over the previous TCP connection is:

First the client sends a PRELOGIN packet; this is unencrypted and works with basically any decent version of TDS from 4.2 upwards.

After the PRELOGIN packet, we get some TLS traffic so SQL Server can encrypt the LOGIN7 packet. This can be man-in-the-middle'd unless you have "Trust server certificate" set to false, so if you're not on a LAN make sure you always do that.

We'll skip the TLS setup stuff. We can't see the LOGIN7 packet itself as it is encrypted, but we do see the response to the LOGIN7 packet, which includes some NTLM setup guff that we don't care about here.

Once the connection is validated and the user logged on, we get the response to the login:

What happens in a login response is that SQL Server says lots of nice things to us like "your database is now XXX" and "your language is XXX". It does this by sending a series of tokens: ENVCHANGE (environment change) tokens and also INFO tokens, which are the messages you see written to the "Messages" window in SSMS.

If we look at some of the tokens, we see our first token is an ENVCHANGE with a token type of 1 (DATABASE), which is a change-of-database-context token:

I'll leave you to go through any other tokens you like and jump to the last token in the packet, which is again an ENVCHANGE, but one the analyzer doesn't recognise: a token of type 20 (0x14). Jump back to the [MS-TDS] docs for ENVCHANGE token type 20 and we see:

HOOPLA! Token type 20 is an ENVCHANGE token that routes the client from one host to another; if we examine it we see the name of the server we will be redirected to:

Anyway, now to put the correct domain name in the routing list!

PS: just to finish off the connection, we can see the F flag sent from the client, the server responding with its own F and an A, and then the client sending its own A, after which nothing else happens. I will leave you to figure out what this last bit was about, and for bonus points have a look at why people don't use the R (RESET, or RST) flag to close connections!

"the input device is not a TTY" error running docker on VSTS build agent

  • Posted on: 22 November 2017
  • By: Ed Elliott

I have been playing around with including all my build dependencies in Docker for Windows lately and had a build that kept giving me this error in the build log:

"the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'"

The command I had been running was along the lines of:

docker exec -it powershell.exe ./some/script.ps1

The problem was that I was using -it, which means "interactive" and "allocate a virtual TTY"; to run my script I needed neither of those, so I took them off. The command then failed with the actual error message, which let me find the real problem, instead of showing a message about the TTY because docker exec had failed for some other reason (container not running, powershellexe instead of powershell.exe, lol).
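
So the working command was just the same thing without -it, something like this (the container name is a made-up placeholder):

docker exec my-build-container powershell.exe ./some/script.ps1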

Anyway, hope it helps someone: if you get this error then the root cause is something else, and you don't need -it on docker exec when you exec a script in a Docker container as part of a build.

How do I use VSTS variables in YAML CI Build definitions?

  • Posted on: 21 November 2017
  • By: Ed Elliott

Hey, VSTS YAML builds are my new favorite thing, by like a million miles.

If you have a YAML build definition (.vsts-ci.yml) and you want to use one of the build variables, it isn't totally clear how to include them in a definition (the feature hasn't been out long, so I expect more docs to come soon). While we wait for the docs, if you want to use one of the variables from https://docs.microsoft.com/en-us/vsts/build-release/concepts/definitions... use this format:


$(Build.SourcesDirectory)

Simple! On the page https://docs.microsoft.com/en-us/vsts/build-release/concepts/definitions... each variable has two forms; use the top one, so Build.Thing rather than the upper-case single word like BUILD_THING.
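
The upper-case form is what the variable looks like when it is surfaced as an environment variable inside a script step (dots become underscores and it is upper-cased), so from PowerShell inside a step you would read it like this rather than using the macro syntax:

Write-Host $env:BUILD_SOURCESDIRECTORY   # environment variable form, available while the step runs
# $(Build.SourcesDirectory) is the macro form that VSTS expands in the YAML definition itself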

ok?

You have your database in source control now what?

  • Posted on: 16 November 2017
  • By: Ed Elliott

This post is for a specific type of person. If you:

  • Are new to source control
  • Are getting started on your path to continuous delivery nirvana
  • Have been able to get your database into some sort of source control system
  • Have more than one person checking in code to the database source
  • Are unsure what to do next

Then this post is for you!

Choosing a source control system and tools to manage your database code is a big hurdle to have got past, well done!

Actually getting your database into that source control system is another difficult hurdle to get past, double well done!

Now you should be starting to think about the next steps, and the sort of things that are on the horizon, not immediately but not too far away (1-6 months), are:

  • Generating deployment scripts
  • Provisioning test databases to deploy to
  • Writing some tests
  • Automating the running of those tests
  • Using a build server to do all this for you
  • Sacking your DBAs (jokes, ha ha)

But what about now?

The things that I would do now are:

  • Determine your branching and release strategy
  • Start checking in and merging changes
  • Generate/gather deploy scripts
  • Nothing else

These things are so important to get right early on. It is hard to choose a correct system until you are where you are now. Until you have chosen a process and started using it, it is hard to know whether it works for you.

Determine your branching and release strategy

How are you going to manage your code while changes are going on? Typically you will want to be able to:

  • Check new features into the code repository
  • Check in hotfixes to the current production code and not lose those changes when the latest features are checked in
  • Allow multiple developers to check code in at the same time

There are a few different approaches; my favourite is the git approach of creating a branch for each feature and then merging back to master when the feature is ready to go to production, but this favours getting changes out to production pretty quickly, so you may not be ready for it yet.
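
As a very rough sketch of that flow (the branch name is made up):

git checkout -b feature/add-customer-table   # branch off master for the new feature
git add .
git commit -m "Add customer table"           # commit work on the feature branch
git checkout master
git merge feature/add-customer-table         # merge back when it is ready for production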

Have a look at different branching strategies that your source control system uses and decide on one that you and your team can understand and work with.

When you have decided on your branching strategy, stick to it and make sure everyone follows the system. If you have had no source control and suddenly have source control, it takes quite a lot of discipline as a development team to ensure you follow the process. Someone will forget to check something in, and someone will skip the process. Keep an eye on check-ins and make sure everyone follows the process - it is a big change in how SQL developers work, so understand that this in and of itself is a major change for your team.

Start checking in and merging changes

Checking in code is the next step, and having multiple developers means that you will have to start merging each other's changes. Let the developers check in code and merge their changes, and try out different tools for merging. If you have TFS, you probably have Visual Studio, which has a decent merge tool built into it. If you are using git, look at SourceTree or GitKraken.

You will get problems when you merge your changes; do the first few merges together and see where things like reformatting long stored procedures cause extra changes that are more challenging to deal with when other, smaller changes to the same procedures are merged.

Generate/gather deploy scripts

The next thing you will want to do is start to see some value from all this work, and I'm not suggesting that you start pushing all your changes to production yet (you haven't even got any tests yet!). Whatever type of tool you have chosen (automatically generated scripts, managed migration scripts, etc.), find a way to generate those scripts or gather them together, so you no longer have to spend x hours every release "gathering the scripts". This starts to show value and has the benefit that people can't easily bypass the source control system.

Nothing

I don't mean actually nothing, get on and do some actual work (you lazy little...)! What I mean is nothing more on your CI/CD process for now. You have a good 1-3 months of work to do to perfect what you have and get this bit nailed :)

Running any AzureRM powershell command tells you to "Run Login-AzureRmAccount to login." even though you are logged in!

  • Posted on: 15 November 2017
  • By: Ed Elliott

With every AzureRM command I ran I would get an error message telling me to log in; I then did a login, checked that I had the right subscription, and I was still getting the error message - after logging in 7 times I figured that maybe there was something else wrong :)

It turns out that I had originally installed the AzureRM cmdlets via the Azure SDK and had somehow also managed to install the AzureRM.profile module from the PowerShell Gallery. Having different versions of AzureRM.profile and the rest of the cmdlets caused hilarity (well, not exactly hilarity, but hey ho).

The fix was simple: I deleted AzureRM.profile from the shared Windows modules folder, restarted PowerShell and only had to log in the once.
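
If you want to see whether you have the same problem, listing every copy of the module that PowerShell can see is a quick check:

Get-Module -ListAvailable AzureRM.profile | Select-Object Version, Path   # more than one row means duplicates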

I found out what was wrong by doing (Get-Command SomeAzureRMCommand), which gave me the name of the module the command came from; then a quick Get-Module for that module and a ".Path" on the output showed I had a different set of modules for the commands compared to AzureRM.profile, which contains Login-AzureRmAccount. Something like:


(Get-Module (Get-Command Get-AzureRMVM).Source).Path


(Get-Module (Get-Command Login-AzureRMAccount).Source).Path

Watching for powershell changes and running Invoke-Pester

  • Posted on: 14 November 2017
  • By: Ed Elliott

It seems like more and more recently I have been writing PowerShell and TypeScript rather than C# and T-SQL, and there are quite a few things to like about the tools for both. One thing I really like with TypeScript, and JavaScript in general, is that it seems everything has a file system watcher, so you can have your code IDE and a couple of terminals open, with all your tests running in the background.

I missed this in PowerShell, so I wrote a quick module (it literally took about 30 minutes, so it is hopefully pretty simple). If you want to run Invoke-Pester in a terminal whenever your code or tests change, you can now do:


Install-Module PestWatch
Import-Module PestWatch
Invoke-PesterWatcher

Nice and simple, hey :) If you have any arguments that you want Pester to see, just add them to the call to Invoke-PesterWatcher and it silently forwards them on for you.
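
For example, to only run tests with a particular tag (the parameter is Invoke-Pester's and is just passed through unchanged):

Invoke-PesterWatcher -Tag 'unit'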

The source code is available:

https://github.com/GoEddie/PestWatch

The module:

https://www.powershellgallery.com/packages/PestWatch/

Still not sure? Watch the full glory here:

Enjoy!
