SQL Server Snippets

Yesterday, while working with a colleague, the topic got steered toward the use of snippets in SQL Server 2012+. My reflex was to let my colleague know that my blog is a wonderful source for all things SQL Server. Much to my chagrin, for all of the snippets I have, I had never written a blog post about them.

Visual Studio has had code snippets for a decade now, but SQL Server only got them a version and a half ago. I imagine that having templates was the excuse given for not pursuing them sooner, but I am glad we have them now. The Tools menu will lead you to the Code Snippets Manager; from there you can find the path to the built-in snippets. These provide good fodder for creating your own.

There are two types of snippets. Expansion snippets insert the code snippet where the cursor is. Surround snippets wrap code around highlighted SQL. Both support declared literals that let you tab through the variable portions and update multiple references from a single place. Let's take a look at a simple expansion snippet.

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
    <_locDefinition xmlns="urn:locstudio">
        <_locDefault _loc="locNone" />
        <_locTag _loc="locData">Title</_locTag>
        <_locTag _loc="locData">Description</_locTag>
        <_locTag _loc="locData">Author</_locTag>
        <_locTag _loc="locData">ToolTip</_locTag>
    </_locDefinition>
    <CodeSnippet Format="1.0.0">
        <Header>
            <Title>Create Temporary Table</Title>
            <Description>Creates a temporary table.</Description>
            <Author>Thomas W Marshall</Author>
            <SnippetTypes>
                <SnippetType>Expansion</SnippetType>
            </SnippetTypes>
        </Header>
        <Snippet>
            <Declarations>
                <Literal>
                    <ID>TableName</ID>
                    <ToolTip>Name of the Table</ToolTip>
                    <Default>TableName</Default>
                </Literal>
                <Literal>
                    <ID>Column1</ID>
                    <ToolTip>Name of the Column</ToolTip>
                    <Default>Column1</Default>
                </Literal>
                <Literal>
                    <ID>Datatype_Column1</ID>
                    <ToolTip>Data type of the Column</ToolTip>
                    <Default>int</Default>
                </Literal>
                <Literal>
                    <ID>Column2</ID>
                    <ToolTip>Name of the Column</ToolTip>
                    <Default>Column2</Default>
                </Literal>
                <Literal>
                    <ID>Datatype_Column2</ID>
                    <ToolTip>Data type of the Column</ToolTip>
                    <Default>int</Default>
                </Literal>
            </Declarations>
            <Code Language="SQL"><![CDATA[
IF EXISTS (SELECT name FROM tempdb.sys.objects WHERE object_id = OBJECT_ID('tempdb..#$TableName$') AND type = N'U')
    DROP TABLE #$TableName$;
CREATE TABLE #$TableName$ (
    $Column1$ $Datatype_Column1$,
    $Column2$ $Datatype_Column2$
);
]]></Code>
        </Snippet>
    </CodeSnippet>
</CodeSnippets>


You can copy the above XML into a file with the extension .snippet and save it in the Code Snippets\SQL\My Code Snippets directory, after which you should see it in the Code Snippets Manager. Alternatively, you can import a .snippet file using the Code Snippets Manager. Once the snippet has been installed, whenever you strike the expansion chord Ctrl+K, Ctrl+X in SQL Server Management Studio you will see it in the My Code Snippets folder of the fly-out menu.

In the above XML document we can see a Literal declaration for each of the variable fields. Once you have chosen to insert the snippet, those declared literals are linked by their ID appearing between $'s in the code. This allows you to change the value in one place and have all instances of that ID update. You can also tab from one literal to the next.


You’ll never hear successful people say these 15 phrases

Originally posted on Fortune:

This post is in partnership with Entrepreneur. The article below was originally published at Entrepreneur.com.

By Sujan Patel, Entrepreneur.com

If you want to become more successful as an entrepreneur or in your career, you can start by making a habit of talking and thinking more like the people you know or read about who are already successful.


Here are some phrases you’ll never hear a successful person say:

1. “We can’t do that.”

One thing that makes people and companies successful is the ability to make solving their customers’ problems and demands their main priority. If a need arises repeatedly, the most successful people learn how to solve it as quickly as they can.

2. “I don’t know how.”

Instead of automatically shutting down solution-finding, successful people learn what they can in order to succeed in a project or in their career. For example, you would never…



How successful people work less—and get more done


7 of 10 isn’t so bad.

Originally posted on Quartz:

This post originally appeared at LinkedIn. Follow the author here.

As co-founder of Hotwire.com and CEO of Zillow for the last seven years, 39-year-old Spencer Rascoff fits most people’s definition of success. As a father of three young children, Spencer is a busy guy at home and at work.

What’s the one thing that Spencer refuses to do on the weekend? Work—at least, in the traditional sense. Rascoff says:

I never go into the office on weekends, but I do check e-mail at night. My weekends are an important time to unplug from the day-to-day and get a chance to think more deeply about my company and my industry. Weekends are a great chance to reflect and be more introspective about bigger issues.

A new study from Stanford shows that Rascoff is on to something.

The study found that productivity per hour declines sharply when the workweek exceeds 50…


Requirements Analysis

In an earlier post we started generating prime candidates, testing only numbers of the form 6k+1 or 6k-1 less than or equal to the square root of the next integer. The reason we were doing this is that all prime numbers greater than 3 are of the form 6k+1 or 6k-1, and we only have to modulo test primes. The reason we only have to test primes is that the Fundamental Theorem of Arithmetic tells us that every composite number greater than one is a product of primes.

What that means is that we only have to modulo test prime numbers. Of course, in pursuing that particular optimization we actually neglected to test first whether the next integer itself was of the form 6k+1 or 6k-1. The point of this post is to highlight how, in the course of developing SQL (or any other code for that matter), we can read our requirements in a number of ways and will need to step back and reconsider whether our first reading was complete or even correct.

Consider the following statements: "We only need to modulo test primes for a given integer." and "We need to test whether or not there exists a prime factor for a given integer." These two statements mean essentially the same thing. Combined with the knowledge that primes take the form 6k+1 or 6k-1, they led us to skip testing whether the integer itself was of that form. Now assume we have already filled a prime table up to a certain number and are testing the prime candidates: those two statements can lead us to two very different approaches in SQL.

The first statement: “We only need to modulo test primes for a given integer.” could lead us to write something like:


SELECT TOP 1 1
FROM prime_candidates pc
INNER JOIN primes p ON pc.p % p.p = 0
WHERE pc.p = @n;

Whereas the second statement: "We need to test whether or not there exists a prime factor for a given integer." might lead us to write something like:

SELECT TOP 1 1 FROM primes p WHERE @n % p = 0

Since our requirement was to test for the existence of a prime factor of a given integer, and not to produce the set of prime factors, performing the non-equijoin produces a less efficient execution plan.


Seeks that lead to lookups are usually more efficient than scans, and unnecessary operations are inherently inefficient. In either case it is important to test that we do in fact get the results we expect.
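In that spirit, a quick sanity check might look like the following sketch. It assumes the primes table and its p column from the queries above, and restricts divisors to those at or below the square root, as described earlier:

```sql
-- Spot-check a value whose primality we already know.
DECLARE @n INT = 97;  -- 97 is prime, so no prime <= sqrt(97) should divide it

IF EXISTS (SELECT TOP 1 1
           FROM primes p
           WHERE p.p * p.p <= @n  -- only divisors up to the square root
             AND @n % p.p = 0)
    PRINT 'composite';
ELSE
    PRINT 'prime';
```

Running the same check for a known composite (say 91 = 7 x 13) confirms the other branch fires.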

Naming Conventions in SQL Server

It seems that everyone has an opinion on naming conventions. Microsoft even has a few recommendations. You would think that with 256 (or 128 or 30) characters it would be simple to devise an apt system for describing database objects.

SELECT TYPE_NAME(system_type_id) [base_type], max_length FROM sys.types WHERE name = 'SYSNAME';

Of course, we might not want to use up our screen real estate in Object Explorer. We also might not want to use that real estate on a single column name (especially if that column is a bit flag). Fortunately, we have options. In addition to being able to use Management Studio to filter objects, we can use schemas to group objects and alias data types to categorize them.

An example case for using schemas might be to create separate schemas for staging updates and deletes in CDC-based warehouse loads. An example for using alias data types might be to differentiate between date columns that are application data and date columns that are row-level metadata (e.g. inserted or updated date).
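As a sketch of both ideas (the schema names, type names, and table are invented for illustration):

```sql
-- Separate schemas for CDC staging of updates and deletes.
CREATE SCHEMA stage_upd;
GO
CREATE SCHEMA stage_del;
GO

-- Alias types to tell application dates apart from row-level metadata dates.
CREATE TYPE dbo.app_date FROM DATE;
CREATE TYPE dbo.meta_datetime FROM DATETIME2(3);
GO

CREATE TABLE stage_upd.customer
(
    customer_id   INT               NOT NULL,
    birth_date    dbo.app_date      NULL,      -- application data
    inserted_date dbo.meta_datetime NOT NULL   -- row-level metadata
);
```

The column names stay short, while the schema and the alias type carry the categorization that would otherwise be crammed into prefixes.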


Earlier this year I posted a series of posts about prime numbers to demonstrate coding principles and techniques. As recursion is typically a very efficient approach, in my last post on the subject I stated that we could create a stored procedure to call our stored procedure recursion, which pulls from our function recursion. Of course, as a parent procedure, that procedure would contribute to the maximum procedure nesting level of 32. That approach would look something like this:

-- =============================================
-- Author:		Thomas Marshall
-- Create date: 4/23/2015
-- Description:	
-- =============================================
CREATE PROCEDURE iteratingParent3100PrimeTestCalls 
	 @max_iterations INT = 0,
	 @demo_override INT = 2
AS
BEGIN
	DECLARE @n INT = 1;
	WHILE @n <= @max_iterations
	BEGIN
		EXECUTE [dbo].[thirtyTwoHundredPrimalityTests] @demo_override; --2
		SET @n += 1;
	END
END

With the default value for the max iterations parameter, nothing will happen. If we pass the demo override parameter a value of 1 or less, we will get an error because our procedure will nest more than 32 levels. As you may know, iteration here is dramatically less efficient than recursion. If you didn't already guess from the title of this post, we are going to try to get more out of the recursive part. What's great is just how easy it is.

First we are going to make a variation on our recursive CTE function. You'll notice the only difference from the hundred more prime tests function, besides the name, is that we restrict our CTE to 2 to the power of 15 rows. It's important to note that the query inside the function will throw an error without OPTION (MAXRECURSION 32767), yet the CREATE statement will error if we put that hint inside the function; the hint has to come from the caller. It's tangential to note that (2^15)-1 is the largest signed 16-bit integer and the ZIP code for Paisley, Florida.

CREATE FUNCTION [dbo].[sixteenBitsMorePrimeTests](@n INT)
RETURNS TABLE
AS
RETURN
(
	WITH recurseMore AS (
		SELECT @n AS p, 1 AS n
		UNION ALL
		SELECT p + 1, n + 1 FROM recurseMore
		WHERE n < 32768
	)
	SELECT p, n FROM recurseMore
	WHERE dbo.intIsPrime(p) = 1
);



Fortunately for us, we can specify OPTION (MAXRECURSION 32767) on the query that selects from the function inside the calling procedure, so we can avoid erroring out on the recursion.

-- =============================================
-- Author:		Thomas W Marshall
-- Create date: 
-- Description:	
-- =============================================
CREATE PROCEDURE [dbo].[thirtyTwoTimeTwoToThePowerOfSixteenPrimalityTests] 
	@c INT = 1
AS
BEGIN
	DECLARE @n INT;
	SELECT @n = MAX(p) FROM dbo.primes;
	INSERT INTO dbo.primes 
		SELECT p FROM dbo.[sixteenBitsMorePrimeTests](@n) OPTION (MAXRECURSION 32767);
	SET @c = @c + 1;
	IF @c < 32 
		EXEC [dbo].[thirtyTwoTimeTwoToThePowerOfSixteenPrimalityTests] @c;
END

Now if we like we can have our new procedure called iteratively to keep inserting primes into our table. The hardest part should be naming the parent procedure.
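Such a parent wrapper might look like the sketch below; the procedure name (recursingParentOfRecursivePrimalityTests) and the @batches parameter are invented for illustration:

```sql
-- Hypothetical parent that restarts the 32-level recursive chain as many
-- times as we like, inserting another batch of primes on each pass.
CREATE PROCEDURE [dbo].[recursingParentOfRecursivePrimalityTests]
	@batches INT = 1
AS
BEGIN
	DECLARE @i INT = 1;
	WHILE @i <= @batches
	BEGIN
		-- Each call walks down up to 31 nested levels before returning.
		EXEC [dbo].[thirtyTwoTimeTwoToThePowerOfSixteenPrimalityTests] 1;
		SET @i += 1;
	END
END
```

Because the counter resets on every outer iteration, the nesting limit is never exceeded no matter how many batches we request.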

FOR XML and STUFF you might not know

If you are like me, it's not enough for your dynamic SQL to just be executed at design time; you have to make time to design the code you are dynamically building. What am I talking about? The first part is about not executing dynamic SQL as part of a data-tier application wherever possible. That is to say, dynamic SQL is not going to be able to reuse cached plans the way static and parameterized SQL can. The second part is about including formatting in your dynamic SQL, at least enough to make it readable.

Using dynamic SQL to write your code can be a huge time saver when you have a lot of code to write. It can also be a huge time suck when you have to debug it, optimize it, or just get it all into one string ((N)VARCHAR variable).

Dynamic SQL can make debugging challenging for a number of reasons. For starters, Management Studio will be unable to detect a syntax error until it's executed. Even after it executes, you won't be able to double-click on the error message and jump to the offending line in the query editor. You can, however, use GO to separate your code into batches; double-clicking the error will then jump to the beginning of the batch. It is also difficult to divine from the error message (ERROR_LINE()) which line is actually causing the error.


Let's not get sidetracked. If you want to noodle on those debugging quirks, consider this screenshot from SQL Server Management Studio 2014. If I were going to go off on a tangent, it might be about the fact that GO inside the dynamic string does not behave the way it does outside of the string. That is to say, the entire string fails in spite of the GO, where only the batch containing the error fails with static SQL. And don't get me started on how the execution plan calls a script a batch when determining cost relative to batch. Also worth noting is that the breakpoint encompasses both statements and the assignment, where with static SQL a single statement is broken on.

When it comes to optimization of dynamic SQL, I am going to simply link some reading material and comment. Jeremiah Peschka at Brent Ozar Unlimited has a great post about filtered indexes that kind of contradicts what I am saying about optimizing dynamic SQL. I say kind of because Jeremiah is using dynamic SQL to take advantage of a filtered index he created when a parameter might cause it to be skipped over. Andrew Kelly at SQL Server Magazine mentions in this article that compiling is CPU intensive; he also goes on to show how sp_executesql allows us to cache plans. Derik Hammer explains and demonstrates the need to match the SQL text in order to get plan reuse. Turgay Sahtiyan echoes Derik, plus he is a Senior Premier Field Engineer (I was almost a PFE once). And last but certainly not least, Kimberly Tripp gave us some code to play with and a vigorous discussion with her post on multipurpose procedures. Of course, for all that good stuff we can use to avoid dynamic SQL that is bad at execute time, our monitoring tools are probably only going to show us the generated SQL. As I tend to fall into the DBA role, I make it a point to always attempt to optimize without changing the code that executes… at least when I don't own it. But that is a topic for a whole other discussion, perhaps on whether you can separate workloads into separate instances for ad hoc and prepared workloads.
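As a minimal sketch of the plan-reuse point (the dbo.Orders table and @CustomerID parameter are made up for illustration):

```sql
-- Ad hoc string: a distinct plan may be compiled for every distinct literal.
DECLARE @sql NVARCHAR(MAX) = N'SELECT * FROM dbo.Orders WHERE CustomerID = 42;';
EXEC (@sql);

-- Parameterized with sp_executesql: the text stays identical across calls,
-- so one cached plan can be reused for any value.
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE CustomerID = @CustomerID;',
    N'@CustomerID INT',
    @CustomerID = 42;
```

The second form keeps the SQL text byte-for-byte stable, which is exactly the matching requirement the linked posts describe.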

The last obstacle is a matter of practical consideration. When we have multiple statements, we might build and execute them separately. When we have nested strings building dynamic SQL for us, we might find it easier to have parts of our code in separate columns that copy as a row and paste as a single script. But what about when we are building dynamic strings that roll up one-to-many values like columns and tables? Are we going to have to paste one long line and then format it? That would be another time suck to avoid. Enough teasing; the title of the post said we were going to show an example of FOR XML and STUFF.
Well, that example produces a single line, and I don't feel like formatting it myself. Sure, I could use a tool to auto-format, but I don't have one installed, and I really want to automate the generation of these scripts and output them directly to script files. What if we try to stick in a carriage return and a line feed?


What the heck is this weird emoticon? Is it a winking fish or something? No, it's proof enough that the output of the FOR XML PATH statement is typed as XML. That is to say, it's an XML escape sequence; specifically, it's the character entity for a carriage return. What happened to the line feed? It isn't paired with the carriage return in the XML world, so SQL Server was goodly enough to drop it. You can try using NCHAR instead of CHAR, but the results are the same. What if we use the entity ourselves?


Darn, now the ampersand is being escaped. Ok, we’ll just have to replace it.


There you have it. FOR XML and STUFF you might not know.
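Since the screenshots don't reproduce here, a sketch of the whole pattern follows; dbo.SomeTable is a made-up table name, and the separator layout is just one choice:

```sql
-- Build a newline-separated column list for dbo.SomeTable.
-- FOR XML PATH('') concatenates the rows, escaping CHAR(13) as '&#x0D;';
-- REPLACE un-escapes it back into a real CR+LF, and STUFF strips the
-- leading separator (',' + CR + LF = 3 characters) before the first column.
DECLARE @cols NVARCHAR(MAX);

SELECT @cols = STUFF(
    REPLACE(
        (SELECT ',' + CHAR(13) + '    ' + QUOTENAME(c.name)
         FROM sys.columns c
         WHERE c.object_id = OBJECT_ID('dbo.SomeTable')
         ORDER BY c.column_id
         FOR XML PATH('')),
        '&#x0D;', CHAR(13) + CHAR(10)),
    1, 3, '');

PRINT @cols;
```

Paste the result into a generated CREATE TABLE or SELECT and it arrives already formatted, one column per line.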