SQL Server 2008 introduced the datetime2 datatype, with greater flexibility for precision (0 to 7 fractional-second digits) and the added bonus that its default string format is compliant with the ANSI SQL standard and ISO 8601.  For more info, read up on the Books Online entry for datetime2.

While not everybody (including Microsoft!) has adopted this datatype, if you really want accuracy in your datetime columns you’re better off switching to it, and I know quite a few implementations that have.
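As a quick illustration (a minimal sketch, with made-up values), the precision difference is easy to see by casting a datetime2 value down to datetime, which can only resolve to roughly 1/300th of a second:

```sql
-- datetime2(7) resolves to 100 nanoseconds; datetime rounds to .000/.003/.007 ticks.
DECLARE @d2 datetime2(7) = '2015-06-01 12:34:56.1234567';

SELECT @d2                   AS as_datetime2,  -- keeps 12:34:56.1234567
       CAST(@d2 AS datetime) AS as_datetime;   -- rounds to 12:34:56.123
```

The unambiguous 'yyyy-mm-dd hh:mm:ss' literal format is part of what makes datetime2 play nicely with ISO 8601, regardless of session language settings.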

Continue reading


SQL Server – Why storing dates as strings is a bad idea

Consider the following value: '01-02-10'.  Is it the 1st of February 2010?  Is it the 2nd of January 2010?  Is it the 10th of February 2001?

All three of those answers are potentially valid, and you can convert that string into all three:-

DECLARE @DateString CHAR(8) = '01-02-10';
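To see all three interpretations in one go, here’s a hedged sketch completing that snippet: the same string converts to three different dates purely depending on the session’s DATEFORMAT setting.

```sql
DECLARE @DateString CHAR(8) = '01-02-10';

SET DATEFORMAT dmy;  -- day-month-year: 1st of February 2010
SELECT CONVERT(datetime, @DateString) AS dmy_result;

SET DATEFORMAT mdy;  -- month-day-year: 2nd of January 2010
SELECT CONVERT(datetime, @DateString) AS mdy_result;

SET DATEFORMAT ymd;  -- year-month-day: 10th of February 2001
SELECT CONVERT(datetime, @DateString) AS ymd_result;
```

That’s exactly why storing dates as strings is dangerous: the data never changes, but its meaning shifts with the session settings of whoever happens to read it.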
Continue reading

SQL Server – Understanding the TOP operator and implications for batch processing

I was debating using a picture of Kylie Minogue in this blog post, from her video for the song “Spinning Around”.  Because it is relevant, and not just because it’s a picture of Kylie….

Kylie likes loops

Anyway.  Today, we had a problem where “all of a sudden” CPU was maxing out on a server and disk I/O was going through the roof.  Uh-oh, I thought, today is going to be rubbish.  Rolling up my sleeves, I opened up SQL Sentry Performance Advisor to have a look at what was going on.
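The batch-processing pattern the title refers to typically looks something like this (a hedged sketch with hypothetical table and column names, not the code from this incident): TOP caps the rows touched per transaction, and the loop spins around until nothing is left.

```sql
-- Delete old rows in small batches so each transaction stays short
-- and the transaction log doesn't balloon.  Names are illustrative.
DECLARE @BatchSize int = 5000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.AuditLog
    WHERE LoggedAt < DATEADD(MONTH, -6, GETDATE());

    IF @@ROWCOUNT < @BatchSize
        BREAK;  -- final (possibly partial) batch done
END
```

Get the batch size or the WHERE clause wrong, though, and that innocent-looking loop can happily max out CPU and hammer the disks.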

Continue reading

SQL Server – Top 5 things I learnt at SQLskills’ IEPTO1 course

Last week, I was fortunate enough to be on SQLskills’ Immersion Event on Performance Tuning and Optimization Part 1 with Paul S. Randal and Kimberly L. Tripp.  I’d highly recommend the course, which was excellent, albeit a little tiring: five full, in-depth days diving into SQL Server’s internals.

Continue reading

SQL Server – Odd behaviour with cached temp tables, a linked server and MSDTC (failing)

I recently encountered some strange behaviour in a production system, whereby a procedure called from an application failed with the error “unable to enlist in a distributed transaction”.  However, running the procedure from SSMS worked fine, and afterwards the procedure worked again without problem from the application too.


Continue reading

SQL Server – Security: Schemas, Ownership Chaining and the dreaded TRUSTWORTHY flag – Part 1


Warning: this post involves me standing on a soapbox: it’s going to be a bit of a rant/preach about why everyone should care about security!  Starting with….


When I talk to my developer colleagues, very few of them realise that database schemas are actually a security feature of SQL Server.  They also don’t appreciate that security in a database needs to be designed; it doesn’t just happen.  If it’s considered from an early point in the design of the database and application, it will be much easier to implement, and it will cause fewer headaches in the long run when they ask me for some elevated permission or other and I say “no, because….” and start explaining about ownership chaining and a whole bunch of other security related stuff that makes them roll their eyes at me…. ahem, I digress.
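For the sceptics, here’s a minimal sketch of the idea (all object and role names here are mine, purely for illustration): keep tables in one schema, expose only procedures in another, and let ownership chaining do the rest.

```sql
CREATE SCHEMA Data;   -- tables live here; nobody gets direct access
GO
CREATE SCHEMA Api;    -- only stored procedures are exposed here
GO
CREATE TABLE Data.Customer (Id int PRIMARY KEY, Name nvarchar(100));
GO
CREATE PROCEDURE Api.GetCustomers
AS
    SELECT Id, Name FROM Data.Customer;
GO
-- Because Api.GetCustomers and Data.Customer share the same owner,
-- ownership chaining means the caller only needs EXECUTE on the
-- procedure -- no SELECT permission on the underlying table at all.
GRANT EXECUTE ON Api.GetCustomers TO SomeAppRole;
```

The application role can run the procedure but can’t go poking around the tables directly, which is precisely the kind of design that has to be decided up front.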
Continue reading

SQL Server – Achieving minimally logged inserts into partitioned table with a computed column as the partitioning key

A while ago, I started a thread on SQLServerCentral discussing tactics for partitioning a huge table with respect to index alignment.  During the discussion, Steve Hall pointed out a great way of working with cycling partitions using a computed column, which made some of the maintenance aspects of partition archiving a doddle.  I’ve since adopted this tactic as a standard method for partitioning tables that have a rolling archive window, and for the most part it works very nicely.

Recently, we’ve had a requirement to move another large heap table into a partitioning structure to ease maintenance on the table, and to ensure we aren’t just storing unneeded data forever.  After all, querying a 98GB heap with around half a billion rows just isn’t a good idea anyway!  In this instance, however, the old data needed migrating into the new table, which created a bit of a problem in terms of the logging level. Continue reading
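The computed-column trick looks roughly like this (a hedged sketch; the boundary values, object names and month-based key are illustrative, not the production schema): the partitioning key is derived from the date, so rows always land in the right partition and the key itself never needs maintenance.

```sql
-- Partition function and scheme keyed on an int "yyyymm" value.
CREATE PARTITION FUNCTION pfMonthly (int)
    AS RANGE RIGHT FOR VALUES (201501, 201502, 201503);
GO
CREATE PARTITION SCHEME psMonthly
    AS PARTITION pfMonthly ALL TO ([PRIMARY]);
GO
CREATE TABLE dbo.Fact
(
    FactDate     datetime2(0) NOT NULL,
    Amount       money        NOT NULL,
    -- A computed column used as a partitioning key must be PERSISTED.
    PartitionKey AS (YEAR(FactDate) * 100 + MONTH(FactDate)) PERSISTED NOT NULL
) ON psMonthly (PartitionKey);
```

Archiving the oldest month then becomes a metadata-only partition SWITCH plus a boundary MERGE, rather than a massive logged DELETE.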