SQL Server – The cost benefits of performance tuning

At the time of writing, a 2-core pack of SQL Server 2016 Enterprise licenses sits around the $15,000 mark. Given that many installations are now set up as three node Availability Groups, covering even 2 cores per node means buying two of those packs, setting your Availability Group back $30,000 (so long as you aren’t using the third node for anything other than DR purposes, it doesn’t need licensing).

The reason I bring this up is that there is an intrinsic cost benefit to performance tuning. Imagine a SQL Server running hot on CPU, where on analysis you see a bunch of poorly performing queries doing lots of parallel table scans. By tuning those queries, you can bring the CPU down, and then potentially remove CPU cores from the server itself, saving you a packet on your license cost.
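
As a starting point, a sketch along these lines (assuming you have VIEW SERVER STATE permission; the TOP count is an arbitrary cut-off) will surface the cached plans burning the most CPU:

```sql
-- A starting point for finding the queries burning the most CPU.
-- Requires VIEW SERVER STATE; TOP (20) is an arbitrary cut-off.
SELECT TOP (20)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    (qs.total_worker_time / qs.execution_count) / 1000 AS avg_cpu_ms,
    SUBSTRING(st.text,
              (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```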

Furthermore, spending a bit more on disk subsystems or RAM can quite easily have a very positive effect on CPU usage: more data held in RAM means less CPU spent pulling those data pages from disk (and less time waiting while your disk subsystem carries this out at sub-optimal speeds), reducing your CPU footprint. Even if you can only shed a single 2-core pack, $15,000 buys you a lot of RAM and disk.
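
If you want a rough feel for whether more RAM would actually help, Page Life Expectancy is one crude but handy indicator of buffer pool pressure; a minimal sketch against the standard Buffer Manager counter:

```sql
-- Page Life Expectancy: how long, in seconds, a data page is expected to
-- stay in the buffer pool. A persistently low value suggests pages are
-- being pushed out and re-read from disk, i.e. more RAM may well help.
SELECT [object_name],
       counter_name,
       cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';
```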

So, next time you hear someone saying they don’t have time to performance tune, or that they don’t see the value in it, remind them of the cost of those core licenses!

A word to IT recruiters

Recently, I started looking for a new position, and fortunately that search has concluded successfully.  I thought I’d take the opportunity whilst serving my notice to share some thoughts on my interactions with recruitment agents, both during this search (which didn’t take long) and at other times when I wasn’t actively looking.

Continue reading

SQL Server – Issues I have with SSDT based deployments: Part 4.2

Continuing the section on getting a large legacy database imported into an SSDT project that actually builds, we’re now at a point where only genuine reference issues should be left to clean up: all our cross database references should be sorted via a database reference to a DACPAC (whether or not we changed everything over to SQLCMD variables).
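
For illustration, once a database reference to the DACPAC is in place, a cross database call typically ends up looking something like the sketch below (the database, schema and object names are made up):

```sql
-- Before the import: a hard coded cross database reference that SSDT
-- can't resolve on its own (names are purely illustrative).
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN SalesArchive.dbo.Customers AS c
    ON c.CustomerID = o.CustomerID;

-- After adding a database reference to the SalesArchive DACPAC: the same
-- call routed through the SQLCMD variable generated for that reference.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN [$(SalesArchive)].dbo.Customers AS c
    ON c.CustomerID = o.CustomerID;
```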

At this point, we’re going to get really annoyed with all the legacy code left in our database over years of not tidying up, much of which simply doesn’t work anyway.  Technical debt sucks.
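
One way to get a rough feel for the scale of the problem, assuming the offending modules are still sitting in the live database, is to hunt for references that no longer resolve to anything:

```sql
-- Hunt for modules whose same-database references no longer resolve.
-- Cross database references carry a referenced_database_name, so they
-- are excluded; what's left is genuinely dangling legacy code.
SELECT
    OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
    OBJECT_NAME(d.referencing_id)        AS referencing_object,
    d.referenced_schema_name,
    d.referenced_entity_name
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_id IS NULL
  AND d.referenced_database_name IS NULL
  AND d.is_ambiguous = 0
ORDER BY referencing_schema, referencing_object;
```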

Let’s look at some of the things that are going to annoy us! Continue reading

SQL Server – Issues I have with SSDT based deployments: Part 4.1

Continuing my series on SSDT based deployments, in this part I’m going to look into something that’s not entirely SSDT’s fault, but which I’m sure could be less painful: importing pre-existing databases into an SSDT project.  Particularly ones that have cross database references, or three part naming of objects (database.schema.object), which effectively makes them self referencing.
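
To be clear about what I mean by a self referencing object, it’s the pattern sketched below (names are made up); SSDT treats the three part name as an external reference to the very database the object already lives in:

```sql
-- A procedure inside SalesDb that names its own database explicitly.
-- On import, SSDT typically flags SalesDb.dbo.Orders as an unresolved
-- (external) reference, even though it's the same database.
CREATE PROCEDURE dbo.GetRecentOrders
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM SalesDb.dbo.Orders      -- three part name, self referencing
    WHERE OrderDate >= DATEADD(DAY, -7, SYSDATETIME());
END;
GO

-- The clean-up is usually just dropping the database name.
ALTER PROCEDURE dbo.GetRecentOrders
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders              -- two part name resolves within the project
    WHERE OrderDate >= DATEADD(DAY, -7, SYSDATETIME());
END;
GO
```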

Firstly, I want to state that I don’t really like cross database references.  In a world of filegroups, schemas and so on, I think a large number of multiple database setups should probably just be merged into a single db.  I address quite a few of these thoughts in my article SQL Server – When is a new database appropriate?

However, I’m also aware that this refactoring takes time, and that the damage is already done.  Add to this the fact that these databases will most likely not have been developed under any source control, and so somehow we have to drag them, kicking and screaming, into TFS and an SSDT project…

Continue reading

SQL Server – Issues I have with SSDT based deployments: Part 2

In my previous post, I alluded to some of the issues I’ve found using SSDT based deployments against a live server with lots of data.  For me, these are the configuration options that cause no problems in development, but could cause significant pain when performing a live release using the SSDT DACPAC publish.

In this post, I’m going to look at the Database settings, and in particular, how filegroups behave when you perform a deployment.
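
Before we get into that, it’s worth knowing what the target actually looks like; a quick sketch for listing the filegroups and data files a deployment will be working against:

```sql
-- List the filegroups and data files in the target database, so you can
-- compare them with what the SSDT project expects before publishing.
SELECT
    fg.name            AS filegroup_name,
    fg.is_default,
    df.name            AS logical_file_name,
    df.physical_name,
    df.size * 8 / 1024 AS size_mb
FROM sys.filegroups AS fg
INNER JOIN sys.database_files AS df
    ON df.data_space_id = fg.data_space_id
ORDER BY fg.name, df.name;
```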

Continue reading

SQL Server – Issues I have with SSDT based deployments: Part 1

Continuous integration is rightly a hot topic in today’s world, where the desired time to bring products to market is ever shrinking.  Database lifecycle management is a key part of this, yet database professionals have generally been slow to adopt continuous integration methodologies, often hiding behind frameworks such as ITIL and pointing to issues of safety and caution.

It’s undeniable that a database release is a much more complicated beast than an application release, due to one really key part of a database: data.  Unlike an application, you can’t just destroy your database and start over with a clean slate at every release; if you did, your data, and most likely your business, would be gone.  Whoops.  This is where I start getting a little nervous with some of the options available in SSDT deployments, such as this one:-

[Screenshot: the SSDT publish option that drops and re-creates the database]

Continue reading

SQL Server – SYSDATETIME() versus GETDATE()

SQL Server 2008 introduced the datetime2 datatype, with greater flexibility for precision and the added bonus that it is truly compliant with the ANSI SQL standard and ISO 8601.  For more info, read up on the Books Online entry for datetime2.

While not everybody (including Microsoft!) has adopted this datatype, if you really want accuracy in your datetime columns you’re better off switching to it, and I know quite a few implementations that do.
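
As a quick illustration of the precision difference between the two (the values in the comments are only examples):

```sql
-- GETDATE() returns datetime, accurate to roughly 3.33 milliseconds;
-- SYSDATETIME() returns datetime2(7), accurate to 100 nanoseconds.
SELECT
    GETDATE()     AS getdate_value,      -- e.g. 2017-03-01 10:15:30.123
    SYSDATETIME() AS sysdatetime_value;  -- e.g. 2017-03-01 10:15:30.1234567

-- datetime2 also lets you choose only the precision you actually need.
DECLARE @old_style datetime     = GETDATE();
DECLARE @new_style datetime2(7) = SYSDATETIME();
SELECT @old_style AS old_style, @new_style AS new_style;
```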

Continue reading

SQL Server – When is a new database appropriate?

I frequently get requests from our developers to create new databases.  This is no biggie, and using the scripts from Standardising new database creation using Powershell and SMO, it takes no time at all.  However, something that usually isn’t considered at the point of a request is: why do we need a new database?

Continue reading

SQL Server – Delayed Durability and Log Flushes

This one stems from an interesting discussion on LinkedIn around Delayed Durability and In Memory OLTP, looking at whether Delayed Durability might deliver the performance win that In Memory OLTP is billed to be.  I think they have different purposes, but purely to improve throughput on specific workload types involving very high volume, single record transactions, Delayed Durability could be a viable winner for you, with one caveat: you absolutely must be comfortable with some data loss.
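
For context, here’s a minimal sketch of what opting in looks like, first at the database level and then per transaction (the database and table names are made up):

```sql
-- Allow delayed durable commits at the database level (FORCED applies it
-- to every transaction; DISABLED, the default, keeps full durability).
ALTER DATABASE SalesDb SET DELAYED_DURABILITY = ALLOWED;

-- Opt in for a specific high volume, single record transaction. The commit
-- returns before the log block is hardened to disk, so a crash or failover
-- in that window loses the transaction.
BEGIN TRANSACTION;
    INSERT INTO dbo.AuditTrail (EventTime, EventText)
    VALUES (SYSDATETIME(), N'High volume event');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```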

Continue reading