This one stems from an interesting discussion on LinkedIn around Delayed Durability and In-Memory OLTP, looking at whether Delayed Durability might deliver the performance win that In-Memory OLTP is billed to be. I think they serve different purposes, but if your goal is simply to improve throughput on specific workload types involving very high volumes of single-record transactions, Delayed Durability could be a viable performance winner for you, with one caveat: you absolutely must be comfortable with some data loss.
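For context, here is a minimal sketch of how the feature is switched on. Delayed durability can be allowed per transaction or forced database-wide (the table name below is hypothetical, purely for illustration):

```sql
-- Allow delayed durability, opted into per transaction
ALTER DATABASE AdventureWorks2014 SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
    INSERT INTO dbo.SomeTable (Col) VALUES (1); -- hypothetical table
-- Commit returns to the client before the log block is flushed to disk
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

-- Or force it for every transaction in the database (as the test script below does)
ALTER DATABASE AdventureWorks2014 SET DELAYED_DURABILITY = FORCED;
```

With FORCED in place, individual commits cannot opt back in to full durability, which is why it suits a dedicated test database rather than a shared instance.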
What interested me was the point that SQL Server doesn't always wait until the log block is full before flushing. So, using the code pinched from the first part of the SQL Server Premier Field Engineer Blog, I had a look at the log flushes on my box: a low-spec sandbox SQL Server 2016 installation with AdventureWorks2014 on it.
The results for the normal transactions look like this:-
Pretty much every transaction produced a 512-byte flush; there were a couple where this stepped up to 1k. The total log write size was 6,015k, and the whole process took ~4 seconds.
The delayed durability run looked like this:-
What was interesting is that the log flush size is fairly random. Despite most information on the topic suggesting that the full 60k block is filled before the flush takes place, this is clearly not the case here. In total, 972 flushes took place as opposed to the 12,031 flushes in the first run, the total size of log records written was 4,277k, and the whole process took ~2 seconds.
As Aaron points out in his blog, there are some circumstances where the transaction will still be made fully durable; it would be interesting to understand a little more about when these occur!
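One such trigger you can control yourself is the documented system procedure sys.sp_flush_log, which forces any pending delayed durable log records to disk on demand; a sketch of how you might use it:

```sql
-- Flush all delayed durable transactions currently sitting in the log buffer
-- to disk, making them durable (useful before a planned shutdown or failover,
-- when you don't want in-flight commits exposed to loss).
EXEC sys.sp_flush_log;
```

Beyond an explicit flush like this, losing only the contents of the in-memory log buffer is the worst case, so the exposure window is small but real.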
The full code I used is below if you wish to replicate the test; it would be interesting to know your results and thoughts on this feature.
USE AdventureWorks2014
GO

IF EXISTS (SELECT * FROM sys.server_event_sessions AS DXS WHERE name = 'LogRecords')
    DROP EVENT SESSION LogRecords ON SERVER;

/* create our event session using the histogram target */
CREATE EVENT SESSION [LogRecords] ON SERVER
ADD EVENT sqlserver.log_flush_start(
    ACTION(sqlserver.database_name)
    WHERE ([database_id]=(10))) /* AW2014 is db 10 on my machine */
ADD TARGET package0.histogram(
    SET filtering_event_name=N'sqlserver.log_flush_start',
        source=N'write_size', source_type=(0))
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
      MAX_DISPATCH_LATENCY=30 SECONDS, MAX_EVENT_SIZE=0 KB,
      MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF)
GO

ALTER DATABASE AdventureWorks2014 SET DELAYED_DURABILITY = DISABLED;
GO

/* create our table of numbers for the sample */
IF OBJECT_ID('dbo.Numbers') IS NOT NULL
    DROP TABLE dbo.Numbers;
GO

CREATE TABLE dbo.Numbers
(
    Num INT NOT NULL,
    CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Num)
);
GO

SET STATISTICS IO OFF;
SET NOCOUNT ON;

/* turn on the session */
ALTER EVENT SESSION LogRecords ON SERVER STATE = START;

/* go watch the XE session at this point */
SET STATISTICS TIME ON

DECLARE @Number INT;
SET @Number = 1;

WHILE @Number <= 12000
BEGIN
    INSERT INTO dbo.Numbers (Num)
    SELECT @Number;

    SET @Number = @Number + 1;
END;
GO

SET STATISTICS TIME OFF

ALTER EVENT SESSION LogRecords ON SERVER STATE = STOP;

/* turn on the delayed durability */
ALTER DATABASE AdventureWorks2014 SET DELAYED_DURABILITY = FORCED;
GO

TRUNCATE TABLE dbo.Numbers;
GO

ALTER EVENT SESSION LogRecords ON SERVER STATE = START;

/* go watch the XE session at this point */
SET STATISTICS TIME ON

DECLARE @Number INT;
SET @Number = 1;

WHILE @Number <= 12000
BEGIN
    INSERT INTO dbo.Numbers (Num)
    SELECT @Number;

    SET @Number = @Number + 1;
END;
GO

SET STATISTICS TIME OFF

ALTER EVENT SESSION LogRecords ON SERVER STATE = STOP;
GO