I recently encountered some strange behaviour in a production system: a procedure called from an application was failing with the error “unable to enlist in a distributed transaction”. However, running the same procedure from SSMS worked fine, and afterwards the procedure worked again from the application without problem.
This behaviour worried me. I couldn’t quite work out what was happening, especially when digging into the code of the procedure: all it did was create a temp table and populate it with a small set of data from a linked server, which was then joined to some local data and returned.
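The shape of the procedure was roughly the following. All the object and server names here are invented for illustration; the point is just the pattern of a temp table populated from a four-part linked server name:

```sql
CREATE PROCEDURE dbo.GetRemoteJoinedData
AS
BEGIN
    -- Temp table to hold the small remote result set
    CREATE TABLE #RemoteRows (Id int PRIMARY KEY, Val varchar(50));

    -- Four-part name: inside a transaction, this query enlists
    -- the remote server via MSDTC
    INSERT INTO #RemoteRows (Id, Val)
    SELECT Id, Val
    FROM LinkedSrv.RemoteDb.dbo.RemoteTable;

    -- Join the remote rows to local data and return the result
    SELECT l.Id, l.Description, r.Val
    FROM dbo.LocalTable AS l
    JOIN #RemoteRows AS r ON r.Id = l.Id;
END;
```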
The key was the distributed transaction (which, incidentally, wasn’t really needed at all for this particular operation, but that’s a different matter and a discussion on transaction handling for another day). What had happened was that the MSDTC service on the remote server had failed, which was preventing the transaction from being enlisted. What was more interesting to me, though, was why this behaviour was no longer an issue after the procedure had been run manually on the server through SSMS.
It took much scratching of my head, but eventually I figured out the problem. I had been doing some research into temp table caching for a class I was running for developers, specifically looking at statistics on cached temp tables. I decided to have a look at the sys.objects catalogue view in tempdb, and noticed that while the error was occurring, the temp table wasn’t cached. I had a bit of a eureka moment: this completely explained why running the procedure manually appeared to solve the problem – it re-cached the temp table!
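The check itself is a simple query against tempdb’s catalogue views. A temp table that has been cached for reuse by a stored procedure lingers in tempdb.sys.objects after the procedure completes, with a name reduced to a # followed by a hex identifier rather than the name used in the procedure:

```sql
-- List user tables currently sitting in tempdb.
-- Cached temp tables show up with names like '#A1B2C3D4'
-- even when no session is actively using them.
SELECT name, object_id, create_date
FROM tempdb.sys.objects
WHERE type = 'U'
  AND name LIKE '#%';
```

Run this before and after executing the procedure: if caching is working, a new #-prefixed entry appears after the first execution and survives it.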
The real problem, obviously, was that DTC was down on the target server, but it manifested this way because SQL Server required a transaction to create the temp table – but only if the table wasn’t already in the cache – and it couldn’t enlist that transaction because MSDTC was screwed up.
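Since the distributed transaction wasn’t actually needed here, one avenue worth investigating (assuming the remote work goes through remote procedure calls, and after testing the consistency implications) is turning off transaction promotion on the linked server, so that local transactions don’t get escalated to MSDTC in the first place:

```sql
-- Prevent remote proc calls over this linked server from
-- promoting a local transaction to a distributed one.
-- 'LinkedSrv' is a placeholder name.
EXEC sp_serveroption
    @server   = N'LinkedSrv',
    @optname  = 'remote proc transaction promotion',
    @optvalue = 'false';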