Tuesday, 30 July 2013

Handling Blocking in SQL 2005 with LOCK_TIMEOUT and TRY CATCH statements

Handling Blocking and Deadlocks in SQL Stored Procedures

We recently had an issue at night during the period in which daily banner hit/view data is transferred from the daily table to the historical table.

During this time our large website was being hammered by bots and users, and we were getting lots of timeout errors because the tables we wanted to insert our hit records into were being DELETED from and UPDATED by the transfer job, which caused blocking locks.

The default LOCK_TIMEOUT is -1 (wait indefinitely), but we had set it to match our default command timeout of 30 seconds.
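
As a quick illustration (this snippet is not from the original job, just a sketch of the setting itself), LOCK_TIMEOUT is a per-session value measured in milliseconds and can be inspected with @@LOCK_TIMEOUT:

-- check the current lock timeout for this session (-1 = wait indefinitely, the default)
SELECT @@LOCK_TIMEOUT AS CurrentLockTimeoutMs

-- wait a maximum of 30 seconds (30,000 ms) for locks before raising error 1222
SET LOCK_TIMEOUT 30000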

However, if the DELETE or UPDATE in the data transfer job held its locks for over 30 seconds, the competing INSERT (recording a banner hit or view) would fail with a database timeout because the blocking process never released the table in time for our INSERT to do its job.

We tried a number of things including:

  • Ensuring all tables were covered by indexes to speed up any record retrieval.
  • Breaking the DELETE into small batches of 1,000 or 100 rows at a time in a WHILE loop to reduce the length of time the lock was held on each pass (see the sketch after this list).
  • Ensuring any unimportant SELECT statements against these tables used WITH (NOLOCK) to sidestep the locking issues.
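
For illustration, the batching approach looked roughly like the sketch below (the table name and cut-off are made up here, not our actual schema); SQL 2005 supports DELETE TOP (N), so each pass only holds its locks for as long as it takes to remove that one batch:

DECLARE @Rows int
SELECT @Rows = 1

-- keep deleting in small batches until nothing is left to delete
WHILE @Rows > 0
  BEGIN

    DELETE TOP (1000)
    FROM tbl_BANNER_DATA
    WHERE Stamp < DATEADD(day, -1, GETDATE()) -- illustrative cut-off for data already transferred

    -- @@ROWCOUNT is 0 once the last batch has gone
    SELECT @Rows = @@ROWCOUNT

  END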

However, none of these actually solved the problem, so in the end we rewrote our stored procedure (SQL 2005 - 2008) so that it handled the lock timeout error itself and didn't return an error to the caller.

In SQL 2005 you can make use of TRY...CATCH blocks, which meant we could attempt the insert a set number of times and, if it still failed, return quickly without raising an error. We also wrapped the insert in a TRANSACTION so that we could ROLLBACK or COMMIT as appropriate.

We also set the LOCK_TIMEOUT to 500 milliseconds (so three attempts = 1.5 seconds of waiting at most), because if the insert couldn't be done in that time frame there was no point logging it. We could have inserted the record into another table to be added to our statistics later on, but that is another point (a rough idea of that fallback is sketched below).
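
If we had wanted that fallback, the CATCH block could have dumped the failed request into a hypothetical overflow table (tbl_BANNER_DATA_RETRY is an invented name) for a later job to merge into the statistics, using the same @BannerIds and @HitType parameters as the procedure below, e.g.:

-- hypothetical fallback: store the raw CSV and hit type so a later job can replay it
INSERT INTO tbl_BANNER_DATA_RETRY
(BannerIds, HitType, Stamp)
VALUES
(@BannerIds, @HitType, GETDATE())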

The code below shows you how to trap blocking errors, including deadlocks, and handle them.

Obviously this doesn't fix anything; it just "masks" the problem from the end user and reduces the number of database timeout errors caused by long-waiting blocked processes.

CREATE PROCEDURE [dbo].[usp_net_update_banner_hit]

@BannerIds varchar(200), -- CSV of banner IDs e.g 100,101,102

@HitType char(1) = 'V' -- V = banner viewed, H = banner hit

AS

SET NOCOUNT ON

SET LOCK_TIMEOUT 500 -- set to half a second


DECLARE @Tries tinyint

-- start at attempt 1
SELECT @Tries = 1


-- loop for a maximum of 3 attempts

WHILE @Tries <= 3
  BEGIN

    BEGIN TRANSACTION

    BEGIN TRY

      -- insert our banner hits; we are only going to wait half a second for any locks to clear

      INSERT INTO tbl_BANNER_DATA
      (BannerFK, HitType, Stamp)
      SELECT [Value], @HitType, GETDATE()
      FROM dbo.udf_SPLIT(@BannerIds, ',') -- UDF that splits a CSV into a table variable
      WHERE [Value] > 0

      -- if we are here it has been successful i.e. no deadlock or blocking going on
      COMMIT

      -- therefore we can leave our loop
      BREAK

    END TRY

    -- otherwise we have caught an error!
    BEGIN CATCH

      -- always rollback
      ROLLBACK

      -- now check for blocking errors (1222) or deadlocks (1205) and, if it's a deadlock, wait a while to see if that helps

      IF ERROR_NUMBER() = 1205 OR ERROR_NUMBER() = 1222
        BEGIN

          -- if it's a deadlock wait 2 seconds then try again
          IF ERROR_NUMBER() = 1205
            BEGIN

              -- wait 2 seconds to see if that helps the deadlock
              WAITFOR DELAY '00:00:02'

            END

          -- no need to wait for anything for blocking errors as our LOCK_TIMEOUT is going to wait for half a second anyway
          -- and if it hasn't finished by then (500ms x 3 attempts = 1.5 seconds) there is no point waiting any longer

        END

      -- increment and try again for up to 3 goes
      SELECT @Tries = @Tries + 1

      -- we carry on until we reach our limit i.e. 3 attempts
      CONTINUE

    END CATCH

  END
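
The procedure relies on dbo.udf_SPLIT to turn the CSV of banner IDs into a table with a [Value] column. The original function isn't shown in this article, but a minimal SQL 2005 compatible sketch of what such a function might look like (assuming integer IDs) is below:

CREATE FUNCTION [dbo].[udf_SPLIT]
(@List varchar(max), @Delimiter char(1))
RETURNS @Items TABLE ([Value] int)
AS
BEGIN

  DECLARE @Pos int, @Item varchar(20)

  -- append a trailing delimiter so the loop also picks up the last item
  SELECT @List = @List + @Delimiter

  WHILE LEN(@List) > 0
    BEGIN

      SELECT @Pos  = CHARINDEX(@Delimiter, @List)
      SELECT @Item = LTRIM(RTRIM(LEFT(@List, @Pos - 1)))

      -- only keep purely numeric items as the caller expects integer banner IDs
      IF LEN(@Item) > 0 AND @Item NOT LIKE '%[^0-9]%'
        INSERT INTO @Items ([Value]) SELECT CAST(@Item AS int)

      -- chop the processed item (and its delimiter) off the front of the list
      SELECT @List = SUBSTRING(@List, @Pos + 1, LEN(@List))

    END

  RETURN

END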

3 comments:

Unknown said...


Nice article!

This will really help people in the database community.
I have also prepared a small note on this: what is a timeout, lock, block and deadlock in a database system.

http://www.dbrnd.com/2016/04/database-theory-what-is-timeout-lock-block-and-deadlock-in-a-database-system/

Anonymous said...

I believe the 'begin try' should be before the 'begin transaction'.
rgds

Rob Reid said...

Why do you believe that?

This is taken verbatim from working code on a system that has been running for 7+ years.

The transaction is started, and then a Try/Catch is performed with the code that needs to be run inside the TRY section.

If the code runs OK and doesn't error, I can call the COMMIT before the BREAK without needing to test IF @@TRANCOUNT > 0, as I know the transaction was opened before the TRY started. Likewise, if we hit a lock or an error and jump to the CATCH section, I can ROLLBACK straight away before checking the error type.
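
For comparison, the ordering suggested above would look something like this rough sketch (not the code used in this article), where the ROLLBACK has to be guarded with @@TRANCOUNT because the failure might occur before the transaction is even opened:

BEGIN TRY

  BEGIN TRANSACTION

  -- the work that might be blocked or deadlocked goes here

  COMMIT

END TRY
BEGIN CATCH

  -- the error may have been raised before BEGIN TRANSACTION ran,
  -- so check there is an open transaction before rolling back
  IF @@TRANCOUNT > 0
    ROLLBACK

END CATCH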