
SQL: Delete a Large Amount of Records

Published on Tuesday, January 28, 2014

I’ve got a setup where a UAG array logs to a remote SQL Server. I’m still wondering how other people handle the database size. I’ve got a very small SQL Agent job which runs once a week and deletes all records older than a month. The database is still pretty large: somewhere around 150 GB. A while ago the job started having trouble finishing successfully. We also saw the log file of the database grow to a worrying size: 160 GB. FYI: the database was in simple recovery mode.
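As a side note: if you want to verify which recovery model a database is using, the sys.databases catalog view exposes it. A quick check (the database name below is just an example, substitute your own):

-- Shows the recovery model (SIMPLE, FULL or BULK_LOGGED) of a database.
-- "UAGLogging" is a made-up example name.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'UAGLogging';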

Each time I started the job I could see the log file filling up from 0 GB all the way to 160 GB, at which point the job stopped because we had set 160 GB as a fixed limit. We did this (temporarily) to protect other log files on that volume. Here’s the query used in the cleanup job:

DELETE
FROM FirewallLog
WHERE logTime < DATEADD(MONTH, -1, GETDATE())
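If you want an idea of how many rows such a statement would touch before actually running it, a simple count with the same predicate is a cheap sanity check (just a sketch, using the same table as above):

-- Counts the rows the DELETE would remove, without deleting anything.
SELECT COUNT(*)
FROM FirewallLog
WHERE logTime < DATEADD(MONTH, -1, GETDATE())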

The DELETE itself is a very simple query. The problem lies in the fact that SQL Server performs it as one single transaction. I hope I get the terminology right, by the way. A database in simple recovery mode should reuse log space quite fast: if I read correctly, roughly every minute a checkpoint is issued, after which previously written parts of the log file can be overwritten and reused. The problem with my query is that it’s seen as one large transaction and thus has to be written to the log file in its entirety, so the space is not reused during execution. Here’s a script I found online which does the job far more gently. In my example WebProxyLog is the name of the table I’m targeting.

DECLARE @continue INT
DECLARE @rowcount INT
SET @continue = 1
WHILE @continue = 1
BEGIN
    --PRINT GETDATE()
    SET ROWCOUNT 10000 -- limit the DELETE below to 10,000 rows per pass
    BEGIN TRANSACTION
        DELETE FROM WebProxyLog WHERE logTime < DATEADD(MONTH, -1, GETDATE())
        SET @rowcount = @@rowcount -- rows the DELETE actually removed
    COMMIT
    --PRINT GETDATE()
    IF @rowcount = 0 -- nothing left to delete: stop looping
    BEGIN
        SET @continue = 0
    END
END

This script has the same outcome, but it deletes records in chunks of 10,000 at a time. That way each transaction is limited in size, and SQL Server can benefit from the checkpoints being issued and reuse log space. Using this approach my log usage stayed somewhere between 0 and 7 GB during the execution of this task. The ideal value for the maximum number of records deleted at once might differ depending on your situation. I tend to execute this during calmer moments of the day, so a bit of additional load is not that worrying.
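One small caveat: using SET ROWCOUNT to limit DELETE, INSERT and UPDATE statements is deprecated in recent SQL Server versions, where Microsoft recommends the TOP clause instead. A sketch of the same loop written that way (same assumptions about table name and retention as above):

DECLARE @rowcount INT = 1
WHILE @rowcount > 0
BEGIN
    BEGIN TRANSACTION
        -- Delete at most 10,000 matching rows per transaction.
        DELETE TOP (10000) FROM WebProxyLog
        WHERE logTime < DATEADD(MONTH, -1, GETDATE())
        SET @rowcount = @@ROWCOUNT
    COMMIT
END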

Bonus tip: you can easily poll the free space in the log files using this statement:

DBCC SQLPERF(LOGSPACE);
GO
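This returns one row per database, showing the log size in MB and the percentage of log space currently in use. If I’m not mistaken, on SQL Server 2012 and later you can also query a single database’s log usage through a DMV, run in the context of the database you care about:

-- Log size and used percentage for the current database only.
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;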
