In a recent article, Issues with running DBCC SHRINKFILE on your data files, I wrote about why not to shrink a data file with DBCC SHRINKFILE. The issues with shrinking data files that I described don't apply to log files, which have a much different structure. Inside the transaction log is one of SQL Server's internal structures: the Virtual Log File (VLF). The log is divided into VLFs so that space can be allocated and reused once the transaction log entries in a VLF have been backed up. There's a detailed description of Virtual Log Files in the article How to determine SQL Server database transaction log usage.
To see how many VLFs a database contains, switch to the database and run the DBCC LOGINFO command, as seen here:
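The command itself is simple; assuming the sample database used later in this article, [scratch]:

```sql
USE [scratch];   -- switch to the database you want to inspect
GO
DBCC LOGINFO;    -- returns one row per Virtual Log File in the transaction log
GO
```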
```
FileId FileSize StartOffset SeqNo Status Parity CreateLSN
------ -------- ----------- ----- ------ ------ -------------------
2      253952   8192        31239 0      128    0
2      262144   262144      31243 0      128    0
2      262144   524288      31242 0      128    96000000012800004
. . rows omitted .
2      524288   22675456    31229 0      128    6733000000065600011
2      524288   23199744    31230 0      128    6733000000065600011
2      524288   23724032    31231 0      128    6733000000065600011
2      720896   24248320    31232 0      128    6733000000065600011

(71 row(s) affected)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
```
Issuing a BACKUP LOG WITH TRUNCATE_ONLY or BACKUP LOG WITH NO_LOG used to be a common solution for clearing out the transaction log so that it could be shrunk. It was never a great idea, and those options have since been removed from SQL Server.
The reason that it's a procedure at all is that sometimes it's necessary to take multiple transaction log backups, running SHRINKFILE after each one, before the log shrinks to the desired size. To accommodate this reality, ns_shrink_db_log runs in a loop. The stored procedure takes several parameters:
- @db_name – the name of the database whose log file you want to shrink
- @target_size_mb – the desired size of the transaction log, in MB
- @backup_location – location of the backup files
- @backup_file_name – the name for the backup files. As each attempt is made, the attempt number is appended to the file name along with the standard extension “.trn”.
- @maximum_attempts – governs how many times the procedure tries before giving up.
Here’s the code:
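In outline, the procedure follows this pattern. What follows is a sketch of the approach only, not the exact listing; the parameter validation and error handling in the real procedure are more thorough:

```sql
-- Sketch of the looping backup-then-shrink approach (an illustration,
-- not the actual ns_shrink_db_log listing).
CREATE PROCEDURE ns_shrink_db_log
    @db_name          sysname,
    @target_size_mb   int,
    @backup_location  nvarchar(260),
    @backup_file_name nvarchar(128),
    @maximum_attempts int
AS
BEGIN
    DECLARE @attempt int = 1,
            @current_size_mb int,
            @sql nvarchar(max);

    WHILE @attempt <= @maximum_attempts
    BEGIN
        -- Back up the log; the attempt number and ".trn" are appended
        -- to the file name so each backup file is distinct.
        SET @sql = N'BACKUP LOG ' + QUOTENAME(@db_name)
                 + N' TO DISK = ''' + @backup_location + N'\'
                 + @backup_file_name
                 + CAST(@attempt AS nvarchar(10)) + N'.trn''';
        EXEC (@sql);

        -- Try to shrink the log file to the target size.
        SET @sql = N'USE ' + QUOTENAME(@db_name) + N'; '
                 + N'DECLARE @log sysname; '
                 + N'SELECT @log = name FROM sys.database_files WHERE type = 1; '
                 + N'DBCC SHRINKFILE (@log, '
                 + CAST(@target_size_mb AS nvarchar(10)) + N');';
        EXEC (@sql);

        -- Stop as soon as the log reaches the target size.
        SELECT @current_size_mb = size * 8 / 1024   -- size is in 8 KB pages
        FROM sys.master_files
        WHERE database_id = DB_ID(@db_name) AND type = 1;

        IF @current_size_mb <= @target_size_mb BREAK;
        SET @attempt += 1;
    END
END
```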
The procedure is easy to execute.
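A call along these lines produced the sample run shown below; the @maximum_attempts value here is just an illustration:

```sql
EXEC ns_shrink_db_log
     @db_name          = 'scratch',
     @target_size_mb   = 2,
     @backup_location  = 'c:\temp',
     @backup_file_name = 'scratch_shirnk_backup',
     @maximum_attempts = 10;   -- assumed value; any reasonable cap works
```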
```
Starting size of [scratch].[scratch_log] is 16 MB
recovery model = FULL
BACKUP LOG [scratch] to disk = 'c:\temp\scratch_shirnk_backup1.trn'
Processed 1 pages for database 'scratch', file 'scratch_log' on file 5.
BACKUP LOG successfully processed 1 pages in 0.014 seconds (0.139 MB/sec).

DbId   FileId      CurrentSize MinimumSize UsedPages   EstimatedPages
------ ----------- ----------- ----------- ----------- --------------
7      2           256         128         256         128

DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Final size of [scratch].[scratch_log] is 2 MB
```
One thing to remember once you've run ns_shrink_db_log: those backup files are for real. They're on disk, and they should be included in your normal backup procedure. Most of my clients copy the .trn files off to another server for safekeeping. If possible, the other server should be in a different location to allow for disaster recovery.
Before going out and shrinking the logs in all your databases because it's so easy now, step back and reconsider why you'd ever shrink a log file. The log file grew for a reason. Logs grow because transactions modify the database and there's more transaction activity than the transaction log backups can clear. If the logs grew to that size once, isn't it pretty likely that they'll grow again? In most cases the answer is yes. If something unusual caused the logs to grow, you might have reason to shrink the log. More often, though, the logs will just grow back to their previous size.
One of the events that you'll want to avoid is autogrowth on the log file during the hours that users are using the database. Log growth is slow, and while it happens, transactions must wait for the file to grow and be initialized. Log files are not eligible for “Instant File Initialization” the way data files are.