
Transaction log file VLF management


One of the important things I would recommend is not letting transaction log VLFs get too large, with 500MB being the recommended maximum, so that clearing/truncating of the log is not held up. The reason for this is that the transaction log can only clear a VLF when a transaction log backup takes place and that VLF is completely inactive. That means the larger the VLF, the more log it holds and the longer it takes to become fully inactive and be cleared.

The other important thing I would recommend is not having too many transaction log VLFs. Having an excessive number of VLFs can negatively impact all transaction log related activities, and you may even see degradation in performance when transaction log backups occur. Often, when transaction logs are not pre-allocated and/or when there have been a lot of autogrowths, the transaction log can become internally fragmented. Internally, your transaction log is broken down into smaller, more granular chunks called VLFs (Virtual Log Files), as explained below:

It is always recommended to give your .ldf file a sensible initial size instead of leaving it to autogrow in 10% increments.
There are a few considerations when you look at the size of the log file and the VLFs (Virtual Log Files) created each time the log size is increased.
Now, let's see how the log is divided into VLFs. Each "chunk" that is added is divided into VLFs at the time of the log growth (regardless of whether this is a manual or auto-grow addition), and it all depends on the size that is ADDED, not the size of the log itself. So, take a 10MB log that is extended to 50MB: here a 40MB chunk is being added. This 40MB chunk will be divided into 4 VLFs. Here's the breakdown by chunk size:
chunks less than 64MB = 4 VLFs
chunks of 64MB and less than 1GB = 8 VLFs
chunks of 1GB and larger = 16 VLFs
And, what this translates into is that a 64GB transaction log created in a single allocation would have 16 VLFs of 4GB each. As a result, the transaction log could only clear in units of 4GB of log information, and each 4GB VLF can only be cleared when it is completely inactive.
To have more ideally sized VLFs, consider creating the transaction log in 8GB chunks (first 8GB, then extend it to 16GB, then extend it to 24GB, and so forth) so that the number (and size) of your VLFs is more reasonable (in this case 512MB each), as sketched below.
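As a rough sketch of that growth pattern (the database name SalesDB and logical log file name SalesDB_log are hypothetical placeholders; substitute your own):

-- Hypothetical example: size the log to 24GB in three 8GB chunks,
-- so each chunk is split into 16 VLFs of 512MB each.
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, SIZE = 8192MB);
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, SIZE = 16384MB);
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, SIZE = 24576MB);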

In general, most transaction logs will only have 20 or 30 VLFs – even 50 could be reasonable depending on the total size of the transaction log.
However, in many cases what happens is that excessive autogrowths cause an excessive number of VLFs to be added – sometimes resulting in hundreds of VLFs – with exactly the negative impact on log-related activity and log backup performance described above.
To see how many VLFs you have, simply look at the number of rows returned by
DBCC LOGINFO
More on VLFs at http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
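If you want the count directly rather than eyeballing the result grid, one common sketch is to capture the DBCC LOGINFO output into a temp table and count the rows. The column list below matches SQL Server 2012 and later; older versions have no RecoveryUnitId column, so remove it there.

-- Capture DBCC LOGINFO output and count the rows (one row per VLF).
CREATE TABLE #vlf
(
RecoveryUnitId INT,
FileId INT,
FileSize BIGINT,
StartOffset BIGINT,
FSeqNo INT,
[Status] INT,
Parity TINYINT,
CreateLSN NUMERIC(25, 0)
);
INSERT INTO #vlf EXEC ('DBCC LOGINFO');
SELECT COUNT(*) AS VLFCount FROM #vlf;
DROP TABLE #vlf;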

The number of rows returned equals the number of VLFs your transaction log file has. If you have more than 50, I would recommend fixing it and adjusting your autogrowth so that it doesn't occur as frequently. To get rid of all of the excessive VLFs, follow these easy steps to shrink off the fragmented chunk and add a new, clean chunk to your transaction log:

1. Wait for an inactive time of day (ideally, it would be best to put the database into single-user mode first, or take an application outage) and then clear all transaction log activity through a regular transaction log backup. If you're using the simple recovery model then you don't need to do a log backup… Instead, just clear the transaction log by running a checkpoint.

BACKUP LOG databasename TO devicename
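For example, with hypothetical names (database SalesDB, a disk backup target):

-- Full or bulk-logged recovery model: clear log activity with a log backup.
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log.trn';

-- Simple recovery model: there is no log backup; run a checkpoint instead.
USE SalesDB;
CHECKPOINT;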

2. Shrink the log to as small a size as possible (TRUNCATEONLY):

DBCC SHRINKFILE(transactionloglogicalfilename, TRUNCATEONLY)

NOTE: if you don't know the logical filename of your transaction log, use sp_helpfile to list all of your database files.
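For example, assuming the logical log file name is SalesDB_log (a hypothetical name; check yours with sp_helpfile first):

USE SalesDB;
-- TRUNCATEONLY releases unused space at the end of the log file.
DBCC SHRINKFILE (SalesDB_log, TRUNCATEONLY);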
3. Alter the database to modify the transaction log file to the appropriate size – in one step:

ALTER DATABASE databasename
MODIFY FILE
(
NAME = transactionloglogicalfilename
, SIZE = newtotalsize
)
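As a concrete sketch of step 3 with the same hypothetical names, growing the log back to 8GB in one step (for a larger target size, grow in the 8GB chunks described earlier rather than one jump):

ALTER DATABASE SalesDB
MODIFY FILE
(
NAME = SalesDB_log
, SIZE = 8192MB
)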

Whenever a log file is created or increased in size, SQL Server performs zero initialization of the log file. The architectural reason for this is as follows:
Each log sector has parity bits stamped on it. When the log is first created, it is zero-initialized (with zero being an illegal log sector parity value). As the log is written, each sector has parity bits written into it. When the end of the log is reached and it wraps around to the start of the log file, the parity bits are flipped, so that overwritten log sectors have the opposite parity from when they were last written. When a crash occurs, log sectors are read and processed until a log sector with an out-of-sequence parity is found.

This entire process will not work if there’s already random data in the space used by the log file – some of the random data could just look like a valid set of parity bits and cause the recovery system to try to process a log sector full of garbage, leading to a suspect database.

This really is a good architectural reason why instant file initialization cannot be used for the transaction log, as opposed to data files.
