SQL Server divides the total memory dedicated to it into two parts referred to as Buffer Pool and Memory to Leave.
Buffer Pool is the larger of the two and is used for most memory operations like stored procedure query plan generation, caching data and creating indexes.
Memory to Leave is dedicated to executing linked server queries, extended stored procedure calls and OLE Automation procedure calls. By default (on 32-bit SQL Server, where this split applies), Memory to Leave is 384 MB, and the remainder of the total memory available to SQL Server determines the size of the Buffer Pool.
An important indicator of buffer cache performance is the Buffer Cache Hit Ratio performance counter. It shows the percentage of data pages found in the buffer cache rather than read from disk. A value of 95% means pages were found in memory 95% of the time; the other 5% of requests required physical disk access. A value consistently below 90% indicates that the server needs more physical memory.
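As a rough sketch, the current value of this counter can be read straight from sys.dm_os_performance_counters. Ratio counters like this one have to be divided by their matching "base" counter to be meaningful:

```sql
-- Buffer Cache Hit Ratio as a percentage (sketch: ratio counters must
-- be divided by their matching "base" counter)
select 100.0 * a.cntr_value / b.cntr_value as [Buffer Cache Hit Ratio %]
from sys.dm_os_performance_counters a
join sys.dm_os_performance_counters b
  on b.object_name = a.object_name
 and b.counter_name = 'Buffer cache hit ratio base'
where a.counter_name = 'Buffer cache hit ratio'
  and a.object_name like '%Buffer Manager%';
```

Note this counter is an average over time since the instance started, so it can mask recent dips.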
One of the important counters that shows how long SQL Server is able to keep data cached in memory (measured in seconds) is SQL Server: Buffer Manager – Page Life Expectancy. Higher values are better. If it drops below 300 seconds (5 minutes), a DBA should start thinking about memory. The exception is situations like extremely large data warehouses (terabytes), where we simply can't cache more than a few minutes of data in memory no matter how much memory we add.
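The current Page Life Expectancy value can also be pulled with a query instead of Perfmon; a minimal sketch:

```sql
-- Current Page Life Expectancy in seconds
select cntr_value as [Page Life Expectancy (seconds)]
from sys.dm_os_performance_counters
where object_name like '%Buffer Manager%'
  and counter_name = 'Page life expectancy';
```

On NUMA hardware each node reports its own value, so this may return one row per node plus the instance-wide average.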
Memory in SQL Server makes up for a lot of things:
• Databases that don’t have good indexes designed
• CPUs that can’t build query plans fast enough
• Slow, cheap storage (like SATA hard drives and 1Gb iSCSI drives)
• Programs that needlessly retrieve too much data
SQL Server Caching Queries to Ease CPU Pressure
When end users request data, SQL Server has to compile an execution plan: a task list of which tables it needs to hit, what order it needs to hit them in, and when to do operations like sorts and calculations. Your end users write some pretty dang bad queries, and execution plans can end up complicated. Compiling an execution plan like this is hard work, and hard work means CPU time.
SQL Server will save that execution plan in memory, and if somebody sends in a query like that again later, it will be able to reuse this plan instead of building a new one.
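One way to see this reuse in action (a sketch, not the only approach) is to look at the plan cache through sys.dm_exec_cached_plans — a high usecounts value means a compiled plan is being reused instead of rebuilt:

```sql
-- Top cached plans by reuse count
select top (10) cp.usecounts, cp.objtype, cp.size_in_bytes, t.text
from sys.dm_exec_cached_plans cp
cross apply sys.dm_exec_sql_text(cp.plan_handle) t
order by cp.usecounts desc;
```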
To determine how much that's helping SQL Server performance, check out the Perfmon counters SQL Server: SQL Statistics – Batch Requests/sec and SQL Compilations/sec. Batch Requests/sec is the number of incoming queries, and SQL Compilations/sec is the number of new plans we had to build.
If Compilations/sec is more than 10-25% of Batch Requests/sec, you may be experiencing CPU pressure because SQL Server has to keep building new execution plans.
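Both counters can also be read with a query; a minimal sketch:

```sql
-- Both values are cumulative since SQL Server started, so compare two
-- samples taken a known interval apart to get true per-second rates
select counter_name, cntr_value
from sys.dm_os_performance_counters
where object_name like '%SQL Statistics%'
  and counter_name in ('Batch Requests/sec', 'SQL Compilations/sec');
```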
The DBCC MEMORYSTATUS command can be used to monitor memory usage on SQL Server.
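For example:

```sql
-- Snapshot of SQL Server memory usage, broken down by memory manager,
-- clerks and caches; output is a series of result sets
DBCC MEMORYSTATUS;
```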
SQL Server’s Memory Model
Memory nodes are the lowest level allocator, and you can view info about them in sys.dm_os_memory_nodes.
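A minimal example query against this DMV (column list trimmed to the basics):

```sql
-- Virtual address space reserved and committed per memory node
select memory_node_id,
       virtual_address_space_reserved_kb,
       virtual_address_space_committed_kb
from sys.dm_os_memory_nodes;
```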
Memory clerks are the next level, and they're used to access nodes. When something in SQL Server wants memory, it asks a clerk, and the clerk allocates memory from the nodes. There's a general MEMORYCLERK_SQLGENERAL, and then heavy memory users get their own clerks, like MEMORYCLERK_SQLBUFFERPOOL and MEMORYCLERK_SQLQUERYPLAN. You can check what they're using with the DMV sys.dm_os_memory_clerks.
Some useful queries using DMVs:
To check where SQL Server's memory is going, we can use the query below against sys.dm_os_memory_clerks.
select type,
sum(virtual_memory_reserved_kb) as [VM Reserved],
sum(virtual_memory_committed_kb) as [VM Committed],
sum(awe_allocated_kb) as [AWE Allocated],
sum(shared_memory_reserved_kb) as [SM Reserved],
sum(shared_memory_committed_kb) as [SM Committed],
sum(multi_pages_kb) as [MultiPage Allocator],
sum(single_pages_kb) as [SinglePage Allocator]
from sys.dm_os_memory_clerks
group by type
order by 8 desc
The query below returns queries currently waiting on memory grants, along with their text and query plans. It's a useful thing to check when a fast server suddenly becomes slow.
select MG.session_id, MG.requested_memory_kb, MG.granted_memory_kb, T.text, QP.query_plan
from sys.dm_exec_query_memory_grants MG
CROSS APPLY sys.dm_exec_sql_text(MG.sql_handle) AS T
CROSS APPLY sys.dm_exec_query_plan(MG.plan_handle) AS QP