SQL Server Buffer Pool Not Expanding Beyond 16GB

I thought I would share one of the problems I’ve been looking at lately. So far we have identified 3 instances where the buffer pool won’t expand beyond 16GB. Those instances are currently running SQL Server 2012 Enterprise Edition SP1 with CU5. We tried reproducing the issue on other instances at the same patch level but haven’t been able to so far. I will focus on one instance in particular for the rest of the post.

If we look at some of the core memory metrics on the instance from perfmon, here’s what we’re seeing (the same counters can also be pulled from T-SQL with the query after the list):

Under SQLServer:Memory Manager
Database Cache Memory (KB): 16 339 944
Free Memory (KB): 4 240 496
Target Server Memory (KB): 29 360 128
Total Server Memory (KB): 24 021 960
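As a side note, the same counters can be read from inside SQL Server through sys.dm_os_performance_counters; something along these lines works (on a named instance, object_name is prefixed with MSSQL$&lt;instance&gt; rather than SQLServer):

-- Memory Manager counters straight from the DMV; cntr_value is in KB
SELECT RTRIM(counter_name) AS counter_name,
       cntr_value AS value_kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN (N'Database Cache Memory (KB)',
                       N'Free Memory (KB)',
                       N'Target Server Memory (KB)',
                       N'Total Server Memory (KB)');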

The VM on which this instance is running has Dynamic Memory enabled in Hyper-V with a 20% buffer (which is respected as far as we can tell) and currently has 32.6GB dynamically allocated by Hyper-V. We’re seeing an average of 4-6GB of free memory inside the guest. There is no maximum configured in Hyper-V; instead we control the memory consumption through SQL Server’s min/max server memory settings (double-checked with the sys.configurations query after the list), which are:

Minimum: 28 672MB
Maximum: 44 032MB
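For completeness, a minimal way to confirm those values from T-SQL is sys.configurations (the values are in MB):

-- Current min/max server memory settings; value_in_use is what the instance is actually running with
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN (N'min server memory (MB)', N'max server memory (MB)');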

Two of the VMs that have the problem have Lock Pages In Memory enabled and one doesn’t (pending a reboot to enable it). (Thanks to mitcharoni on reddit for reminding me to share that detail.) The page file is auto-managed by Windows (the default) and has a current size of 4GB (the value recommended by Windows is 8GB).
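A quick way to confirm whether Lock Pages In Memory is actually in effect on a given instance (as opposed to merely granted and waiting on that reboot) is sys.dm_os_process_memory; if locked_page_allocations_kb is above zero, the memory manager is running with locked pages:

-- Non-zero locked_page_allocations_kb means the instance is using locked pages
SELECT locked_page_allocations_kb,
       large_page_allocations_kb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;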

The VMs are all running Windows Server 2012 on a Windows Server 2012 Hyper-V host.

Analyzing the contents of the buffer pool over time shows quite a bit of churn in there. We see pages from tables/indexes being loaded and evicted within a few minutes, which sometimes represents 5-10GB of churn.

[Figure: buffer pool churn by database, 2015-03-06]
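The per-database breakdown above comes from periodically sampling sys.dm_os_buffer_descriptors, roughly along these lines (the DMV walks the whole buffer pool, so we only run it at intervals):

-- Buffer pool usage per database; pages are 8 KB, so COUNT(*) / 128 gives MB
SELECT CASE database_id
         WHEN 32767 THEN N'ResourceDb'
         ELSE DB_NAME(database_id)
       END AS database_name,
       COUNT(*) / 128 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;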

Because of this, PLE is very low on the instance, with an average below 100 during the nightly ETLs and never going above 600 during the day. We often see drops from ~500 down to 50, most likely due to a large number of pages being pushed out in favor of a large query being run. We would expect this pressure to make the buffer pool expand to accommodate the demand, but it always stays under 16GB.
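PLE itself comes out of the same performance counter DMV; the per-NUMA-node values under Buffer Node are worth looking at too, since they can diverge from the instance-wide Buffer Manager number:

-- Instance-wide (Buffer Manager) and per-NUMA-node (Buffer Node) page life expectancy, in seconds
SELECT RTRIM(object_name) AS object_name,
       RTRIM(instance_name) AS node,
       cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy';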

We tried reproducing the issue on other instances with different patch levels but have been unable to so far. The way we tested this was by concurrently running, over and over, queries that generate a clustered index scan over very large tables. In all the working cases, SQL Server grew its buffer pool until the maximum memory setting was reached, and Hyper-V dynamically allocated additional memory to the VM to maintain its buffer.
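The test workload was nothing sophisticated; each session ran something along the lines of the sketch below against a table considerably larger than the buffer pool (dbo.LargeFactTable is a placeholder, not one of our actual tables):

-- Placeholder table name; INDEX(1) forces a scan of the clustered index so the
-- whole table gets pulled through the buffer pool. MAXDOP 1 keeps each session
-- on a single scheduler so several copies can run side by side.
SELECT COUNT_BIG(*)
FROM dbo.LargeFactTable WITH (INDEX(1))
OPTION (MAXDOP 1);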

Here’s a snapshot of the server’s stats in SQL Sentry for the last few days:

[Figure: SQL Sentry dashboard for the problematic server, 2015-03-06]

If anyone has an idea as to what might be happening here, let me know in the comments!

----- Update -----

I think I’ve found what was going on. The Startup RAM is set to 1 GB for the VM in Hyper-V, and based on the document here, that looks like our issue:

https://msdn.microsoft.com/en-us/library/hh372970.aspx

Setting Startup RAM to a lower value allows more virtual machines to start, therefore achieving better consolidation density overall. In SQL Server 2008 R2 and earlier the buffer pool can only grow to a maximum of 16 times the starting memory of the Windows instance. For example, SQL Server running inside a virtual machine configured with 1 GB for Startup RAM can never utilize more than 16 GB of memory for the buffer pool. Take this into consideration when you configure Startup Memory for a virtual machine. In all of our tests, Startup Memory for the virtual machines was configured at 1 GB.
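If that limit is what we’re hitting, the numbers line up: 16 x 1 GB of Startup RAM gives a 16 GB ceiling, and the Database Cache Memory counter above (16 339 944 KB, roughly 15.6 GB) is sitting right under it.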
