What memory overcommit value are you running Nova with? As far as I'm aware, 
the scheduler looks at an instance's reservation rather than at how much 
memory QEMU is actually using when making a decision (but please correct me 
if I'm wrong on this point). If the hypervisor has 128 GB of memory, an 
existing instance has a 96 GB reservation, 16 GB is reserved via 
reserved_host_memory_mb, and ram_allocation_ratio is set to 1.0, then an 
attempt to launch an instance from a flavor with 32 GB of memory will fail 
RamFilter and the scheduler will not consider the host a valid placement 
target. (I am assuming you are still using FilterScheduler, as I know 
nothing about the new placement API or what parts of it do and don't work in 
Newton.)
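The arithmetic behind that example can be sketched as follows. This is a simplified model of what RamFilter checks, not Nova's actual code; the function name and numbers are just the hypothetical case from the paragraph above:

```python
# Simplified model of the RamFilter decision described above. Not Nova's
# actual implementation; numbers match the hypothetical example in the text.

def ram_filter_passes(total_mb, reserved_mb, committed_mb, ratio, requested_mb):
    """True if the host can accept an instance requesting requested_mb.

    committed_mb is the sum of flavor reservations already on the host;
    RamFilter never looks at the resident memory of the QEMU processes.
    """
    usable_mb = total_mb * ratio - reserved_mb  # capacity after overcommit/reserve
    return committed_mb + requested_mb <= usable_mb

# 128 GB host, 16 GB reserved_host_memory_mb, ram_allocation_ratio = 1.0,
# 96 GB already reserved, new 32 GB flavor -> filtered out:
print(ram_filter_passes(128 * 1024, 16 * 1024, 96 * 1024, 1.0, 32 * 1024))
# -> False (96 + 32 = 128 GB requested against 112 GB usable)
```

With ram_allocation_ratio raised to 1.5 the same request would pass, which is why the overcommit setting is the first thing to check here.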

As for why the memory wasn't automatically reclaimed: perhaps KVM only 
reclaims empty pages, and memory fragmentation in the guest prevented it 
from doing so. It may also not actively try to reclaim memory unless it 
comes under pressure, because finding empty pages and returning them to the 
host can be a somewhat time-consuming operation.
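You can inspect the gap from the host side with `virsh dommemstat <domain>`, which reports both the balloon target ("actual") and the host-side resident set ("rss"), in KiB, assuming the virtio balloon driver is loaded in the guest. A small parsing sketch; the sample output is made-up numbers echoing the 96 GB case, not captured from this host:

```python
# Sketch: comparing a guest's balloon target ("actual") with the host
# resident set ("rss") from `virsh dommemstat <domain>` output. The sample
# below is illustrative, not real data from this thread's host.

SAMPLE = """\
actual 100663296
swap_in 0
swap_out 0
major_fault 120
minor_fault 45231
unused 88080384
available 100663296
rss 100985856
"""

def parse_dommemstat(text):
    """Parse `virsh dommemstat` key/value lines (values are in KiB)."""
    return {key: int(val) for key, val in
            (line.split() for line in text.splitlines())}

stats = parse_dommemstat(SAMPLE)
guest_unused_gb = stats["unused"] / 1024 / 1024   # guest says this is free
host_rss_gb = stats["rss"] / 1024 / 1024          # host still holds this
print(f"guest unused: {guest_unused_gb:.0f} GiB, host rss: {host_rss_gb:.1f} GiB")
```

A large rss with a large "unused" is exactly the symptom described below: pages the guest has freed but the host never got back.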

From: jp.met...@planethoster.info 
Subject: Re: [Openstack-operators] Memory usage of guest vms, ballooning and 
nova

Hi,

This is indeed linux, CentOS 7 to be more precise, using qemu-kvm as 
hypervisor. The used ram was in the used column. While we have made 
adjustments by moving and resizing the specific guest that was using 96 
GB (verified in top), the ram usage is still fairly high for the amount 
of allocated ram.

Currently the ram usage looks like this:

               total        used        free      shared  buff/cache   available
Mem:            251G        190G         60G         42M        670M         60G
Swap:           952M        707M        245M


I have 188.5 GB of ram allocated to 22 instances on this node. I believe 
it's unrealistic to think that all 22 instances are caching or actively 
using all of their ram at this time.
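For what it's worth, the `free` output above can be checked programmatically. A quick sketch over the same numbers, confirming that buffers/cache cannot explain the usage on this node:

```python
# Parse the free(1) output quoted above and confirm that buff/cache (670M)
# is far too small to account for the 190G "used" -- that memory is
# resident in processes (the QEMU guests), not in the page cache.

FREE_OUTPUT = """\
              total        used        free      shared  buff/cache   available
Mem:           251G        190G         60G         42M        670M         60G
Swap:          952M        707M        245M
"""

def to_gib(value):
    """Convert a `free -h` style value like '670M' or '190G' to GiB."""
    units = {"M": 1 / 1024, "G": 1.0}
    return float(value[:-1]) * units[value[-1]]

header, mem = FREE_OUTPUT.splitlines()[:2]
fields = dict(zip(header.split(), mem.split()[1:]))

print(round(to_gib(fields["used"])))           # 190 GiB resident
print(round(to_gib(fields["buff/cache"]), 2))  # ~0.65 GiB of cache
```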

On 2017-03-23 13:07, Kris G. Lindgren wrote:
> Sorry for the super stupid question.
>
> But if this is linux are you sure that the memory is not actually being 
> consumed via buffers/cache?
>
> free -m
> free -m
>               total        used        free      shared  buff/cache   available
> Mem:         128751       27708        2796        4099       98246       96156
> Swap:          8191           0        8191
>
> Shows that, of 128 GB, 27 GB is used, but buffers/cache consumes 98 GB of ram.
>
> ___________________________________________________________________
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> On 3/23/17, 11:01 AM, "Jean-Philippe Methot" <jp.met...@planethoster.info> 
> wrote:
>
>      Hi,
>      
>      Lately, on my production openstack Newton setup, I've run into a
>      situation that defies my assumptions regarding memory management on
>      Openstack compute nodes and I've been looking for explanations.
>      Basically, we had a VM with a flavor that limited it to 96 GB of ram,
>      which, to be quite honest, we never thought we could ever reach. This is
>      a very important VM where we wanted to avoid running out of memory at
>      all costs. The VM itself generally uses about 12 GB of ram.
>      
>      We were surprised when we noticed yesterday that this VM, which has been
>      running for several months, was using all its 96 GB on the compute host.
>      Despite that, in the guest, the OS was indicating a memory usage of
>      about 12 GB. The only explanation I see is that at some point in
>      time, the host had to allocate all 96 GB of ram to the VM process
>      and never took it back. This prevented the creation of more guests
>      on the node, as it showed there wasn't enough memory left.
>      
>      Now, I was under the assumption that memory ballooning was integrated
>      into nova and that the amount of memory allocated to a specific guest
>      would deflate once that guest no longer needed it. After
>      verification, I've found blueprints for it, but I see no trace of any
>      implementation anywhere.
>      
>      I also notice that on most of our compute nodes, the amount of ram used
>      is much lower than the amount of ram allocated to VMs, which I do
>      believe is normal.
>      
>      So basically, my question is, how does openstack actually manage ram
>      allocation? Will it ever take back the unused ram of a guest process?
>      Can I force it to take back that ram?
>      
>      --
>      Jean-Philippe Méthot
>      Openstack system administrator
>      PlanetHoster inc.
>      www.planethoster.net
>      
>      
>      _______________________________________________
>      OpenStack-operators mailing list
>      OpenStack-operators@lists.openstack.org
>      http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>      
>

-- 
Jean-Philippe Méthot
Openstack system administrator
PlanetHoster inc.
www.planethoster.net



