On Fri, Oct 11, 2013 at 05:30:17PM -0300, Marcelo Tosatti wrote:
> On Fri, Oct 11, 2013 at 08:38:31AM +0300, Gleb Natapov wrote:
> > > n_max_mmu_pages is not a suitable limit to throttle freeing of pages
> > > via RCU (it's too large). If the free memory watermarks are smaller
> > > than n_max_mmu_pages for all guests, OOM is possible.
> > >
> > Ah, yes. I am not saying n_max_mmu_pages will throttle RCU, just saying
> > that slab size will be bound, so hopefully the shrinker will touch it
> > rarely.
> >
> > > > > > and, in addition, a page released to slab is immediately
> > > > > > available for allocation, no need to wait for a grace period.
> > > > >
> > > > > See the SLAB_DESTROY_BY_RCU comment at include/linux/slab.h.
> > > > >
> > > > This comment is exactly what I was referring to in the code you
> > > > quoted. Do you see anything problematic in what the comment
> > > > describes?
> > >
> > > "This delays freeing the SLAB page by a grace period, it does _NOT_
> > > delay object freeing." The page is not available for allocation.
> >
> > By "page" I mean "spt page", which is a slab object. So the "spt page"
> > AKA slab object will be available for allocation immediately.
>
> The object is reusable within that SLAB cache only, not the
> entire system (therefore it does not prevent OOM condition).
>
Since the object is immediately allocatable by the shadow paging code, the
number of SLAB objects is bounded by n_max_mmu_pages. If there is not enough
memory for n_max_mmu_pages, an OOM condition can happen anyway, since the
shadow paging code will usually have exactly n_max_mmu_pages allocated.
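[For readers following along: the semantics under discussion can be sketched as below. This is an illustrative kernel-style fragment, not code from KVM or from this thread; the cache name and init function are hypothetical, and the flag matches the 2013-era API (it was later renamed SLAB_TYPESAFE_BY_RCU).]

```c
/* Hypothetical cache for shadow page table pages ("spt pages"). */
static struct kmem_cache *spt_cache;

static int __init spt_cache_init(void)
{
	/*
	 * With SLAB_DESTROY_BY_RCU, kmem_cache_free() makes the object
	 * reusable *within this cache* immediately -- no grace period.
	 * Only the backing slab page waits for an RCU grace period
	 * before it can be returned to the page allocator, i.e. freed
	 * memory is not promptly available to the rest of the system.
	 */
	spt_cache = kmem_cache_create("spt_cache", PAGE_SIZE, PAGE_SIZE,
				      SLAB_DESTROY_BY_RCU, NULL);
	return spt_cache ? 0 : -ENOMEM;
}
```

This is exactly the distinction both sides are arguing over: object reuse is immediate but cache-local, while page-level freeing back to the system is RCU-deferred.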
> OK, perhaps it is useful to use SLAB_DESTROY_BY_RCU, but throttling
> is still necessary, as described in the RCU documentation.
>
I do not see what should be throttled if we use SLAB_DESTROY_BY_RCU. RCU
comes into play only when the SLAB cache is shrunk, and that happens far
from the KVM code.

--
			Gleb.