On Mon 18-03-13 13:44:05, KY Srinivasan wrote:
> 
> 
> > -----Original Message-----
> > From: Michal Hocko [mailto:mho...@suse.cz]
> > Sent: Monday, March 18, 2013 6:53 AM
> > To: KY Srinivasan
> > Cc: gre...@linuxfoundation.org; linux-kernel@vger.kernel.org;
> > de...@linuxdriverproject.org; o...@aepfle.de; a...@canonical.com;
> > a...@firstfloor.org; a...@linux-foundation.org; linux...@kvack.org;
> > kamezawa.hiroy...@gmail.com; han...@cmpxchg.org; ying...@google.com
> > Subject: Re: [PATCH 2/2] Drivers: hv: balloon: Support 2M page
> > allocations for ballooning
> > 
> > On Sat 16-03-13 14:42:05, K. Y. Srinivasan wrote:
> > > While ballooning memory out of the guest, attempt 2M allocations first.
> > > If 2M allocations fail, then go for 4K allocations. In cases where we
> > > have performed 2M allocations, split this 2M page so that we can free this
> > > page at 4K granularity (when the host returns the memory).
> > 
> > Maybe I am missing something, but what is the advantage of a 2M
> > allocation when you split it up immediately, so you are not using it
> > as a huge page?
> 
> The Hyper-V ballooning protocol specifies the pages being ballooned as
> page ranges (start_pfn : number_of_pfns). So, when the guest balloon
> is inflating and I am able to allocate 2M pages, I can represent 512
> contiguous pages in a single 64-bit entry, which makes the ballooning
> operation that much more efficient. The reason I split the page is
> that the host does not guarantee that, when it returns the memory to
> the guest, it will return it in any particular granularity, so I have
> to be able to free this memory at 4K granularity. This is the corner
> case I have to handle.
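
For concreteness, a minimal sketch of what such an inflate path could
look like (the function name, GFP flags and bitfield layout below are
illustrative assumptions, not the actual hv_balloon code): try the 2M
allocation first, split it with split_page() so the individual 4K pages
can be freed later when the host returns memory in arbitrary
granularity, and describe the whole run with one start_pfn/page_cnt
entry.

/*
 * Illustrative sketch only, not the hv_balloon implementation.
 */
#include <linux/gfp.h>
#include <linux/mm.h>

/* 2M on x86: 2M / 4K = 512 pages, i.e. order 9. */
#define BALLOON_2M_ORDER	9

/*
 * One 64-bit entry describing a run of ballooned pages; field names
 * and widths here are assumptions for illustration.
 */
union balloon_page_range {
	struct {
		u64 start_pfn:40;
		u64 page_cnt:24;
	};
	u64 raw;
};

static struct page *balloon_alloc_chunk(unsigned int *order)
{
	const gfp_t flags = GFP_HIGHUSER | __GFP_NORETRY |
			    __GFP_NOMEMALLOC | __GFP_NOWARN;
	struct page *pg;

	/* Try to grab 512 contiguous pages in one shot. */
	*order = BALLOON_2M_ORDER;
	pg = alloc_pages(flags, *order);
	if (pg) {
		/*
		 * Split the high-order page immediately so each 4K page
		 * can be freed on its own later: the host may hand the
		 * memory back in arbitrary granularity.
		 */
		split_page(pg, *order);
		return pg;
	}

	/* 2M allocation failed; fall back to a single 4K page. */
	*order = 0;
	return alloc_pages(flags, *order);
}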

Thanks for the clarification. I think this information would be valuable
in the changelog.
-- 
Michal Hocko
SUSE Labs