On 07/28/2016 02:51 PM, Michael S. Tsirkin wrote:
>> > If 1MB is too big, how about 512K, or 256K?  32K seems too small.
>> > 
> It's only small because it makes you rescan the free list.
> So maybe you should do something else.
> I looked at it a bit. Instead of scanning the free list, how about
> scanning actual page structures? If a page is unused, pass it to the host.
> Solves the problem of rescanning multiple times, does it not?

FWIW, I think the new data structure needs some work.

Before, we had a potentially very long list of 4k areas.  Now, we've
just got a very large bitmap.  The bitmap might not even be very dense
if we are ballooning relatively few things.

Can I suggest an alternate scheme?  I think you actually need a hybrid
scheme that has bitmaps but also allows more flexibility in the pfn
ranges.  The payload could be a number of records each containing 3 things:

        pfn, page order, length of bitmap (maybe in powers of 2)

Each record is followed by the bitmap.  Or, if the bitmap length is 0,
immediately followed by another record.  A bitmap length of 0 implies a
bitmap with the least significant bit set.  Page order specifies how
many pages each bit represents.
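
Just to make it concrete, a record header along those lines might look
something like this.  The field widths are made up for illustration, not
a proposal for the actual virtio interface, and a real wire format would
probably use explicit shifts/masks rather than bitfields since bitfield
layout is compiler-dependent:

#include <linux/types.h>

struct free_page_record {
	__u64 pfn        : 52;	/* first pfn covered by this record */
	__u64 order      : 6;	/* each bitmap bit covers 2^order pages */
	__u64 bitmap_len : 6;	/* length of the bitmap that follows,
				 * maybe as log2 of the bit count; 0 means
				 * no bitmap, lowest bit implied set */
};
/* bitmap data (if bitmap_len != 0) follows the record directly */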

This scheme could easily encode the new data structure you are proposing
by just setting pfn=0, order=0, and a very long bitmap length.  But, it
could handle sparse bitmaps much better *and* represent large pages much
more efficiently.
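
For example (made-up numbers, using the hypothetical layout above): a
free 2MB huge page can go out as a single record with no bitmap at all,
while a sparse scatter of 4k pages in the same 2MB region becomes one
record with a 512-bit bitmap, instead of having to cover the whole pfn
space with one flat bitmap:

/* one free 2MB huge page starting at pfn 0x100000: no bitmap needed */
struct free_page_record huge = {
	.pfn        = 0x100000,
	.order      = 9,	/* one bit covers 2^9 = 512 pages */
	.bitmap_len = 0,	/* no bitmap, lowest bit implied */
};

/* sparse 4k pages within the same 2MB region: one 512-bit bitmap */
struct free_page_record sparse = {
	.pfn        = 0x100000,
	.order      = 0,	/* one bit per 4k page */
	.bitmap_len = 9,	/* 2^9 = 512 bits of bitmap follow */
};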

There's plenty of space to fit a whole record in 64 bits.
