On 03/11/2015 12:18 PM, Stefan Hajnoczi wrote:
On Mon, Mar 02, 2015 at 06:20:00PM -0500, John Snow wrote:
+static void dirty_bitmap_truncate(BdrvDirtyBitmap *bitmap, uint64_t size)
+{
+    /* Should only be frozen during a block backup job, which should have
+     * blocked any resize actions. */
+    assert(!bdrv_dirty_bitmap_frozen(bitmap));
+    hbitmap_truncate(bitmap->bitmap, size);
+}
+
+void bdrv_dirty_bitmap_truncate(BlockDriverState *bs)
+{
+    BdrvDirtyBitmap *bitmap;
+    uint64_t size = bdrv_nb_sectors(bs);
+
+    QLIST_FOREACH(bitmap, &bs->dirty_bitmaps, list) {
+        if (bdrv_dirty_bitmap_frozen(bitmap)) {
+            continue;
+        }
+        dirty_bitmap_truncate(bitmap, size);

If you inline this function here then the discussion about assert() vs
skipping frozen bitmaps goes away.  Why is dirty_bitmap_truncate() a
function?


Symmetry with other bitmap functions.

+    }
+}

Why is bdrv_dirty_bitmap_truncate() a public API?  I expected this code
to be inline or called as a static function by bdrv_truncate().


OK, fixing that.

  /**
+ * hbitmap_truncate:
+ * @hb: The bitmap to change the size of.
+ * @size: The number of elements to change the bitmap to accommodate.
+ *
+ * Truncate or grow an existing bitmap to accommodate a new number of elements.
+ * This may invalidate existing HBitmapIterators.
+ */
+void hbitmap_truncate(HBitmap *hb, uint64_t size);

Please include a tests/test-hbitmap.c test case.

Interesting cases:
1. New size equals old size (odd but possible)
2. Growing less than sizeof(unsigned long)
3. Growing more than sizeof(unsigned long)
4. Shrinking less than sizeof(unsigned long)
5. Shrinking more than sizeof(unsigned long)


;_; OK, you're right...

+void hbitmap_truncate(HBitmap *hb, uint64_t size)
+{
+    bool truncate;
+    unsigned i;
+    uint64_t num_elements = size;
+    uint64_t old;
+
+    /* Size comes in as logical elements, adjust for granularity. */
+    size = (size + (1ULL << hb->granularity) - 1) >> hb->granularity;
+    assert(size <= ((uint64_t)1 << HBITMAP_LOG_MAX_SIZE));
+    truncate = size < hb->size;

Here "truncate" means "shrink".

"shrink" is a clearer name since the function name is already "truncate"
but that concept includes both increasing or decreasing size.


Yes, fair enough. I was relying on what I consider the colloquial meaning of "truncate".

+
+    if (size == hb->size) {
+        /* A hard day's work */
+        return;
+    }
+
+    hb->size = size;
+    for (i = HBITMAP_LEVELS; i-- > 0; ) {
+        size = MAX((size + BITS_PER_LONG - 1) >> BITS_PER_LEVEL, 1);
+        if (hb->sizes[i] == size) {
+            continue;
+        }
+        old = hb->sizes[i];
+        hb->sizes[i] = size;

I was wondering what sizes[] is used for.  Not a very useful struct
field since it's only needed by this rarely called function.


In future patches we end up recalculating the size of each array a lot, so I decided to cache it and stop duplicating that code over and over.

It comes up in migration and persistence a lot. It's easier to add it now than to let the duplication sneak in and then patch it out everywhere.

It would be clearer to calculate 'old' alongside 'size' each loop
iteration.  The sizes[] field can be dropped, 'old' becomes 'old_size',
and 'size' becomes 'new_size':

old_size = hb->size;
for (i = HBITMAP_LEVELS; i-- > 0; ) {
    old_size = MAX((old_size + BITS_PER_LONG - 1) >> BITS_PER_LEVEL, 1);
    new_size = MAX((new_size + BITS_PER_LONG - 1) >> BITS_PER_LEVEL, 1);

+        hb->levels[i] = g_realloc_n(hb->levels[i], size, sizeof(unsigned long));
+        if (!truncate) {
+            memset(&hb->levels[i][old], 0x00,
+                   (size - old) * sizeof(*hb->levels[i]));
+        }
+    }
+    assert(size == 1);
+
+    /* Clear out any "extra space" we may have that the user didn't request:
+     * it may have garbage data in it now. */
+    if (truncate) {
+        /* Due to granularity fuzziness, we may accidentally reset some of
+         * the last bits that are actually valid. So, record the current value,
+         * reset the "dead range," then re-set the one element we care about. */
+        uint64_t fix_count = (hb->size << hb->granularity) - num_elements;
+        if (fix_count) {
+            bool set = hbitmap_get(hb, num_elements - 1);
+            hbitmap_reset(hb, num_elements, fix_count);
+            if (set) {
+                hbitmap_set(hb, num_elements - 1, 1);
+            }
+        }

Calling hbitmap_reset() with an out-of-bounds index seems hacky to me.


It's the simplest way to re-use the existing code to recursively clear out any bits that are set that shouldn't be.

Why doesn't the for loop's if (!truncate) have an else statement to mask
no longer visible bits?  Maybe I'm missing why that's hard to do.


I just didn't see a reason to replicate the logic of what hbitmap_reset already does, so I didn't bother to try.
