I've run into a few systems where we start getting immediate ENOSPC
errors on any operation as soon as we update to a recent kernel.
These are all small filesystems (not MIXED), which should have had
plenty of free metadata space but no unallocated chunks.

I was able to trace this back to commit ae2e47288165 (Btrfs: change
how we calculate the global block rsv).  This commit changes the
calculation from a heuristic guess of how much reserve space we need
to the sum of the bytes_used reported by the extent, csum, and tree
root items.  But on the
affected systems, the extent tree root item claimed to be very large
-- even larger than the total amount of metadata used.  This caused
the global reserve to crowd out any new metadata allocation, resulting
in ENOSPC errors.

I have no idea how, when, or why the extent tree root item's bytes_used
value got so out of whack; and I haven't been able to reproduce it so
far on a fresh system.  But in any case, it looks like we'll need to
sanity-check it.  My main question is, what is a reasonable way to
sanity-check it?  The old code did:

    if (num_bytes * 3 > meta_used)
            num_bytes = div_u64(meta_used, 3);

But adding that breaks generic/333 again.  Changing 3 to 2 works, but
then I'm just guessing.  What would be appropriate?

-Justin