Robert White posted on Tue, 09 Dec 2014 16:01:02 -0800 as excerpted:

> On 12/09/2014 03:48 PM, Robert White wrote:
>> On 12/09/2014 02:29 PM, Patrik Lundquist wrote:
>>> (stuff depicting a nearly full file system).
>>
>> Having taken another look at it all, I'd bet (there is not sufficient
>> information to be _sure_ from the output you've provided) that you
>> don't have the necessary 1Gb free on your disk slice to allocate
>> another data extent.

[snip most of both quote levels]

> Full filesystems always get into corner cases.

But from the content you snipped from his post, note this from btrfs fi 
show:

>>> Label: none  uuid: 770fe01d-6a45-42b9-912e-e8f8b413f6a4
>>>    Total devices 1 FS bytes used 1.35TiB
>>>    devid    1 size 2.73TiB used 1.36TiB path /dev/sdc1

The device is 2.73 TiB with only 1.36 TiB used.

That's over a TiB of entirely unallocated space, so allocating a single 
1 GiB data chunk shouldn't be a problem.


I'm sticking with my original hypothesis (assuming this is a continuation 
of the thread I think it is): something about the conversion from ext* 
didn't work correctly; most likely a file larger than the btrfs 1 GiB 
data-chunk size was left with an extent larger than that size as well.  
Btrfs balance can't do anything with such an extent, as it's larger than 
the native 1 GiB data-chunk size and balance alone doesn't know how to 
split it up.

The recursive btrfs defrag after deleting the saved ext* subvolume 
_should_ have split up any such > 1 GiB extents so balance could deal 
with them, but either it failed for some reason on at least one such 
file, or there's some other weird corner case going on, very likely 
something else to do with the conversion.
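
For reference, the defrag I have in mind is the recursive form run 
against the mountpoint.  A minimal sketch, with the mountpoint path 
purely hypothetical:

  # recursively defragment the whole tree, rewriting file data with
  # native btrfs allocation (extents can't exceed the 1 GiB chunk size)
  btrfs filesystem defragment -r /mnt/btrfs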

Patrik, assuming no btrfs snapshots yet, can you do a du --all 
--block-size=1M | sort -n (or similar), then take a look at all results 
over 1024 (1 GiB, since du was told to use 1 MiB blocks), and see whether 
it's reasonable to move all those files out of the filesystem and back?  
Assuming there aren't too many of them, the idea is to kill the copy in 
the filesystem by moving the files elsewhere, then move them back so 
they're recreated using native btrfs semantics -- no extents larger than 
the native btrfs data-chunk size of 1 GiB.
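
As a sketch of what I mean, with the mountpoint path hypothetical:

  # sizes in 1 MiB blocks, smallest first; keep entries of 1 GiB or more
  du --all --block-size=1M /mnt/btrfs | sort -n | awk '$1 >= 1024'

  # or, listing only regular files over 1 GiB:
  find /mnt/btrfs -xdev -type f -size +1G

Note that du --all lists directories as well as files, so ignore any 
directory entries in its output; it's the individual large files we're 
after.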

If you have lots of memory to work with, one method would be to create a 
tmpfs, /copy/ the files to tmpfs, then /move/ them back to a temporary 
tree on the btrfs, deleting the originals on btrfs only after the move 
back from tmpfs and a sync (or btrfs fi sync), so there's always a 
permanent copy should the machine crash and take the tmpfs down with it.  
After all the files have been processed and the originals deleted, you 
can move the contents of the temporary tree back to their original 
locations.
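
A rough sketch of that sequence for a single file, with all the paths 
and the tmpfs size hypothetical:

  # tmpfs sized to hold the largest file being processed
  mount -t tmpfs -o size=8G tmpfs /mnt/scratch

  # copy (not move) to tmpfs, so the on-btrfs original survives a crash
  cp -a /mnt/btrfs/data/bigfile /mnt/scratch/

  # move back onto the btrfs, into a temporary tree, recreating the
  # file with native btrfs extents
  mv /mnt/scratch/bigfile /mnt/btrfs/tmp-tree/

  # make sure the new copy is on permanent storage...
  btrfs filesystem sync /mnt/btrfs

  # ...before deleting the original
  rm /mnt/btrfs/data/bigfile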

That should ensure no more > 1 GiB file extents and will, I hope, get 
rid of the problem.  This workaround has been demonstrated to fix 
problems for other people with converted-from-ext* btrfs, generally where 
they had failed to run the defrag right after the conversion and by then 
had a bunch more data on the filesystem that they didn't want to have to 
defrag too.  Obviously it works best when there's only a handful of 
> 1 GiB files, however, and snapshots holding references to the affected 
files will prevent the delete from actually freeing the problematic 
extents.

With luck that'll allow a full 100% balance without error.  If not, at 
least it should eliminate the > 1 GiB file-extent possibility, and the 
focus can move to something else.
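
That is, something like this (mountpoint hypothetical again) should then 
run to completion:

  # a full, unfiltered balance, rewriting every chunk
  btrfs balance start /mnt/btrfs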

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
