Patrik Lundquist posted on Wed, 24 Jun 2015 14:05:57 +0200 as excerpted:

> On 24 June 2015 at 12:46, Duncan <1i5t5.dun...@cox.net> wrote:
>> Patrik Lundquist posted on Wed, 24 Jun 2015 10:28:09 +0200 as
>> excerpted:
>>
>> AFAIK, it's set huge to defrag everything,
> 
> It's set to 256K by default.

What I meant is that, AFAIK, you set it huge to defrag everything...

>> Assuming "set a huge -t to defrag to the maximum extent possible" is
>> correct, that means -t 1G should be exactly as effective as -t 1T...
> 
> 1G is actually more effective because 1T overflows the uint32
> extent_thresh field, so 1T, 0, and 256K are currently the same.

Then the manpage needs some work (in addition to the more serious 
ambiguity over whether 1 or 1G means defrag everything), since it 
mentions suffixes up to petabyte (P) without any indication that 
setting anything that large won't work as expected.

If it's uint32-limited, either drop everything above that limit from 
both the documentation and the code, or clamp anything above it to 3G 
(your next paragraph) or whatever.

> 3G is the largest value that works with -t as expected (disregarding the
> man page) and is easy to type.
> 
> 
>> But btrfs or ext4, 31 extents ideal or a single extent ideal, 150
>> extents still indicates at least some remaining fragmentation.
> 
> I gave it another shot but I've now got 154 extents instead. :-)

Is it possible there are simply no gig-sized free-space holes left in 
the filesystem allocation, so it /can't/ defrag any further than that 
because there's nowhere to allocate whole-gig data chunks at a time?
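
(Tangent: however you're counting those extents -- filefrag, 
presumably -- the number can be sanity-checked directly with the 
FIEMAP ioctl that filefrag uses under the hood.  A minimal C sketch, 
error handling trimmed; note filefrag post-processes the raw list a 
bit, so the counts may not match exactly:)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        struct fiemap fm;
        memset(&fm, 0, sizeof(fm));
        fm.fm_length = ~0ULL;              /* map the whole file */
        fm.fm_flags = FIEMAP_FLAG_SYNC;    /* flush before mapping */
        fm.fm_extent_count = 0;            /* 0 = just report the count */

        if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
                perror("FIEMAP");
                return 1;
        }

        printf("%u extents\n", fm.fm_mapped_extents);
        return 0;
}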

Which brings up a more general defrag functionality question.  For 
multi-gig files, does btrfs fi defrag allocate fresh data chunks in 
order to create the largest extents possible (possibly after filling 
the remainder of the original first chunk), thereby increasing data 
chunk allocation before fully using currently allocated chunks?  Or 
does it first look for the biggest extents available within the 
currently allocated chunks, and only allocate new chunks once all the 
current allocation is full?

Obviously, if it uses up current allocations first, that could explain 
your problem.  OTOH, if either defrag or the general allocation 
strategy favors new chunks for large extents when necessary, that 
would explain the "deoptimization" some people report from running 
balance.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
