Michał Sokołowski posted on Thu, 31 Aug 2017 16:38:14 +0200 as excerpted:

> On 08/31/2017 01:18 PM, Austin S. Hemmelgarn wrote:
>> [...]
>>> Any hint here?
>> Having compression enabled causes no issues with defrag and balance.
>> There appears to be a prevalent belief however that defrag is pointless
>> if you're using compression, probably because some versions of
>> `filefrag` don't report compressed extents properly (they list each
>> 128k compressed unit as one extent, which is wrong).
> Is there another tool to verify fragments number of given file when
> using compression?

AFAIK there isn't an official one, though someone posted a script (python, 
IIRC) at one point and may repost it here.

You can actually get the information needed from filefrag -v (that's what 
the script parses), but it takes a bit more effort than usual, whether 
scripted or done by hand, to convert the results into real fragmentation 
numbers.

The problem is that btrfs compression works in 128 KiB blocks, and 
filefrag reports each of those as a separate extent.  The extra effort 
involves checking the physical addresses of the reported 128 KiB blocks 
to see whether they are actually contiguous, that is, whether one starts 
just after the previous one ends.  If so, the file isn't actually 
fragmented at that point.  But if the addresses aren't contiguous, 
there's real fragmentation there.
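
To illustrate, here's a quick sketch of the idea (mine, typed up just 
now, not the script that was posted, and it assumes the usual filefrag -v 
output format, with physical offsets given in filesystem blocks):

#!/usr/bin/env python3
# Count "real" fragments of a file by merging the 128-KiB compressed
# extents that filefrag -v reports whenever they're physically
# contiguous on disk.
import re
import subprocess
import sys

# Extent rows in filefrag -v output look like:
#    0:        0..      31:      34816..     34847:     32: ...
# (all offsets are in filesystem blocks)
ROW = re.compile(r"^\s*\d+:\s+\d+\.\.\s*\d+:\s+(\d+)\.\.\s*(\d+):")

def real_fragments(path):
    out = subprocess.check_output(["filefrag", "-v", path],
                                  universal_newlines=True)
    fragments = 0
    prev_end = None
    for line in out.splitlines():
        m = ROW.match(line)
        if not m:
            continue
        start, end = int(m.group(1)), int(m.group(2))
        # A new fragment only begins where an extent does NOT
        # start right after the previous one ends.
        if prev_end is None or start != prev_end + 1:
            fragments += 1
        prev_end = end
    return fragments

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path + ":", real_fragments(path), "fragment(s)")

Run that against a compressed file and you should get a number well below 
filefrag's raw extent count whenever the 128 KiB blocks happen to sit 
back to back on disk.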

I don't personally worry too much about it here, for two reasons.  First, 
I /always/ run with the autodefrag mount option, which keeps 
fragmentation manageable in any case[1], and second, I'm on ssd, where 
the effects of fragmentation aren't as pronounced.  (On spinning rust 
it's generally the seek times that dominate.  On ssds seek time is 
effectively zero, but each fragment still costs an extra I/O operation, 
so there's an IOPS cost.)

So while I've run filefrag -v and looked at the results a few times out 
of curiosity, and indeed could see reported fragmentation that was 
actually contiguous, it was simply a curiosity to me, thus my not 
grabbing that script and putting it to immediate use.

---
[1] AFAIK autodefrag checks fragmentation on writes, and rewrites 16 MiB 
blocks if necessary.  If, like me, you always run it from the moment you 
start putting data on the filesystem, that should work pretty well.  If 
however you haven't been running it or doing manual defrag, it may take 
awhile to "catch up", because autodefrag only triggers on writes and the 
free space may itself be fragmented enough that there are no contiguous 
16 MiB blocks to write into.  And of course it won't defrag anything 
that's never written to again but is often reread, so that existing 
fragmentation remains an issue.
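
(For anyone who hasn't used it: autodefrag is just a mount option, so it 
goes in fstab with the rest of them.  The UUID and mountpoint below are 
placeholders, obviously:

UUID=<your-fs-uuid>  /data  btrfs  defaults,autodefrag  0  0

Or, to turn it on for an already-mounted filesystem:

mount -o remount,autodefrag /data

Remounting only affects writes from that point on, per the catch-up 
caveat above.)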

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
