On 23 February 2016 at 18:26, Marc MERLIN <m...@merlins.org> wrote:
>
> I'm currently doing a very slow defrag to see if it'll help (looks like
> it's going to take days).
> I'm doing this:
> for i in dir1 dir2 debian32 debian64 ubuntu dir4 ; do echo $i; time btrfs fi defragment -v -r $i; done
[snip]
> Also, should I try running defragment -r from cron from time to time?

I find the default threshold a bit low, so I defragment daily with
"-t 1m" to combat heavy random-write fragmentation.

Once in a while I defragment things like VM disk images with
"-t 128m", but I find higher thresholds mostly a waste of time.

YMMV.


> But, just to be clear, is there a way I missed to see how fragmented my
> filesystem is without running filefrag on millions of files and parsing
> the output?

I don't think so, and filefrag itself is slow on heavily fragmented
files because it has to call ioctl(FS_IOC_FIEMAP) repeatedly with a
buffer that only holds 292 fiemap_extents per call.
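
If you do end up brute-forcing it with filefrag, the parsing at least
is a one-liner; something along these lines (the mount point is just
an example, and it will still crawl over millions of files):

  # Average extents per file; filefrag prints "<file>: N extents found".
  find /mnt/data -xdev -type f -exec filefrag {} + 2>/dev/null |
    awk -F: '{ total += $NF + 0; files++ }
             END { if (files) printf "%d files, %.2f extents/file\n", files, total/files }'

It only gives a coarse average, but that's usually enough to tell
whether a defrag pass actually helped.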