On 2019-09-11 7:21 p.m., webmas...@zedlx.com wrote:

> 
> For example, let's examine the typical home user. If he is using btrfs,
> it means he probably wants snapshots of his data. And, after a few
> snapshots, his data is fragmented, and the current defrag can't help
> because it does a terrible job in this particular case.
> 

I shouldn't be replying to your provocative posts, but this is just
nonsense.

Not to say that defragmentation can't be better or smarter, but it
happens to work very well for typical use.

This sounds like you're implying that snapshots fragment data... can you
explain that?  As far as I know, snapshotting has nothing to do with
fragmentation of data.  All data is COW, and all files that are subject
to random read/write will be fragmented, with or without snapshots.
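
If you want to see that for yourself, filefrag will report the extent
count of a file on btrfs (the path below is only an example); a VM image
or database file that takes random writes will show plenty of extents
whether or not any snapshot exists:

    # count the extents of a randomly-written file (example path)
    filefrag /var/lib/libvirt/images/test.qcow2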

And running defrag on your system regularly works just fine.  There's a
little space overhead if you are taking regular snapshots (say, hourly
snapshots with snapper).  If you have more control/liberty over when
you take your snapshots, ideally you would defrag before taking the
snapshot/reflink copy.  Again, this only matters for files that are
subject to fragmentation in the first place.
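
To make that concrete, roughly what I mean (the paths and snapshot
naming are only examples):

    # defragment the subvolume first...
    btrfs filesystem defragment -r /home
    # ...then take the read-only snapshot, so it shares the freshly
    # defragmented extents instead of pinning down the fragmented layout
    btrfs subvolume snapshot -r /home /snapshots/home-$(date +%F)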

I suspect that if you actually tried using btrfs defrag, you would find
you are making a mountain out of a molehill. There are lots of far more
important problems to solve.
