On Jan 5, 2014, at 12:57 PM, Duncan <1i5t5.dun...@cox.net> wrote:

> 
> But I do very little snapshotting here, and as a result hadn't considered 
> the knock-on effect of 100K-plus extents in perhaps 1000 snapshots.

I wonder if this is an issue with snapshot-aware defrag? Some problems were 
fixed recently, but I'm not sure of the current status.

The OP's case involves Btrfs on LVM on (I think) md raid5. The mdadm default 
chunk size is 512KB, which with two data disks (e.g. a three-disk raid5) 
would make a 1MB full stripe. There are some optimizations for 
non-full-stripe reads and writes on raid5 (raid6 lacks them, so it takes a 
much bigger performance hit), but it might still be a factor here.

>  I 
> guess that's what's killing the defrag, however it's initiated.  The only 
> way to get rid of the problem, then, would be to move the file away and 
> then back, but that still leaves all those snapshots with the crazy 
> fragmentation.  Killing that would require either deleting all those 
> snapshots, or setting them writable and doing the same move-out, 
> move-back on each one!  OUCH, but I guess that's why it seems impossible 
> to deal with the fragmentation on these things, whether it's autodefrag, 
> named-file defrag, or the whole move-out-and-back routine, plus then 
> having to worry about all those snapshots.

It's why, in the short term, I'm using +C (nodatacow) from the get-go. And 
if I had more VM images and qcow2 snapshots, I would put them in a 
subvolume of their own so that they aren't snapshotted along with the 
rootfs. Using Btrfs within the VM, I still get the features I expect, and 
the performance is quite good.
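For the archives, here's roughly what that setup looks like (paths and 
file names are just examples):

    # a dedicated subvolume, so rootfs snapshots won't include it
    btrfs subvolume create /var/lib/libvirt/images

    # new files created here inherit nodatacow (+C)
    chattr +C /var/lib/libvirt/images
    lsattr -d /var/lib/libvirt/images

    # +C only takes reliably on new/empty files, so copy existing
    # images in (cp writes fresh extents) rather than mv'ing them
    cp disk0.qcow2 /var/lib/libvirt/images/

One caveat worth noting: nodatacow also disables checksumming and 
compression on those files, which is presumably part of why running Btrfs 
inside the guest to get those features back makes sense.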


Chris Murphy
