On 28.09.2016 at 15:44, Holger Hoffstätte wrote:

>> Good idea, but it does not. I hope I can reproduce this with my
>> already existing test script, which I've now bumped to use a 37TB
>> partition and big files rather than a 15GB partition and small files.
>> If I can reproduce it, I can also check whether disabling compression
>> fixes this.
> 
> Great. Remember to undo the compression on existing files, or create
> them from scratch.

I create files from scratch - but currently I can't trigger the problem
with my test script. Even under production load it's not that easy: I
need to process 60-120 files before the error is triggered.
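
(If I did need to undo compression on already-written files instead,
one way would presumably be to mount without any compress option and
rewrite the data, roughly like this - device and mountpoint below are
made up:

  # mount without a compress option, then rewrite the extents so they
  # are stored uncompressed again
  mount /dev/sdb1 /mnt/storage
  btrfs filesystem defragment -r -v /mnt/storage
)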

>> No, that's not the case. Neither rsync nor in-place rewriting is
>> involved. I'm dumping differences directly from Ceph and putting them
>> on top of a base image, but only for 7 days, so the file is not being
>> fragmented endlessly. After 7 days a clean whole image is dumped.
> 
> That sounds sane, but it's also not at all how you described things to
> me previously ;) But OK.
I'm sorry. Maybe my English is just bad, you got me wrong, or I was
drunk *joke*. It never changed.

> How do you "dump differences directly from
> Ceph"? I'd assume the VM images are RBDs, but it sounds like you're
> somehow using overlayfs.

You can use rbd diff to export the differences between two snapshots,
so no overlayfs is involved.
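
Roughly like this - pool, image, snapshot and file names below are made
up for illustration, the actual production invocation may differ:

  # export only the blocks that changed between two snapshots
  rbd export-diff --from-snap snap-day1 rbd/vm-image@snap-day2 \
      /backup/vm-image.day2.diff
  # later, apply the diff on top of a previously restored base image
  rbd import-diff /backup/vm-image.day2.diff rbd/vm-image-restore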

> Anyway... something is off and you successfully cause it while other
> people apparently do not.
Sure, I know that. But I still don't want to switch to ZFS.

> Do you still use those nonstandard mount
> options with extremely long transaction flush times?
No, I removed commit=300 just to be sure it doesn't cause this issue.
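
For reference, this is the kind of thing I had in mind - device and
mountpoint below are made up; btrfs flushes transactions every 30
seconds by default, and commit=300 stretched that to 5 minutes:

  # what I used before (now removed): a 5-minute commit interval
  mount -o remount,commit=300 /dev/sdb1 /mnt/storage
  # back to the default 30-second interval
  mount -o remount,commit=30 /dev/sdb1 /mnt/storage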

Sure,
Stefan