On Sun, 16.04.17 14:30, Chris Murphy (li...@colorremedies.com) wrote:

> Hi,
> 
> This is on a Fedora 26 workstation (systemd-233-3.fc26.x86_64) that's
> maybe a couple weeks old and was clean installed. Drive is NVMe.
> 
> 
> # filefrag *
> system.journal: 9283 extents found
> user-1000.journal: 3437 extents found
> # lsattr
> ----------------C-- ./system.journal
> ----------------C-- ./user-1000.journal
> 
> I do manual snapshots before software updates, which means new writes
> to these files are subject to COW, but additional writes to the same
> extents are overwrites and are not COW because of chattr +C. I've used
> this same strategy for a long time, since systemd-journald defaults to
> +C for journal files; but I've not seen them get this fragmented this
> quickly.
>

IIRC NOCOW only has an effect if it is set right after the file is created,
before the first write to it. In other words, you cannot retroactively
make a file NOCOW. This means that if you in one way or another make a
COW copy of a file, the file is COW again and you'll get fragmentation.
That happens through reflinking (implicit or explicit; note that "cp"
reflinks by default), through snapshotting, or through something else.

I am not entirely sure what to recommend to you. Ultimately, whether
btrfs fragments or not is probably something you have to discuss with
the btrfs folks. We do try to make the best of btrfs by managing the
COW flag, but that only helps you to a limited degree, as
snapshots/reflinks will fuck things up anyway...

We also ask btrfs to defragment the file as soon as we mark it as
archived... I'd even be willing to extend that, and defragment the file
on other events too, for example if it ends up too heavily
fragmented. But last time I looked, btrfs didn't have any nice API for
that which had a clear focus on a single file only...
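For reference, btrfs-progs does ship a per-file defragment command that can be run by hand on archived journals; a hedged sketch (the glob assumes the default /var/log/journal persistent layout and archived-file naming, and is skipped if it matches nothing):

```shell
# Sketch: manually defragment archived journal files (btrfs only;
# path and glob assume the default journald persistent layout).
if command -v btrfs >/dev/null 2>&1; then
    for f in /var/log/journal/*/*@*.journal; do
        [ -e "$f" ] || continue                       # glob may match nothing
        btrfs filesystem defragment "$f" 2>/dev/null || true
    done
    msg="defrag pass complete"
else
    msg="btrfs-progs not installed; skipping"
fi
echo "$msg"
```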

Lennart

-- 
Lennart Poettering, Red Hat
_______________________________________________
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel
