Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-17 Thread Christoph Anton Mitterer
[I'm combining the messages again, since I feel a bit bad when I write so many mails to the list ;) ] But from my side, feel free to split up as much as you want (perhaps not single characters or so ;) ) On Thu, 2015-12-17 at 04:06 +, Duncan wrote: > Just to mention here, that I said

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-16 Thread Christoph Anton Mitterer
On Wed, 2015-12-09 at 16:36 +, Duncan wrote: > But... as I've pointed out in other replies, in many cases including > this > specific one (bittorrent), applications have already had to develop > their > own integrity management features Well let's move discussion upon that into the "dear

Re: btrfs: poor performance on deleting many large files

2015-12-16 Thread Christoph Anton Mitterer
On Sun, 2015-12-13 at 07:10 +, Duncan wrote: > > So you basically mean that ro snapshots won't have their atime > > updated > > even without noatime? > > Well I guess that was anyway the recent behaviour of Linux > > filesystems, > > and only very old UNIX systems updated the atime even when

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-16 Thread Christoph Anton Mitterer
On Mon, 2015-12-14 at 10:51 +, Duncan wrote: > > AFAIU, the one that gets fragmented then is the snapshot, right, > > and the > > "original" will stay in place where it was? (Which is of course > > good, > > because one probably marked it nodatacow, to avoid that > > fragmentation > > problem

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-16 Thread Kai Krakow
On Wed, 9 Dec 2015 13:36:01 +0000 (UTC), Duncan <1i5t5.dun...@cox.net> wrote: > >> > 4) Duncan mentioned that defrag (and I guess that's also for > >> > auto- defrag) isn't ref-link aware... > >> > Isn't that somehow a complete showstopper? > > >> It is, but the one attempt at dealing with it
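For reference, manual defragmentation with the btrfs-progs of this era looks like the sketch below; the caveat raised in this thread is that defrag is not reflink-aware, so running it on snapshotted data unshares extents. Paths are hypothetical, and the commands need root on a btrfs filesystem:

```shell
# Recursively defragment a directory tree (hypothetical path).
btrfs filesystem defragment -r /home/user/data

# Optionally recompress while rewriting (-czlib or -clzo at the time):
btrfs filesystem defragment -r -clzo /home/user/data

# Caution (per this thread): defrag is not reflink-aware, so extents
# shared with snapshots get duplicated rather than defragmented in
# place, which can sharply increase disk usage.
```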

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-16 Thread Duncan
Christoph Anton Mitterer posted on Wed, 16 Dec 2015 22:59:01 +0100 as excerpted: >> And there very well might be such a tool... five or ten years down the >> road when btrfs is much more mature and generally stabilized, well >> beyond the "still maturing and stabilizing" status of the moment. >

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-16 Thread Duncan
Christoph Anton Mitterer posted on Wed, 16 Dec 2015 22:59:01 +0100 as excerpted: > I'm kinda curious what free space fragmentation actually means here. > > Is it simply like this:
> +----------+-----+---+--------+
> |    F     |  D  | F |   D    |
> +----------+-----+---+--------+
> Where D is data

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-16 Thread Duncan
Christoph Anton Mitterer posted on Wed, 16 Dec 2015 22:59:01 +0100 as excerpted: > I'm a bit unsure how to read filefrag's output... (even in the > uncompressed case). > What would it show me if there was fragmentation? /path/to/file: 18 extents found It tells you the number of extents found.
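filefrag (from e2fsprogs) prints one line per file in the form `<path>: <N> extents found`; a single extent means no fragmentation. A small sketch of pulling the extent count out of such a line — the sample line is the one quoted above, not real output from a live run:

```shell
# Sample filefrag output line (taken from the post above).
line="/path/to/file: 18 extents found"

# Field 2 of the line is the extent count; many extents on a
# non-compressed file suggests fragmentation.
extents=$(printf '%s\n' "$line" | awk '{print $2}')
echo "$extents extents"
```

In practice you would run `filefrag` directly on the files in question, e.g. `filefrag /path/to/*.img`, and compare counts before and after a defrag.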

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-16 Thread Duncan
Christoph Anton Mitterer posted on Wed, 16 Dec 2015 22:59:01 +0100 as excerpted: >> he obviously didn't think thru the fact that compression MUST be a >> rewrite, thereby breaking snapshot reflinks, even were normal >> non-compression defrag to be snapshot aware, because compression >>

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-16 Thread Duncan
Christoph Anton Mitterer posted on Wed, 16 Dec 2015 22:59:01 +0100 as excerpted: >> It's certainly in quite a few on-list posts over the years > okay,.. in other words: no ;-) > scattered-over-the-years list posts don't count as documentation :P =:^) -- Duncan - List replies preferred. No

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-16 Thread Duncan
Christoph Anton Mitterer posted on Wed, 16 Dec 2015 22:59:01 +0100 as excerpted: > On Wed, 2015-12-09 at 16:36 +, Duncan wrote: >> But... as I've pointed out in other replies, in many cases including >> this specific one (bittorrent), applications have already had to >> develop their own

Re: btrfs: poor performance on deleting many large files

2015-12-16 Thread Duncan
Lionel Bouton posted on Tue, 15 Dec 2015 03:38:33 +0100 as excerpted: > I just checked: this has only been made crystal-clear in the latest > man-pages version 4.03 released 10 days ago. > > The mount(8) page of Gentoo's current stable man-pages (4.02 release in > August) which is installed on my

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Lionel Bouton
On 15/12/2015 02:49, Duncan wrote: > Christoph Anton Mitterer posted on Tue, 15 Dec 2015 00:25:05 +0100 as > excerpted: > >> On Mon, 2015-12-14 at 22:30 +0100, Lionel Bouton wrote: >> >>> I use noatime and nodiratime >> FYI: noatime implies nodiratime :-) > Was going to post that myself. Is

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Duncan
Christoph Anton Mitterer posted on Tue, 15 Dec 2015 00:25:05 +0100 as excerpted: > On Mon, 2015-12-14 at 22:30 +0100, Lionel Bouton wrote: > >> I use noatime and nodiratime > FYI: noatime implies nodiratime :-) Was going to post that myself. Is there some reason you: a) use nodiratime when
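Since noatime implies nodiratime, as pointed out above, listing both in fstab is redundant. A hypothetical fstab entry for the snapshot-heavy setups discussed in this thread (the UUID and mount point are placeholders):

```
# /etc/fstab -- noatime alone is sufficient; nodiratime is implied.
# UUID and mount point are placeholders.
UUID=0123abcd-placeholder  /home  btrfs  defaults,noatime  0  0
```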

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Duncan
Austin S. Hemmelgarn posted on Mon, 14 Dec 2015 15:27:11 -0500 as excerpted: > FWIW, both Duncan and I have our own copy of the sources patched to > default to noatime, and I know a number of embedded Linux developers who > do likewise, and I've even heard talk in the past of some distributions >

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-14 Thread Duncan
Christoph Anton Mitterer posted on Mon, 14 Dec 2015 02:44:55 +0100 as excerpted: > Two more on these: > > On Thu, 2015-11-26 at 00:33 +, Hugo Mills wrote: >> 3) When I would actually disable datacow for e.g. a subvolume that >> > holds VMs or DBs... what are all the implications? >> After

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-14 Thread Duncan
Christoph Anton Mitterer posted on Mon, 14 Dec 2015 03:46:01 +0100 as excerpted: >> Same here.  In fact, my most anticipated feature is N-way-mirroring, > Hmm ... not totally sure about that... > AFAIU, N-way-mirroring is what the currently wrongly called > RAID1 is in btrfs, i.e.

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Chris Murphy
On Mon, Dec 14, 2015 at 7:24 AM, Austin S. Hemmelgarn wrote: > > If you have software that actually depends on atimes, then that software is > broken (and yes, I even feel this way about Mutt). The way atimes are > implemented on most systems breaks the semantics that

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Austin S. Hemmelgarn
On 2015-12-12 17:15, Christoph Anton Mitterer wrote: On Sat, 2015-11-28 at 06:49 +, Duncan wrote: Christoph Anton Mitterer posted on Sat, 28 Nov 2015 04:57:05 +0100 as excerpted: Still, specifically for snapshots that's a bit unhandy, as one typically doesn't mount each of them... one

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Lionel Bouton
On 14/12/2015 21:27, Austin S. Hemmelgarn wrote: > AFAIUI, the _only_ reason that that is still the default is because of > Mutt, and that won't change as long as some of the kernel developers > are using Mutt for e-mail and the Mutt developers don't realize that > what they are doing is

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Austin S. Hemmelgarn
On 2015-12-14 14:39, Christoph Anton Mitterer wrote: On Mon, 2015-12-14 at 09:24 -0500, Austin S. Hemmelgarn wrote: Unless things have changed very recently, even many modern systems update atime on read-only filesystems, unless the media itself is read-only. Seriously? Oh... *sigh*... You

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Christoph Anton Mitterer
On Mon, 2015-12-14 at 09:24 -0500, Austin S. Hemmelgarn wrote: > Unless things have changed very recently, even many modern systems > update atime on read-only filesystems, unless the media itself is > read-only. Seriously? Oh... *sigh*... You mean as in Linux, ext*, xfs? > If you have software

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Christoph Anton Mitterer
On Mon, 2015-12-14 at 15:27 -0500, Austin S. Hemmelgarn wrote: > On 2015-12-14 14:39, Christoph Anton Mitterer wrote: > > On Mon, 2015-12-14 at 09:24 -0500, Austin S. Hemmelgarn wrote: > > > Unless things have changed very recently, even many modern > > > systems > > > update atime on read-only

project idea: per-object default mount-options / more btrfs-properties / chattr attributes (was: btrfs: poor performance on deleting many large files)

2015-12-14 Thread Christoph Anton Mitterer
Just FYI: On Mon, 2015-12-14 at 15:27 -0500, Austin S. Hemmelgarn wrote: > > My idea would be basically, that having a noatime btrfs-property, > > which > > is perhaps even set automatically, would be an elegant way of doing > > that. > > I just haven't had time to properly write that up and add

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Christoph Anton Mitterer
On Mon, 2015-12-14 at 22:30 +0100, Lionel Bouton wrote: > Mutt is often used as an example but tmpwatch uses atime by default > too > and it's quite useful. Hmm one could probably argue that these few cases justify the use of separate filesystems (or btrfs subvols ;) ), so that the majority could

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-13 Thread Christoph Anton Mitterer
On Wed, 2015-12-09 at 13:36 +, Duncan wrote: > Answering the BTW first, not to my knowledge, and I'd be > skeptical.  In > general, btrfs is cowed, and that's the focus.  To the extent that > nocow > is necessary for fragmentation/performance reasons, etc, the idea is > to > try to make cow

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-13 Thread Christoph Anton Mitterer
Two more on these: On Thu, 2015-11-26 at 00:33 +, Hugo Mills wrote: > 3) When I would actually disable datacow for e.g. a subvolume that > > holds VMs or DBs... what are all the implications? > > Obviously no checksumming, but what happens if I snapshot such a > > subvolume or if I
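Disabling datacow for a VM or DB subvolume, as asked about above, is usually done with the `C` (No_COW) file attribute rather than the filesystem-wide `nodatacow` mount option. A sketch, assuming a btrfs filesystem and a hypothetical path; note the attribute only takes effect on empty files, which is why it is set on the directory before any images are created:

```shell
# Set No_COW on a directory so newly created files inherit it.
mkdir -p /var/lib/vm-images
chattr +C /var/lib/vm-images

# Verify: lsattr shows 'C' in the attribute column.
lsattr -d /var/lib/vm-images

# Per this thread: nodatacow implies nodatasum, so files created here
# get no btrfs checksums, and snapshots of them will COW on first write.
```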

Re: btrfs: poor performance on deleting many large files

2015-12-12 Thread Duncan
Christoph Anton Mitterer posted on Sat, 12 Dec 2015 23:15:38 +0100 as excerpted: > On Sat, 2015-11-28 at 06:49 +, Duncan wrote: >> Christoph Anton Mitterer posted on Sat, 28 Nov 2015 04:57:05 +0100 as >> excerpted: >> > Still, specifically for snapshots that's a bit unhandy, as one >> >

Re: btrfs: poor performance on deleting many large files

2015-12-12 Thread Christoph Anton Mitterer
On Sat, 2015-11-28 at 06:49 +, Duncan wrote: > Christoph Anton Mitterer posted on Sat, 28 Nov 2015 04:57:05 +0100 as > excerpted: > > Still, specifically for snapshots that's a bit unhandy, as one > > typically > > doesn't mount each of them... one rather mount e.g. the top level > > subvol >

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-09 Thread Duncan
Christoph Anton Mitterer posted on Wed, 09 Dec 2015 06:43:01 +0100 as excerpted: > Hey Hugo, > > > On Thu, 2015-11-26 at 00:33 +, Hugo Mills wrote: > >> The issue is that nodatacow bypasses the transactional nature of >> the FS, making changes to live data immediately. This then means that

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-09 Thread Duncan
Christoph Anton Mitterer posted on Wed, 09 Dec 2015 06:45:47 +0100 as excerpted: > On 2015-11-27 00:08, Duncan wrote: >> Christoph Anton Mitterer posted on Thu, 26 Nov 2015 01:23:59 +0100 as >> excerpted: >>> 1) AFAIU, the fragmentation problem exists especially for those files >>> that see many

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-08 Thread Christoph Anton Mitterer
Hey Hugo, On Thu, 2015-11-26 at 00:33 +, Hugo Mills wrote: >    Answering the second part first, no, it can't. Thanks so far :) >    The issue is that nodatacow bypasses the transactional nature of > the FS, making changes to live data immediately. This then means that > if you modify a

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-12-08 Thread Christoph Anton Mitterer
On 2015-11-27 00:08, Duncan wrote: > Christoph Anton Mitterer posted on Thu, 26 Nov 2015 01:23:59 +0100 as > excerpted: >> 1) AFAIU, the fragmentation problem exists especially for those files >> that see many random writes, especially, but not limited to, big files. >> Now that databases and VMs

Re: btrfs: poor performance on deleting many large files

2015-11-27 Thread Christoph Anton Mitterer
On Fri, 2015-11-27 at 03:38 +, Duncan wrote: > AFAIK, per-subvolume *atime mounts should already be working. Ah I see. :) Still, specifically for snapshots that's a bit unhandy, as one typically doesn't mount each of them... one rather mounts e.g. the top level subvol and has a subdir

Re: btrfs: poor performance on deleting many large files

2015-11-27 Thread Duncan
Christoph Anton Mitterer posted on Sat, 28 Nov 2015 04:57:05 +0100 as excerpted: > On Fri, 2015-11-27 at 03:38 +, Duncan wrote: >> AFAIK, per-subvolume *atime mounts should already be working. > Ah I see. :) > > Still, specifically for snapshots that's a bit unhandy, as one typically >
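The per-subvolume *atime mounts Duncan refers to work by mounting each subvolume explicitly with its own options. A sketch under assumed names (the device, subvolume names, and mount points are all hypothetical):

```shell
# Mount the main subvolume with relatime, but a snapshots subvolume
# with noatime, so merely browsing snapshots never triggers
# atime-update COW in the metadata trees.
mount -o subvol=@home,relatime /dev/sdb1 /home
mount -o subvol=@snapshots,noatime /dev/sdb1 /mnt/snapshots
```

The limitation discussed in the thread remains: if snapshots are reached as subdirectories of one big top-level mount, they inherit that mount's atime setting instead.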

Re: btrfs: poor performance on deleting many large files

2015-11-26 Thread Duncan
Christoph Anton Mitterer posted on Fri, 27 Nov 2015 01:06:45 +0100 as excerpted: > And additionally, allow people to mount subvols with different > noatime/relatime/atime settings (unless that's already working)... that > way, they could enable it for things where they want/need it,... and >

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-11-26 Thread Duncan
Christoph Anton Mitterer posted on Thu, 26 Nov 2015 01:23:59 +0100 as excerpted: > Hey. > > I've worried before about the topics Mitch has raised. > Some questions. > > 1) AFAIU, the fragmentation problem exists especially for those files > that see many random writes, especially, but not

Re: btrfs: poor performance on deleting many large files

2015-11-26 Thread Duncan
Christoph Anton Mitterer posted on Thu, 26 Nov 2015 19:25:47 +0100 as excerpted: > On Thu, 2015-11-26 at 16:52 +, Duncan wrote: >> For people doing snapshotting in particular, atime updates can be a big >> part of the differences between snapshots, so it's particularly >> important to set

Re: btrfs: poor performance on deleting many large files

2015-11-26 Thread Christoph Anton Mitterer
On Thu, 2015-11-26 at 23:29 +, Duncan wrote: > > but only on meta-data blocks, right? > Yes. Okay... so it'll at most get the whole meta-data for a snapshot separately and not shared anymore... And when these are chained as in ZFS,.. it probably amplifies... i.e. a change deep down in the tree

Re: btrfs: poor performance on deleting many large files

2015-11-26 Thread Qu Wenruo
Mitchell Fossen wrote on 2015/11/25 15:49 -0600: On Mon, 2015-11-23 at 06:29 +, Duncan wrote: Using subvolumes was the first recommendation I was going to make, too, so you're on the right track. =:^) Also, in case you are using it (you didn't say, but this has been demonstrated to

Re: btrfs: poor performance on deleting many large files

2015-11-26 Thread Duncan
Mitchell Fossen posted on Wed, 25 Nov 2015 15:49:58 -0600 as excerpted: > Also, is there a recommendation for relatime vs noatime mount options? I > don't believe anything that runs on the server needs to use file access > times, so if it can help with performance/disk usage I'm fine with >

Re: btrfs: poor performance on deleting many large files

2015-11-26 Thread Christoph Anton Mitterer
On Thu, 2015-11-26 at 16:52 +, Duncan wrote: > For people doing snapshotting in particular, atime updates can be a > big > part of the differences between snapshots, so it's particularly > important > to set noatime if you're snapshotting. What exactly happens when that is left at

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-11-25 Thread Hugo Mills
On Thu, Nov 26, 2015 at 01:23:59AM +0100, Christoph Anton Mitterer wrote: > 2) Why does nodatacow imply nodatasum and can that ever be decoupled? Answering the second part first, no, it can't. The issue is that nodatacow bypasses the transactional nature of the FS, making changes to live

Re: [auto-]defrag, nodatacow - general suggestions?(was: btrfs: poor performance on deleting many large files?)

2015-11-25 Thread Christoph Anton Mitterer
Hey. I've worried before about the topics Mitch has raised. Some questions. 1) AFAIU, the fragmentation problem exists especially for those files that see many random writes, especially, but not limited to, big files. Now that databases and VMs are affected by this, is probably broadly known in

Re: btrfs: poor performance on deleting many large files

2015-11-25 Thread Mitchell Fossen
On Mon, 2015-11-23 at 06:29 +, Duncan wrote: > Using subvolumes was the first recommendation I was going to make, too, > so you're on the right track. =:^) > > Also, in case you are using it (you didn't say, but this has been > demonstrated to solve similar issues for others so it's worth

Re: btrfs: poor performance on deleting many large files

2015-11-23 Thread Austin S Hemmelgarn
On 2015-11-22 20:43, Mitch Fossen wrote: Hi all, I have a btrfs setup of 4x2TB HDDs for /home in btrfs RAID0 on Ubuntu 15.10 (kernel 4.2) and btrfs-progs 4.3.1. Root is on a separate SSD also running btrfs. About 6 people use it via ssh and run simulations. One of these simulations generates a

btrfs: poor performance on deleting many large files

2015-11-22 Thread Mitch Fossen
Hi all, I have a btrfs setup of 4x2TB HDDs for /home in btrfs RAID0 on Ubuntu 15.10 (kernel 4.2) and btrfs-progs 4.3.1. Root is on a separate SSD also running btrfs. About 6 people use it via ssh and run simulations. One of these simulations generates a lot of intermediate data that can be
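The subvolume recommendation made in the replies addresses exactly this workload: give each simulation run its own subvolume, and cleanup becomes a single subvolume deletion handled by the background cleaner instead of a long synchronous `rm -rf`. A sketch with hypothetical paths, requiring root on a btrfs filesystem:

```shell
# Create a throwaway subvolume per simulation run (hypothetical path).
btrfs subvolume create /home/scratch/run-001

# ... the run writes its intermediate data into /home/scratch/run-001 ...

# Deleting the whole subvolume queues extent cleanup in the background,
# instead of unlinking every file synchronously the way rm -rf does:
btrfs subvolume delete /home/scratch/run-001
```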

Re: btrfs: poor performance on deleting many large files

2015-11-22 Thread Duncan
Mitch Fossen posted on Sun, 22 Nov 2015 19:43:28 -0600 as excerpted: > Hi all, > > I have a btrfs setup of 4x2TB HDDs for /home in btrfs RAID0 on Ubuntu > 15.10 (kernel 4.2) and btrfs-progs 4.3.1. Root is on a separate SSD also > running btrfs. > > About 6 people use it via ssh and run