Re: btrfs system slow down with 100GB file

2021-04-30 Thread Richard Shaw
On Fri, Apr 30, 2021 at 7:56 PM Roger Heflin wrote: > 388217 * 10ms = about 3800 seconds to read that file or about > 26MB/sec, but with all of the seeks most of that time will be idle > time waiting on disk (iowait), and it is very possible that parts of > the file have large extents and other p

Re: btrfs system slow down with 100GB file

2021-04-30 Thread Roger Heflin
388217 * 10ms = about 3800 seconds to read that file or about 26MB/sec, but with all of the seeks most of that time will be idle time waiting on disk (iowait), and it is very possible that parts of the file have large extents and other parts of the file are horribly fragmented. And that ignores an
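The arithmetic above can be sketched as a quick shell calculation. The 388217 extent count is from the post; the ~10 ms per-seek cost and the 100 GB file size are the post's own round numbers, used here only to reproduce its estimate:

```shell
# Rough model from the post: one ~10 ms seek per extent of the file.
extents=388217
seek_ms=10
total_s=$(( extents * seek_ms / 1000 ))          # ~3882 s, i.e. "about 3800 seconds"
echo "estimated read time: ${total_s}s"
# Reading a 100 GB (100000 MB) file in that many seconds:
awk -v s="$total_s" 'BEGIN { printf "throughput: ~%.0f MB/sec\n", 100000 / s }'
```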

Re: btrfs system slow down with 100GB file

2021-04-30 Thread Richard Shaw
On Fri, Apr 30, 2021 at 6:16 PM Roger Heflin wrote: > I don't know why but the spinning disk is being crushed. > Well, a little googling after my post it appears the database is LMDB, which is a COW db. So I can see how a COW DB on top of a COW FS may be a problem, but I have marked the director

Re: btrfs system slow down with 100GB file

2021-04-30 Thread Roger Heflin
I don't know why but the spinning disk is being crushed. If you divide the MB/sec by the reads you get around 4k per read (that is about as bad as you could do). If you multiply the reads/sec * r_await you get all of the time accounted for. And since each read is taking around 8-10ms (around the
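That accounting can be sketched with made-up iostat -x figures; the real numbers aren't in the snippet, so 110 r/s, 0.44 MB/sec read throughput, and 9 ms r_await are purely illustrative values chosen to match the ~4k-per-read, fully-busy conclusion:

```shell
rps=110       # r/s from iostat -x (hypothetical)
rmbs=0.44     # read MB/sec (hypothetical)
r_await=9     # average ms each read waits (hypothetical)
awk -v rps="$rps" -v rmbs="$rmbs" -v aw="$r_await" 'BEGIN {
  printf "avg read size: ~%.0f KB\n", rmbs * 1000 / rps   # ~4 KB per read
  printf "time busy: ~%.0f ms per second\n", rps * aw     # ~990 ms: disk saturated
}'
```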

Re: btrfs system slow down with 100GB file

2021-04-30 Thread Richard Shaw
A little thread necro... I stopped the blockchain daemon for a while and recently restarted it and am now seeing GUI freezes while it resyncs even though the file itself is marked +C... Here's the output of the requested commands: https://pastebin.com/9i0DaVpf Thanks, Richard ___

Re: btrfs system slow down with 100GB file

2021-03-26 Thread Chris Murphy
On Fri, Mar 26, 2021 at 4:00 PM Roberto Ragusa wrote: > Well, there is no reason for fsync to block everything else. In practice, it does. There's only one thing happening at a time with a HDD, so while the write is flushing, nothing else is going to get either a read or a write in, which i

Re: btrfs system slow down with 100GB file

2021-03-26 Thread Chris Murphy
On Thu, Mar 25, 2021 at 7:26 PM Chris Murphy wrote: > > The problem is well understood for some time. > https://lwn.net/Articles/572911/ This is an update on that 8 year old story. Last year writebehind patches were proposed, and a discussion ensued. https://lore.kernel.org/linux-mm/CAHk-=whf2bq

Re: btrfs system slow down with 100GB file

2021-03-26 Thread Roberto Ragusa
On 3/26/21 2:26 AM, Chris Murphy wrote: If you have 40G of dirty data and your program says "fsync it" you've got 40G of data that has been ordered flushed to stable media. Everything else wanting access is going to come close to stopping. That's the way it works. You don't get to "fsync this ve

Re: btrfs system slow down with 100GB file

2021-03-26 Thread Chris Murphy
On Thu, Mar 25, 2021 at 7:26 PM Chris Murphy wrote: > > The defaults are crazy. > https://lwn.net/Articles/572921/ > > Does this really make a difference though outside the slow USB stick > example? I don't know. Seems like it won't for fsync heavy handedness > because that'll take precedence. T

Re: btrfs system slow down with 100GB file

2021-03-26 Thread Chris Murphy
On Thu, Mar 25, 2021 at 6:39 AM Richard Shaw wrote: > > On Wed, Mar 24, 2021 at 11:05 PM Chris Murphy wrote: >> Append writes are the same on overwriting and cow file systems. You >> might get slightly higher iowait because datacow means datasum which >> means more metadata to write. But that's

Re: btrfs system slow down with 100GB file

2021-03-26 Thread Chris Murphy
On Thu, Mar 25, 2021 at 8:59 AM Roberto Ragusa wrote: > > On 3/25/21 4:25 AM, Chris Murphy wrote: > > > It might be appropriate to set dirty_bytes to 500M across the board, > > desktop and server. And dirty_background to 1/4 that. But all of these > > are kinda rudimentary guides. What we really w

Re: btrfs system slow down with 100GB file

2021-03-25 Thread Roberto Ragusa
On 3/25/21 4:25 AM, Chris Murphy wrote: It might be appropriate to set dirty_bytes to 500M across the board, desktop and server. And dirty_background to 1/4 that. But all of these are kinda rudimentary guides. What we really want is something that knows what the throughput of the storage is, and

Re: btrfs system slow down with 100GB file

2021-03-25 Thread Richard Shaw
On Wed, Mar 24, 2021 at 11:05 PM Chris Murphy wrote: > On Wed, Mar 24, 2021 at 6:09 AM Richard Shaw wrote: > > > > I was syncing a 100GB blockchain, which means it was frequently getting > appended to, so COW was really killing my I/O (iowait > 50%) but I had > hoped that marking as nodatacow wo

Re: btrfs system slow down with 100GB file

2021-03-24 Thread Chris Murphy
On Wed, Mar 24, 2021 at 6:09 AM Richard Shaw wrote: > > I was syncing a 100GB blockchain, which means it was frequently getting > appended to, so COW was really killing my I/O (iowait > 50%) but I had hoped > that marking as nodatacow would be a 100% fix, however iowait would be quite > low but

Re: btrfs system slow down with 100GB file

2021-03-24 Thread Chris Murphy
On Wed, Mar 24, 2021 at 4:29 AM John Mellor wrote: > > With Fedora being intended as a desktop platform, why are these settings > not the default? > > The highest priority for a desktop system is to keep the user experience > flowing smoothly, not to maximize disk i/o rates. > > Can this be fixed

Re: btrfs system slow down with 100GB file

2021-03-24 Thread Roger Heflin
Well, while it is not a great idea, it is better than what is going to happen if you don't prevent them from writing, or if you let the write buffer get so large going from the high to lower water mark takes too long. If you never stop the writes then eventually the kernel will OOM. And really ab

Re: btrfs system slow down with 100GB file

2021-03-24 Thread Richard Shaw
On Tue, Mar 23, 2021 at 7:11 PM Chris Murphy wrote: > On Tue, Mar 23, 2021 at 8:39 AM Richard Shaw wrote: > > > > I'm getting significant iowait while writing to a 100GB file. > > High iowait means the system is under load and not CPU bound but IO > bound. It sounds like the drive is writing as

Re: btrfs system slow down with 100GB file

2021-03-24 Thread Roberto Ragusa
On 3/24/21 11:27 AM, John Mellor wrote: With Fedora being intended as a desktop platform, why are these settings not the default? Because they are ugly workarounds for something that is broken elsewhere. Seriously, telling the kernel that it should stop applications attempting to write to fi

Re: btrfs system slow down with 100GB file

2021-03-24 Thread John Mellor
With Fedora being intended as a desktop platform, why are these settings not the default? The highest priority for a desktop system is to keep the user experience flowing smoothly, not to maximize disk i/o rates. Can this be fixed in time for F34? Do we need a bug report? On 3/23/21 2:26 P

Re: btrfs system slow down with 100GB file

2021-03-23 Thread Chris Murphy
On Tue, Mar 23, 2021 at 8:39 AM Richard Shaw wrote: > > I'm getting significant iowait while writing to a 100GB file. High iowait means the system is under load and not CPU bound but IO bound. It sounds like the drive is writing as fast as it can. What's the workload? Reproduce the GUI stalls a

Re: btrfs system slow down with 100GB file

2021-03-23 Thread Roger Heflin
This won't speed up the actual IO but it should reduce the impact on other work. If you aren't familiar, man sysctl to understand how to apply the below settings. Set these 2: vm.dirty_background_bytes = 300 vm.dirty_bytes = 500 They will be 0 to start with and these 2 settings will be
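The archive preview truncates the actual values. As a sketch only, assuming the intended limits were on the order of 300 MB background / 500 MB hard (consistent with the 500M figure discussed elsewhere in the thread), the settings would be applied like this:

```shell
# Values are in bytes; these exact numbers are an assumption, not the
# (truncated) originals from the post.
sysctl vm.dirty_background_bytes=300000000   # start background writeback at ~300 MB dirty
sysctl vm.dirty_bytes=500000000              # block writers once ~500 MB is dirty
# Note: setting the *_bytes knobs zeroes the corresponding *_ratio knobs.
# To persist across reboots, drop the same lines into a sysctl.d file:
#   vm.dirty_background_bytes = 300000000
#   vm.dirty_bytes = 500000000
```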

btrfs system slow down with 100GB file

2021-03-23 Thread Richard Shaw
I'm getting significant iowait while writing to a 100GB file. I have already made it nocow by copying it to another directory, marking the directory nocow (+C) and using cat to re-create it from scratch. I was under the impression that this should fix the problem. On a tangent, it took about 30
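The re-creation dance described above looks roughly like this (paths and filenames are hypothetical; the key point is that +C only takes effect for files created after the directory gains the attribute, so the file must be fully rewritten, not reflink-copied):

```shell
# btrfs-specific: chattr +C disables data COW for *newly created* files only.
mkdir /data/nocow
chattr +C /data/nocow                                  # new files here inherit +C
cat /data/blockchain.dat > /data/nocow/blockchain.dat  # full rewrite, not cp --reflink
lsattr /data/nocow/blockchain.dat                      # the 'C' flag should be listed
```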