I've backported the free space cache tree to my kernel, hopefully along
with all fixes related to it.

The first mount with clear_cache,space_cache=v2 took around 5 hours.
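
For reference, the command was along these lines (device and mount point
are placeholders for my actual setup):

  mount -o clear_cache,space_cache=v2 /dev/md0 /mnt/btrfs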

Currently I do not see any kworker at 100% CPU, but I don't see much load
at all.

btrfs-transaction takes around 2-4% CPU together with a kworker process,
and some mdadm processes are at 2-3%. I/O wait is at 3%.

That's it; it does not do much more. Writing a file does not work.

Greets,
Stefan

On 16.08.2017 at 14:29, Konstantin V. Gavrilenko wrote:
> Roman, initially I had a single process occupying 100% CPU; when I triggered 
> sysrq it was indicated as "btrfs_find_space_for_alloc",
> but that was when I used the autodefrag, compress, compress-force and commit=10 
> mount flags, and space_cache was v1 by default.
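> (In case anyone wants to reproduce: a trace like that can be captured with
> something along the lines of "echo l > /proc/sysrq-trigger", which dumps
> backtraces of all active CPUs to the kernel log.)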
> When I switched to "relatime,compress-force=zlib,space_cache=v2" the 100% CPU 
> disappeared, but the shite performance remained.
> 
> 
> As to the chunk size, there is no information in the article about the type 
> of data that was used, while in our case we are fairly certain about the 
> compressed block size (32-128k). I am currently leaning towards 32k, as it 
> might be ideal in a situation where we have a 5-disk RAID5 array.
> 
> In theory:
> 1. The minimum compressed write (32k) would fill the chunk on a single disk, 
> so the I/O cost of the operation would be 2 reads (original chunk + original 
> parity) and 2 writes (new chunk + new parity).
> 
> 2. The maximum compressed write (128k) would require the update of 1 chunk on 
> each of the 4 data disks + 1 parity write (totals spelled out below).
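> 
> To spell the totals out (a rough sketch, assuming a 5-disk RAID5 with 32k
> chunks, i.e. a full stripe of 4 data chunks + 1 parity chunk):
> 
>   32k write (partial stripe):  read old chunk + old parity, write new
>                                chunk + new parity = 2 reads + 2 writes
>   128k write (full stripe):    write 4 data chunks + 1 parity = 5 writes,
>                                no reads needed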
> 
> 
> 
> Stefan, what mount flags do you use?
> 
> kos
> 
> 
> 
> ----- Original Message -----
> From: "Roman Mamedov" <r...@romanrm.net>
> To: "Konstantin V. Gavrilenko" <k.gavrile...@arhont.com>
> Cc: "Stefan Priebe - Profihost AG" <s.pri...@profihost.ag>, "Marat Khalili" 
> <m...@rqc.ru>, linux-btrfs@vger.kernel.org, "Peter Grandi" 
> <p...@btrfs.list.sabi.co.uk>
> Sent: Wednesday, 16 August, 2017 2:00:03 PM
> Subject: Re: slow btrfs with a single kworker process using 100% CPU
> 
> On Wed, 16 Aug 2017 12:48:42 +0100 (BST)
> "Konstantin V. Gavrilenko" <k.gavrile...@arhont.com> wrote:
> 
>> I believe the chunk size of 512kb is even worse for performance than the 
>> default setting of 256kb on my HW RAID.
> 
> It might be, but that does not explain the original problem reported at all.
> If mdraid performance were the bottleneck, you would see high iowait and
> possibly some CPU load from the mdX_raidY threads, but not a single Btrfs
> thread pegged at 100% CPU.
> 
>> So now I am moving the data from the array and will be rebuilding it with a
>> 64k or 32k chunk size and checking the performance.
> 
> 64K is the sweet spot for RAID5/6:
> http://louwrentius.com/linux-raid-level-and-chunk-size-the-benchmarks.html
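> 
> For example, recreating the array with a 64k chunk size would be something
> along these lines (device names are placeholders; mdadm --create destroys
> the existing array, so only once the data has been moved off):
> 
>   mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=64 \
>         /dev/sd[b-f]1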
> 