On Sep 3, 2023, at 19:54, Mark Millard wrote:
> ThreadRipper 1950X (32 hardware threads) doing bulk -J128
> with USE_TMPFS=no , no ALLOW_MAKE_JOBS , no
> ALLOW_MAKE_JOBS_PACKAGES , USB3 NVMe SSD storage/ZFS-boot-media,
> debug system build in use :
>
> [00:03:44] Building 34214 packages using up to 128 builders
> [00:03:44] Hit CTRL+t at any time to see bu[...]

On 03.09.2023 22:54, Mark Millard wrote:
> After that ^t produced the likes of:
> load: 6.39 cmd: sh 4849 [tx->tx_quiesce_done_cv] 10047.33r 0.51u 121.32s 1%
> 13004k

On Sep 3, 2023, at 22:06, Alexander Motin wrote:
> Mark,
>
> So the full state is not "tx->tx", but is actually a
> "tx->tx_quiesce_done_cv", which means the thread is waiting for new t[...]

On Sep 3, 2023, at 23:35, Mark Millard wrote: [...]

On Sep 4, 2023, at 02:00, Mark Millard wrote: [...]

On 04.09.2023 05:56, Mark Millard wrote: [...]

On Sep 4, 2023, at 06:09, Alexander Motin wrote:
> per_txg_dirty_frees_percent is directly related to the delete delays we see
> here. You are forcing ZFS to commit transactions each 5% of dirty ARC limit,
> which is 5% of 10% of memory size. I haven't loo[...]

On 04.09.2023 11:45, Mark Millard wrote: [...]

On Sep 4, 2023, at 10:05, Alexander Motin wrote: [...]

On Sep 4, 2023, at 18:39, Mark Millard wrote: [...]

On Sep 4, 2023, at 22:06, Mark Millard wrote: [...]
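The "each 5% of dirty ARC limit, which is 5% of 10% of memory size" arithmetic can be sketched numerically. A minimal shell sketch, assuming a hypothetical 128 GiB machine, a hypothetical 200 GiB of freed file data, per_txg_dirty_frees_percent=5 as in the thread, and ignoring the separate absolute cap that ZFS also applies to the dirty data limit:

```shell
# Hedged sketch of the thresholds described above; the RAM size and the
# total bytes freed are illustrative assumptions, not values from the thread.
mem_bytes=$((128 * 1024 * 1024 * 1024))   # assumed physical memory: 128 GiB
dirty_max=$((mem_bytes / 10))             # dirty data limit: 10% of memory
free_commit=$((dirty_max * 5 / 100))      # per_txg_dirty_frees_percent=5:
                                          # a txg commit is forced once frees
                                          # dirty 5% of that limit
freed=$((200 * 1024 * 1024 * 1024))       # assumed bytes freed by the deletes
txgs=$((freed / free_commit))             # lower bound on forced commits
echo "dirty limit:        $dirty_max bytes"
echo "frees per txg:      $free_commit bytes"
echo "forced txg commits: >= $txgs"
```

Under those assumptions the deletes are spread across at least ~312 forced transaction-group commits, which is one way to picture why the frees serialize behind txg waits. On FreeBSD the tunable being discussed is the sysctl vfs.zfs.per_txg_dirty_frees_percent; raising it lets more frees batch into each txg.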