[ceph-users] Re: Adding OSD's results in slow ops, inactive PG's

2024-01-18 Thread Torkil Svensgaard
From: Eugen Block Sent: Thursday, January 18, 2024 9:46 AM To: ceph-users@ceph.io Subject: [ceph-users] Re: Adding OSD's results in slow ops, inactive PG's I'm glad to hear (or read) that it worked for you as well. :-) Quoting Torkil Svensgaard: On 18/01/2024 09:30, Eugen Block wrote: Hi, [ceph: root@lazy /]# ceph-conf --show-config

[ceph-users] Re: Adding OSD's results in slow ops, inactive PG's

2024-01-18 Thread Frank Schilder
Frank Schilder, AIT Risø Campus, Bygning 109, rum S14. From: Eugen Block Sent: Thursday, January 18, 2024 9:46 AM To: ceph-users@ceph.io Subject: [ceph-users] Re: Adding OSD's results in slow ops, inactive PG's I'm glad to hear (or read) that it worked

[ceph-users] Re: Adding OSD's results in slow ops, inactive PG's

2024-01-18 Thread Eugen Block
I'm glad to hear (or read) that it worked for you as well. :-) Quoting Torkil Svensgaard: On 18/01/2024 09:30, Eugen Block wrote: Hi, [ceph: root@lazy /]# ceph-conf --show-config | egrep osd_max_pg_per_osd_hard_ratio osd_max_pg_per_osd_hard_ratio = 3.00 I don't think this is the

[ceph-users] Re: Adding OSD's results in slow ops, inactive PG's

2024-01-18 Thread Torkil Svensgaard
On 18/01/2024 09:30, Eugen Block wrote: Hi, [ceph: root@lazy /]# ceph-conf --show-config | egrep osd_max_pg_per_osd_hard_ratio osd_max_pg_per_osd_hard_ratio = 3.00 I don't think this is the right tool, it says: --show-config-value  Print the corresponding ceph.conf value

[ceph-users] Re: Adding OSD's results in slow ops, inactive PG's

2024-01-18 Thread Eugen Block
Hi, [ceph: root@lazy /]# ceph-conf --show-config | egrep osd_max_pg_per_osd_hard_ratio osd_max_pg_per_osd_hard_ratio = 3.00 I don't think this is the right tool, it says: --show-config-value  Print the corresponding ceph.conf value that matches
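As the thread notes, ceph-conf only reports compiled-in defaults and ceph.conf contents, not what the cluster is actually running with. A minimal sketch of how one might query the live values instead, assuming a cephadm-style cluster like the one in the thread (osd.431 is the OSD mentioned here; substitute your own daemon id):

```shell
# Ask the monitors for the centrally configured values (cluster-wide view):
ceph config get osd osd_max_pg_per_osd_hard_ratio
ceph config get mon mon_max_pg_per_osd

# Ask a specific running OSD daemon what it is actually using
# (run on the host where osd.431 lives, via its admin socket):
ceph daemon osd.431 config get osd_max_pg_per_osd_hard_ratio
```

`ceph config get` reflects the monitors' config database, while `ceph daemon ... config get` shows the running daemon's effective value, which also picks up any runtime overrides.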

[ceph-users] Re: Adding OSD's results in slow ops, inactive PG's

2024-01-17 Thread Torkil Svensgaard
On 18/01/2024 07:48, Eugen Block wrote: Hi,  -3281> 2024-01-17T14:57:54.611+0000 7f2c6f7ef540  0 osd.431 2154828 load_pgs opened 750 pgs <--- I'd say that's close enough to what I suspected. ;-) Not sure why the "maybe_wait_for_max_pg" message isn't there but I'd give it a try with a

[ceph-users] Re: Adding OSD's results in slow ops, inactive PG's

2024-01-17 Thread Eugen Block
Hi, -3281> 2024-01-17T14:57:54.611+0000 7f2c6f7ef540 0 osd.431 2154828 load_pgs opened 750 pgs <--- I'd say that's close enough to what I suspected. ;-) Not sure why the "maybe_wait_for_max_pg" message isn't there but I'd give it a try with a higher osd_max_pg_per_osd_hard_ratio.
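The "750 pgs" in the log lines up exactly with the limit implied by the defaults quoted in this thread. A minimal arithmetic sketch, assuming the defaults the thread mentions (mon_max_pg_per_osd = 250, osd_max_pg_per_osd_hard_ratio = 3.0):

```shell
# An OSD stalls PG activation once its PG count reaches
# mon_max_pg_per_osd * osd_max_pg_per_osd_hard_ratio.
mon_max_pg_per_osd=250           # ceph default per the thread
osd_max_pg_per_osd_hard_ratio=3  # default 3.0 per the thread
hard_limit=$(( mon_max_pg_per_osd * osd_max_pg_per_osd_hard_ratio ))
echo "hard PG limit per OSD: ${hard_limit}"  # 750 -- matches "load_pgs opened 750 pgs"
```

Hitting the hard limit exactly is why raising osd_max_pg_per_osd_hard_ratio is the suggested experiment.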

[ceph-users] Re: Adding OSD's results in slow ops, inactive PG's

2024-01-17 Thread Torkil Svensgaard
On 17-01-2024 22:20, Eugen Block wrote: Hi, this sounds a bit like a customer issue we had almost two years ago. Basically, it was about mon_max_pg_per_osd (default 250) which was exceeded during the first activating OSD (and the last remaining stopping OSD). You can read all the

[ceph-users] Re: Adding OSD's results in slow ops, inactive PG's

2024-01-17 Thread Eugen Block
Hi, this sounds a bit like a customer issue we had almost two years ago. Basically, it was about mon_max_pg_per_osd (default 250) which was exceeded during the first activating OSD (and the last remaining stopping OSD). You can read all the details in the lengthy thread [1]. But if this
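If PG-per-OSD overload during activation is indeed the cause, the workaround discussed in the thread is to raise osd_max_pg_per_osd_hard_ratio temporarily while the new OSDs come in. A hedged sketch using the standard ceph config commands (the value 5 is an illustrative choice, not taken from the thread):

```shell
# Temporarily raise the hard ratio so newly activating OSDs are not
# blocked by transiently high PG counts during rebalancing:
ceph config set osd osd_max_pg_per_osd_hard_ratio 5

# ... add the OSDs and let the PGs activate and backfill ...

# Revert to the default (3.0) once the cluster has settled:
ceph config rm osd osd_max_pg_per_osd_hard_ratio
```

`ceph config rm` removes the override from the monitors' config database, so the daemons fall back to the built-in default.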