[ceph-users] Re: backfilling kills rbd performance

2022-11-19 Thread Konold, Martin
Hi,

On 2022-11-19 17:32, Anthony D'Atri wrote:
> I’m not positive that the options work with hyphens in them. Try
> ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_max_single_start 1 --osd_recovery_op_priority=1'

Did so. With Quincy the following
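
For context on why injecting those values may not change much on Quincy: a minimal sketch, assuming the default mClock scheduler is active (in which case the classic osd_max_backfills / osd_recovery_* knobs can be ignored). The profile name and grep pattern are illustrative, not taken from the thread:

    # Assumes Quincy's mClock scheduler, which applies its own QoS profiles
    # instead of the classic recovery/backfill options.
    ceph config set osd osd_mclock_profile high_client_ops   # favour client I/O over recovery/backfill
    ceph config show osd.0 | grep mclock                      # check what one OSD is actually running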

[ceph-users] backfilling kills rbd performance

2022-11-19 Thread Konold, Martin
Hi, on a 3-node hyper-converged PVE cluster with 12 SSD OSDs I experience stalls in RBD performance during normal backfill operations, e.g. moving a pool from size/min_size 2/1 to 3/2. I was expecting that I could control the load caused by the backfilling using ceph tell 'osd.*'
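
A rough sketch of the operations being described, assuming a pool named "rbd" (the pool name is a placeholder, not from the post):

    # Raising replication from 2/1 to 3/2 is what triggers the backfill
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2
    # Classic runtime throttles for recovery/backfill load
    ceph tell 'osd.*' injectargs '--osd_max_backfills=1 --osd_recovery_max_active=1'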

[ceph-users] Re: Issues upgrading cephadm cluster from Octopus.

2022-11-19 Thread Adam King
I will also add, since it could help resolve this, that there is no "mgr/cephadm/registry_json" config option. The whole reason for moving from the previous three options to the new JSON object was actually to move it from config options, which can get spit out in logs, to the config-key store where it's a bit
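
For anyone hitting this, a hedged example of setting the registry credentials through the supported command, so they end up in the config-key store as the JSON object rather than in the old per-field config options; the URL, username and password below are placeholders:

    ceph cephadm registry-login --registry-url registry.example.com \
        --registry-username myuser --registry-password mypass
    # or supply the same data from a JSON file
    ceph cephadm registry-login -i registry.json
    # the stored cephadm keys can be listed (exact key names may vary by release)
    ceph config-key ls | grep cephadm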

[ceph-users] Re: Issues upgrading cephadm cluster from Octopus.

2022-11-19 Thread Adam King
I don't know for sure if it will fix the issue, but the migrations happen based on a config option "mgr/cephadm/migration_current". You could try setting that back to 0 and it would at least trigger the migrations to happen again after restarting/failing over the mgr. They're meant to be
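
A short sketch of the steps described above, using the option path as named in the message:

    # Reset the cephadm migration marker, then fail over the mgr so the
    # standby takes over and re-runs the migrations on startup.
    ceph config set mgr mgr/cephadm/migration_current 0
    ceph mgr fail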