[ceph-users] Re: BlueFS spillover warning gone after upgrade to Quincy

2023-01-16 Thread Igor Fedotov
Hi Benoit and Peter, looks like your findings are valid and the spillover alert is broken for now. I've just created https://tracker.ceph.com/issues/58440 to track this. Thanks, Igor On 1/13/2023 9:54 AM, Benoît Knecht wrote: Hi Peter, On Thursday, January 12th, 2023 at 15:12, Peter van Heus

[ceph-users] Re: BlueFS spillover warning gone after upgrade to Quincy

2023-01-12 Thread Benoît Knecht
Hi Peter, On Thursday, January 12th, 2023 at 15:12, Peter van Heusden wrote: > I have a Ceph installation where some of the OSDs were misconfigured to use > 1GB SSD partitions for rocksdb. This caused a spillover ("BlueFS spillover > detected"). I recently upgraded to quincy using cephadm (17.2.

[ceph-users] Re: BlueFS spillover warning gone after upgrade to Quincy

2023-01-12 Thread Peter van Heusden
Thanks. The command definitely shows "slow_bytes": "db_total_bytes": 1073733632, "db_used_bytes": 240123904, "slow_total_bytes": 4000681103360, "slow_used_bytes": 8355381248, So I am not sure why the warnings are no longer appearing. Peter On Thu, 12 Jan 2023 at
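The figures Peter quotes do indicate spillover: `slow_used_bytes` is roughly 8.4 GB, so RocksDB data is living on the slow device despite the 1 GB DB partition. A minimal sketch of that check, using the numbers from the message above (the `has_spillover` helper is illustrative, not part of Ceph):

```python
# Perf-dump excerpt matching the figures quoted above; in practice this
# comes from `ceph daemon osd.N perf dump` under the "bluefs" section.
perf = {
    "db_total_bytes": 1073733632,       # ~1 GB DB partition
    "db_used_bytes": 240123904,
    "slow_total_bytes": 4000681103360,
    "slow_used_bytes": 8355381248,      # ~8.4 GB spilled to the slow device
}

def has_spillover(bluefs: dict) -> bool:
    """BlueFS has spilled over when any DB data resides on the slow device."""
    return bluefs.get("slow_used_bytes", 0) > 0

print(has_spillover(perf))  # True
```

Given that this evaluates to True, the absence of the health warning points at the alerting path (the bug Igor filed), not at the data.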

[ceph-users] Re: BlueFS spillover warning gone after upgrade to Quincy

2023-01-12 Thread Fox, Kevin M
If you have prometheus enabled, the metrics should be in there I think? Thanks, Kevin From: Peter van Heusden Sent: Thursday, January 12, 2023 6:12 AM To: ceph-users@ceph.io Subject: [ceph-users] BlueFS spillover warning gone after upgrade to Quincy Chec
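If the mgr prometheus module is enabled, the BlueFS counters should be scrapeable and can be queried directly. A hedged sketch of such a query; the metric name `ceph_bluefs_slow_used_bytes`, the `ceph_daemon` label, and the Prometheus endpoint are assumptions about a typical deployment, so adjust them to what your exporter actually exposes:

```python
import urllib.parse

# Assumed names; verify against your cluster's /metrics output.
PROM_URL = "http://prometheus.example:9090"   # hypothetical endpoint
PROMQL = "ceph_bluefs_slow_used_bytes > 0"    # assumed metric name

def query_url(base: str, promql: str) -> str:
    """Build a Prometheus instant-query URL for a PromQL expression."""
    return f"{base}/api/v1/query?" + urllib.parse.urlencode({"query": promql})

def spilled_osds(response: dict) -> list:
    """Extract daemon labels from a Prometheus query response body."""
    return [s["metric"].get("ceph_daemon", "?")
            for s in response.get("data", {}).get("result", [])]
```

Fetching `query_url(PROM_URL, PROMQL)` and feeding the decoded JSON to `spilled_osds` would list the OSDs with nonzero slow-device usage.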

[ceph-users] Re: BlueFS spillover warning gone after upgrade to Quincy

2023-01-12 Thread Eugen Block
Hi, I usually look for this: [ceph: root@storage01 /]# ceph daemon osd.0 perf dump bluefs | grep -E "db_|slow_" "db_total_bytes": 21470642176, "db_used_bytes": 179699712, "slow_total_bytes": 0, "slow_used_bytes": 0, If you have spillover I would expect the "sl
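Eugen's manual `perf dump | grep` check can be scripted per OSD. A sketch under the assumption that `ceph daemon osd.N perf dump` emits plain JSON with a `bluefs` section (the helper names here are hypothetical):

```python
import json
import subprocess

def bluefs_stats(osd_id: int) -> dict:
    """Fetch the bluefs section of an OSD's perf dump via the admin socket."""
    out = subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"])
    return json.loads(out)["bluefs"]

def spillover_report(bluefs: dict) -> str:
    """Render the same db_/slow_ counters Eugen greps for."""
    keys = ("db_total_bytes", "db_used_bytes",
            "slow_total_bytes", "slow_used_bytes")
    return "\n".join(f'"{k}": {bluefs.get(k, 0)},' for k in keys)
```

As Eugen notes, with no spillover both `slow_total_bytes` and `slow_used_bytes` read 0; a nonzero `slow_used_bytes` is the signal the (currently broken) health check was meant to surface.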