Hi Benoit and Peter,
looks like your findings are valid and the spillover alert is indeed broken for
now. I've just created https://tracker.ceph.com/issues/58440 to track this.
Thanks,
Igor
On 1/13/2023 9:54 AM, Benoît Knecht wrote:
Hi Peter,
On Thursday, January 12th, 2023 at 15:12, Peter van Heusden wrote:
> I have a Ceph installation where some of the OSDs were misconfigured to use
> 1GB SSD partitions for rocksdb. This caused a spillover ("BlueFS spillover
> detected"). I recently upgraded to quincy using cephadm (17.2.
Thanks. The command definitely shows non-zero "slow_used_bytes":
"db_total_bytes": 1073733632,
"db_used_bytes": 240123904,
"slow_total_bytes": 4000681103360,
"slow_used_bytes": 8355381248,
So I am not sure why the warnings are no longer appearing.
Peter
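For reference, a minimal sketch (not Ceph code; the JSON below simply reuses the counter values quoted above) of how the spillover condition can be read out of a `perf dump bluefs` section:

```python
import json

# bluefs counters as quoted above (output of `ceph daemon osd.N perf dump bluefs`)
raw = """
{
  "bluefs": {
    "db_total_bytes": 1073733632,
    "db_used_bytes": 240123904,
    "slow_total_bytes": 4000681103360,
    "slow_used_bytes": 8355381248
  }
}
"""

bluefs = json.loads(raw)["bluefs"]
# Spillover means BlueFS has placed DB data on the slow (main) device,
# so a non-zero slow_used_bytes is the condition to look for.
spilled = bluefs["slow_used_bytes"]
print(f"spillover: {spilled / 2**30:.2f} GiB" if spilled else "no spillover")
```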
On Thu, 12 Jan 2023, Kevin wrote:
If you have prometheus enabled, the metrics should be in there I think?
Thanks,
Kevin
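If the mgr prometheus module is enabled, the BlueFS counters are exported as per-OSD metrics; a query along these lines (metric name assumed from the `ceph_bluefs_*` export convention, worth verifying against your exporter's output) would flag OSDs with spillover:

```promql
# OSDs whose BlueFS has placed DB data on the slow device
ceph_bluefs_slow_used_bytes > 0
```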
From: Peter van Heusden
Sent: Thursday, January 12, 2023 6:12 AM
To: ceph-users@ceph.io
Subject: [ceph-users] BlueFS spillover warning gone after upgrade to Quincy
Hi,
I usually look for this:
[ceph: root@storage01 /]# ceph daemon osd.0 perf dump bluefs | grep -E "db_|slow_"
"db_total_bytes": 21470642176,
"db_used_bytes": 179699712,
"slow_total_bytes": 0,
"slow_used_bytes": 0,
If you have spillover I would expect the "slow_used_bytes" to be non-zero.
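That per-OSD check can be scripted; a hypothetical sketch (the OSD names are illustrative, and the counter values reuse the two dumps quoted in this thread) applying the non-zero slow_used_bytes rule:

```python
# Illustrative bluefs sections for two OSDs (values from this thread)
dumps = {
    "osd.0": {"db_total_bytes": 21470642176, "db_used_bytes": 179699712,
              "slow_total_bytes": 0, "slow_used_bytes": 0},
    "osd.1": {"db_total_bytes": 1073733632, "db_used_bytes": 240123904,
              "slow_total_bytes": 4000681103360, "slow_used_bytes": 8355381248},
}

results = {}
for osd, counters in dumps.items():
    # Non-zero slow_used_bytes means DB data has spilled to the slow device
    results[osd] = "spillover" if counters["slow_used_bytes"] > 0 else "ok"
    print(osd, results[osd])
```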