Thanks. The command definitely shows non-zero "slow_bytes" values:

        "db_total_bytes": 1073733632,
        "db_used_bytes": 240123904,
        "slow_total_bytes": 4000681103360,
        "slow_used_bytes": 8355381248,

So spillover is evidently still happening (BlueFS has ~7.8 GiB on the
slow device), yet I am not sure why the warnings are no longer appearing.
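
For reference, the option state and the current health output can be
confirmed with, e.g.:

        ceph config get osd bluestore_warn_on_bluefs_spillover
        ceph health detail | grep -i spillover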

Peter

On Thu, 12 Jan 2023 at 17:41, Eugen Block <ebl...@nde.ag> wrote:

> Hi,
>
> I usually look for this:
>
> [ceph: root@storage01 /]# ceph daemon osd.0 perf dump bluefs | grep -E
> "db_|slow_"
>          "db_total_bytes": 21470642176,
>          "db_used_bytes": 179699712,
>          "slow_total_bytes": 0,
>          "slow_used_bytes": 0,
>
> If you have spillover I would expect the "slow_bytes" values to be >
> 0. Is it possible that the OSDs were compacted during/after the
> upgrade, so that the spillover was (temporarily) corrected? Do you
> know how much spillover you had before? And how big was the DB when
> you had the warnings?
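>
> One way to test that theory would be to trigger a compaction manually
> and then watch whether the slow_bytes counters (and the warning)
> change, e.g. for osd.0:
>
> ceph tell osd.0 compact
> ceph daemon osd.0 perf dump bluefs | grep -E "db_|slow_"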
>
> Regards,
> Eugen
>
> Quoting Peter van Heusden <p...@sanbi.ac.za>:
>
> > Hello everyone
> >
> > I have a Ceph installation where some of the OSDs were misconfigured
> > to use 1 GB SSD partitions for RocksDB. This caused a spillover
> > ("BlueFS spillover detected"). I recently upgraded to Quincy (17.2.5)
> > using cephadm, and the spillover warning vanished. This is despite
> > bluestore_warn_on_bluefs_spillover still being set to true.
> >
> > Is there a way to investigate the current state of the DB to see if
> > spillover is, indeed, still happening?
> >
> > Thank you,
> > Peter
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
