I was thinking the same thing. Very small OSDs can behave unexpectedly because
overhead (the DB/WAL footprint, for example) makes up a relatively high
percentage of their capacity.
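
A quick way to see how much of a small OSD is taken up by that overhead (a rough sketch, assuming osd.158 from the output below; the META column and the bluefs db_used_bytes counter are what I'd look at):

  ceph osd df
  ceph daemon osd.158 perf dump bluefs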

> On Nov 18, 2023, at 3:08 AM, Eugen Block <ebl...@nde.ag> wrote:
> 
> Do you have a large block.db size defined in the ceph.conf (or config store)?
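> 
> A quick way to check that (just a sketch, assuming osd.158 and the default admin socket):
> 
>   ceph config get osd bluestore_block_db_size
>   ceph daemon osd.158 config show | grep bluestore_block_db_size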
> 
> Zitat von Debian <deb...@boku.ac.at>:
> 
>> Thanks for your reply. It shows nothing: there are no PGs on the OSD.
>> 
>> best regards
>> 
>>> On 17.11.23 23:09, Eugen Block wrote:
>>> After you create the OSD, run 'ceph pg ls-by-osd {OSD}'; it should show you
>>> which PGs are created there, and then you'll know which pool they belong to.
>>> Then check the crush rule for that pool again. You can paste the outputs
>>> here.
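>>> 
>>> For example (a sketch only; osd.158 is assumed, <pool> and <rule> are placeholders):
>>> 
>>>   ceph pg ls-by-osd 158
>>>   ceph osd pool get <pool> crush_rule
>>>   ceph osd crush rule dump <rule>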
>>> 
>>> Zitat von Debian <deb...@boku.ac.at>:
>>> 
>>>> Hi,
>>>> 
>>>> after a massive rebalance (tunables change), my small SSD OSDs are getting
>>>> full. I changed my crush rules so there are actually no PGs/pools on them,
>>>> but the disks stay full:
>>>> 
>>>> ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
>>>> 
>>>> ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
>>>> 158 ssd   0.21999 1.00000  224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49   0 up     osd.158
>>>> 
>>>> inferring bluefs devices from bluestore path
>>>> 1 : device size 0x37e4400000 : own 0x[1ad3f00000~23c600000] = 0x23c600000 : using 0x39630000(918 MiB) : bluestore has 0x46e2d0000(18 GiB) available
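>>>> 
>>>> That bluefs size report can be reproduced with something along these lines (a sketch, assuming the default OSD data path and that the OSD is stopped first):
>>>> 
>>>>   ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-158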
>>>> 
>>>> When I recreate the OSD, it gets full again.
>>>> 
>>>> Any suggestions?
>>>> 
>>>> Thanks & best regards
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
