[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-20 Thread Wesley Dillingham
The large number of osdmaps is what I was suspecting. "ceph tell osd.158 status" (or any osd other than 158) would show us how many osdmaps the OSDs are currently holding on to. Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn On Mon,
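As a rough illustration of what to look for (assuming jq is available and osd.158 is reachable), the retained osdmap range shows up as oldest_map/newest_map in the status JSON:

  ceph tell osd.158 status
  # approximate number of osdmap epochs the OSD still keeps on disk:
  ceph tell osd.158 status | jq '.newest_map - .oldest_map'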

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-20 Thread Debian
Hi, yes, all of my small OSDs are affected. I found the issue: my cluster is healthy and my rebalance has finished - I only have to wait for my old osdmaps to get cleaned up, like in the thread "Disks are filling up even if there is not a single placement group on them". thx! On 20.11.23 11:36,
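One hedged way to watch that cleanup progress (field names assume a reasonably recent Ceph release) is to compare the osdmap range committed on the monitors with what an affected OSD still retains:

  # epoch range the monitors have committed / trimmed up to:
  ceph report | jq '.osdmap_first_committed, .osdmap_last_committed'
  # epoch range a specific OSD still holds; it should shrink as trimming proceeds:
  ceph tell osd.149 status | jq '.oldest_map, .newest_map'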

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-20 Thread Debian
Hi, ohh, that is exactly my problem: my cluster is healthy and no rebalance is active. I only have to wait for the old osdmaps to get cleaned up... thx! On 20.11.23 10:42, Michal Strnad wrote: Hi. Try looking at the thread "Disks are filling up even if there is not a single placement group on

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-20 Thread Eugen Block
You provide only a few details at a time; it would help to get a full picture if you provided the output Wesley asked for (ceph df detail, ceph tell osd.158 status, ceph osd df tree). Is osd.149 now the problematic one, or did you just add output from a different OSD? It's not really clear

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-20 Thread Michal Strnad
Hi. Try looking at the thread "Disks are filling up even if there is not a single placement group on them" in this mailing list. Maybe you are encountering the same problem as me. Michal On 11/20/23 08:56, Debian wrote: Hi, the block.db size is default and not custom configured: current:

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-19 Thread Debian
Hi, the block.db size is default and not custom configured: current: bluefs.db_used_bytes: 9602859008 bluefs.db_used_bytes: 469434368 ceph daemon osd.149 config show "bluestore_bitmapallocator_span_size": "1024", "bluestore_block_db_size": "0", "bluestore_block_size":
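For context, those values can be queried directly instead of scrolling through the full config dump (a sketch; assumes access to the OSD's admin socket on its host):

  # BlueFS usage counters (the bluefs.db_used_bytes quoted above):
  ceph daemon osd.149 perf dump | jq '.bluefs'
  # the configured block.db size; "0" means no explicit size was set:
  ceph daemon osd.149 config get bluestore_block_db_size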

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-18 Thread Anthony D'Atri
I was thinking the same thing. Very small OSDs can behave unexpectedly because of the relatively high percentage of overhead. > On Nov 18, 2023, at 3:08 AM, Eugen Block wrote: > > Do you have a large block.db size defined in the ceph.conf (or config store)? > > Quote from Debian: > >>
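As a rough way to put numbers on that overhead (158 is just the OSD id from this thread), the META column of "ceph osd df" shows the per-OSD metadata/BlueFS footprint, which on a very small OSD is a large fraction of SIZE:

  ceph osd df | awk 'NR==1 || $1==158'   # compare META against SIZE for the small OSD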

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-18 Thread Eugen Block
Do you have a large block.db size defined in the ceph.conf (or config store)? Quote from Debian: thx for your reply, it shows nothing... there are no PGs on the OSD... best regards On 17.11.23 23:09, Eugen Block wrote: After you create the OSD, run 'ceph pg ls-by-osd {OSD}', it should
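One hedged way to answer that without digging through files (command names as in current Ceph releases):

  # central config store:
  ceph config get osd bluestore_block_db_size
  ceph config dump | grep -i block_db
  # local ceph.conf on the OSD host:
  grep -i block_db /etc/ceph/ceph.conf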

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-17 Thread Wesley Dillingham
Please send along a pastebin of "ceph status", "ceph osd df tree" and "ceph df detail", and also "ceph tell osd.158 status". Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn On Fri, Nov 17, 2023 at 6:20 PM Debian wrote: > thx for your
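For reference, one way to collect all four outputs into a single pastebin-ready file (the file name is just an example):

  for c in "status" "osd df tree" "df detail" "tell osd.158 status"; do
      echo "### ceph $c"; ceph $c
  done > ceph-diag.txt 2>&1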

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-17 Thread Debian
thx for your reply, it shows nothing... there are no PGs on the OSD... best regards On 17.11.23 23:09, Eugen Block wrote: After you create the OSD, run 'ceph pg ls-by-osd {OSD}', it should show you which PGs are created there and then you'll know which pool they belong to, then check again

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-17 Thread Eugen Block
After you create the OSD, run 'ceph pg ls-by-osd {OSD}', it should show you which PGs are created there; then you'll know which pool they belong to, and then check the crush rule for that pool again. You can paste the outputs here. Quote from Debian: Hi, after a massive
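Spelled out as commands (a sketch; <rule> is a placeholder for the rule name found in the previous step):

  ceph pg ls-by-osd {OSD}            # PGs on the OSD; the pool id is the part of each pgid before the dot
  ceph osd pool ls detail            # map pool ids to pool names and their crush_rule
  ceph osd crush rule dump <rule>    # which root/device class the rule actually selects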