The large number of osdmaps is what I was suspecting. "ceph tell osd.158
status" (or the same for any other OSD) would show us how many osdmaps the
OSDs are currently holding on to.
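
For reference, roughly what that output looks like (a sketch from memory of a
Nautilus-era release, with made-up numbers; exact fields may differ):

    ceph tell osd.158 status
    {
        "cluster_fsid": "...",
        "osd_fsid": "...",
        "whoami": 158,
        "state": "active",
        "oldest_map": 2400000,
        "newest_map": 2650000,
        "num_pgs": 0
    }

The number of osdmaps the OSD still keeps is roughly newest_map minus
oldest_map, and since every epoch in that range is stored on the OSD, a gap
of hundreds of thousands of epochs can add up to a lot of space.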

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn <http://www.linkedin.com/in/wesleydillingham>


On Mon, Nov 20, 2023 at 6:15 AM Debian <deb...@boku.ac.at> wrote:

> Hi,
>
> yes, all of my small OSDs are affected.
>
> I found the issue: my cluster is healthy and the rebalance has finished, so I
> only have to wait for my old osdmaps to get cleaned up.
>
> This is the same situation as in the thread "Disks are filling up even if
> there is not a single placement group on them".
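>
> (As a side note on watching that cleanup - assuming I remember the field
> names right - the monitors report the range of osdmap epochs they still
> keep:
>
>     ceph report 2>/dev/null | grep -E '"osdmap_(first|last)_committed"'
>
> once everything is healthy, the gap between osdmap_first_committed and
> osdmap_last_committed should shrink back to a few hundred epochs, and the
> OSDs trim their local copies along with it.)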
>
> thx!
>
> On 20.11.23 11:36, Eugen Block wrote:
> > You provide only a few details at a time; it would help to get a full
> > picture if you provided the output Wesley asked for (ceph df detail,
> > ceph tell osd.158 status, ceph osd df tree). Is osd.149 now the
> > problematic one, or did you just add output from a different OSD?
> > It's not really clear what you're doing without the necessary context.
> > You can just add the 'ceph daemon osd.{OSD} perf dump' output here or
> > in some pastebin.
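> >
> > Roughly like this, to gather everything in one go (osd.158 and the file
> > names are just examples, and the 'ceph daemon' call has to run on the
> > host where the OSD lives):
> >
> >     ceph df detail           > ceph_df_detail.txt
> >     ceph osd df tree         > ceph_osd_df_tree.txt
> >     ceph tell osd.158 status > osd.158_status.txt
> >     # on the OSD host:
> >     ceph daemon osd.158 perf dump > osd.158_perf_dump.json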
> >
> > Quoting Debian <deb...@boku.ac.at>:
> >
> >> Hi,
> >>
> >> the block.db size is the default and not custom configured:
> >>
> >> current:
> >>
> >> bluefs.db_used_bytes: 9602859008
> >> bluefs.db_used_bytes: 469434368
> >>
> >> ceph daemon osd.149 config show
> >>
> >>     "bluestore_bitmapallocator_span_size": "1024",
> >>     "bluestore_block_db_size": "0",
> >>     "bluestore_block_size": "107374182400",
> >>     "bluestore_block_wal_size": "100663296",
> >>     "bluestore_cache_size": "0",
> >>     "bluestore_cache_size_hdd": "1073741824",
> >>     "bluestore_cache_size_ssd": "3221225472",
> >>     "bluestore_compression_max_blob_size": "0",
> >>     "bluestore_compression_max_blob_size_hdd": "524288",
> >>     "bluestore_compression_max_blob_size_ssd": "65536",
> >>     "bluestore_compression_min_blob_size": "0",
> >>     "bluestore_compression_min_blob_size_hdd": "131072",
> >>     "bluestore_compression_min_blob_size_ssd": "8192",
> >>     "bluestore_extent_map_inline_shard_prealloc_size": "256",
> >>     "bluestore_extent_map_shard_max_size": "1200",
> >>     "bluestore_extent_map_shard_min_size": "150",
> >>     "bluestore_extent_map_shard_target_size": "500",
> >>     "bluestore_extent_map_shard_target_size_slop": "0.200000",
> >>     "bluestore_max_alloc_size": "0",
> >>     "bluestore_max_blob_size": "0",
> >>     "bluestore_max_blob_size_hdd": "524288",
> >>     "bluestore_max_blob_size_ssd": "65536",
> >>     "bluestore_min_alloc_size": "0",
> >>     "bluestore_min_alloc_size_hdd": "65536",
> >>     "bluestore_min_alloc_size_ssd": "4096",
> >>     "bluestore_prefer_deferred_size": "0",
> >>     "bluestore_prefer_deferred_size_hdd": "32768",
> >>     "bluestore_prefer_deferred_size_ssd": "0",
> >>     "bluestore_rocksdb_options":
> >>
> "compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2",
> >>
> >>     "bluefs_alloc_size": "1048576",
> >>     "bluefs_allocator": "hybrid",
> >>     "bluefs_buffered_io": "false",
> >>     "bluefs_check_for_zeros": "false",
> >>     "bluefs_compact_log_sync": "false",
> >>     "bluefs_log_compact_min_ratio": "5.000000",
> >>     "bluefs_log_compact_min_size": "16777216",
> >>     "bluefs_max_log_runway": "4194304",
> >>     "bluefs_max_prefetch": "1048576",
> >>     "bluefs_min_flush_size": "524288",
> >>     "bluefs_min_log_runway": "1048576",
> >>     "bluefs_preextend_wal_files": "false",
> >>     "bluefs_replay_recovery": "false",
> >>     "bluefs_replay_recovery_disable_compact": "false",
> >>     "bluefs_shared_alloc_size": "65536",
> >>     "bluefs_sync_write": "false",
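> >>
> >> As far as I know the admin socket also has a 'config diff' command that
> >> only prints the settings differing from the defaults, which is an easier
> >> way to spot an overridden bluestore_block_db_size than the full dump:
> >>
> >>     ceph daemon osd.149 config diff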
> >>
> >> From the OSD performance counters I cannot determine what is using the
> >> space,...
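> >>
> >> For reference, the counters that should account for the space (names as I
> >> remember them; they may differ between releases) sit in the bluefs and
> >> bluestore sections of the perf dump:
> >>
> >>     ceph daemon osd.149 perf dump | \
> >>         jq '{bluefs: .bluefs,
> >>              allocated: .bluestore.bluestore_allocated,
> >>              stored: .bluestore.bluestore_stored}'
> >>
> >> If bluestore_allocated is large while the OSD holds no PGs, the space is
> >> most likely the stored osdmaps rather than object data from a pool.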
> >>
> >> thx & best regards
> >>
> >>
> >> On 18.11.23 09:05, Eugen Block wrote:
> >>> Do you have a large block.db size defined in the ceph.conf (or
> >>> config store)?
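> >>>
> >>> Something along these lines should show it (the option is
> >>> bluestore_block_db_size; 'ceph config get' needs the Mimic+ centralized
> >>> config store, which Nautilus has):
> >>>
> >>>     grep -i block_db /etc/ceph/ceph.conf
> >>>     ceph config get osd.149 bluestore_block_db_size
> >>>     ceph config dump | grep -i block_db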
> >>>
> >>> Quoting Debian <deb...@boku.ac.at>:
> >>>
> >>>> thx for your reply, but it shows nothing... there are no PGs on the
> >>>> OSD...
> >>>>
> >>>> best regards
> >>>>
> >>>> On 17.11.23 23:09, Eugen Block wrote:
> >>>>> After you create the OSD, run 'ceph pg ls-by-osd {OSD}'; it should
> >>>>> show you which PGs are created there, and then you'll know which
> >>>>> pool they belong to. Then check the crush rule for that pool again.
> >>>>> You can paste the outputs here.
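> >>>>>
> >>>>> Roughly (pool and rule names below are placeholders):
> >>>>>
> >>>>>     ceph pg ls-by-osd 158
> >>>>>     ceph osd pool get <pool> crush_rule
> >>>>>     ceph osd crush rule dump <rule>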
> >>>>>
> >>>>> Quoting Debian <deb...@boku.ac.at>:
> >>>>>
> >>>>>> Hi,
> >>>>>>
> >>>>>> after a massive rebalance (tunables) my small SSD OSDs are getting
> >>>>>> full. I changed my crush rules so that there are actually no PGs/pools
> >>>>>> on them, but the disks stay full:
> >>>>>>
> >>>>>> ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6)
> >>>>>> nautilus (stable)
> >>>>>>
> >>>>>> ID   CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP    META      AVAIL   %USE   VAR   PGS  STATUS  TYPE NAME
> >>>>>> 158  ssd    0.21999  1.00000   224 GiB  194 GiB  193 GiB  22 MiB  1002 MiB  30 GiB  86.68  1.49  0    up      osd.158
> >>>>>>
> >>>>>> inferring bluefs devices from bluestore path
> >>>>>> 1 : device size 0x37e4400000 : own 0x[1ad3f00000~23c600000] =
> >>>>>> 0x23c600000 : using 0x39630000(918 MiB) : bluestore has
> >>>>>> 0x46e2d0000(18 GiB) available
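> >>>>>>
> >>>>>> That bluefs output presumably comes from something like the following
> >>>>>> (run with the OSD stopped, if I am not mistaken):
> >>>>>>
> >>>>>>     ceph-bluestore-tool bluefs-bdev-sizes \
> >>>>>>         --path /var/lib/ceph/osd/ceph-158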
> >>>>>>
> >>>>>> when I recreate the OSD, it gets full again.
> >>>>>>
> >>>>>> any suggestion?
> >>>>>>
> >>>>>> thx & best regards
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
