[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-20 Thread Wesley Dillingham
The large number of osdmaps is what I was suspecting. "ceph tell osd.158
status" (or the same for any OSD other than 158) would show us how many osdmaps the OSDs
are currently holding on to.
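
A minimal sketch of what I mean (osd.158 just as an example, run it against any affected OSD; if I remember correctly the monitors' trim bounds are also visible in the report output):

    # oldest_map / newest_map give the range of osdmap epochs this OSD still stores
    ceph tell osd.158 status

    # a large gap between first and last committed usually means old maps were not trimmed yet
    ceph report | grep -E '"osdmap_(first|last)_committed"'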

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 


On Mon, Nov 20, 2023 at 6:15 AM Debian  wrote:

> Hi,
>
> Yes, all of my small OSDs are affected.
>
> I found the issue: my cluster is healthy and my rebalance has finished - I
> only have to wait for my old osdmaps to get cleaned up.
>
> like in the thread "Disks are filling up even if there is not a single
> placement group on them"
>
> thx!
>
> On 20.11.23 11:36, Eugen Block wrote:
> > You provide only a few details at a time; it would help to get a full
> > picture if you provided the output Wesley asked for (ceph df detail,
> > ceph tell osd.158 status, ceph osd df tree). Is osd.149 now the
> > problematic one or did you just add output from a different osd?
> > It's not really clear what you're doing without the necessary context.
> > You can just add the 'ceph daemon osd.{OSD} perf dump' output here or
> > in some pastebin.
> >
> > Zitat von Debian :
> >
> >> Hi,
> >>
> >> the block.db size is the default and not custom configured:
> >>
> >> current:
> >>
> >> bluefs.db_used_bytes: 9602859008
> >> bluefs.db_used_bytes: 469434368
> >>
> >> ceph daemon osd.149 config show
> >>
> >> "bluestore_bitmapallocator_span_size": "1024",
> >> "bluestore_block_db_size": "0",
> >> "bluestore_block_size": "107374182400",
> >> "bluestore_block_wal_size": "100663296",
> >> "bluestore_cache_size": "0",
> >> "bluestore_cache_size_hdd": "1073741824",
> >> "bluestore_cache_size_ssd": "3221225472",
> >> "bluestore_compression_max_blob_size": "0",
> >> "bluestore_compression_max_blob_size_hdd": "524288",
> >> "bluestore_compression_max_blob_size_ssd": "65536",
> >> "bluestore_compression_min_blob_size": "0",
> >> "bluestore_compression_min_blob_size_hdd": "131072",
> >> "bluestore_compression_min_blob_size_ssd": "8192",
> >> "bluestore_extent_map_inline_shard_prealloc_size": "256",
> >> "bluestore_extent_map_shard_max_size": "1200",
> >> "bluestore_extent_map_shard_min_size": "150",
> >> "bluestore_extent_map_shard_target_size": "500",
> >> "bluestore_extent_map_shard_target_size_slop": "0.20",
> >> "bluestore_max_alloc_size": "0",
> >> "bluestore_max_blob_size": "0",
> >> "bluestore_max_blob_size_hdd": "524288",
> >> "bluestore_max_blob_size_ssd": "65536",
> >> "bluestore_min_alloc_size": "0",
> >> "bluestore_min_alloc_size_hdd": "65536",
> >> "bluestore_min_alloc_size_ssd": "4096",
> >> "bluestore_prefer_deferred_size": "0",
> >> "bluestore_prefer_deferred_size_hdd": "32768",
> >> "bluestore_prefer_deferred_size_ssd": "0",
> >> "bluestore_rocksdb_options":
> >>
> "compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2",
> >>
> >> "bluefs_alloc_size": "1048576",
> >> "bluefs_allocator": "hybrid",
> >> "bluefs_buffered_io": "false",
> >> "bluefs_check_for_zeros": "false",
> >> "bluefs_compact_log_sync": "false",
> >> "bluefs_log_compact_min_ratio": "5.00",
> >> "bluefs_log_compact_min_size": "16777216",
> >> "bluefs_max_log_runway": "4194304",
> >> "bluefs_max_prefetch": "1048576",
> >> "bluefs_min_flush_size": "524288",
> >> "bluefs_min_log_runway": "1048576",
> >> "bluefs_preextend_wal_files": "false",
> >> "bluefs_replay_recovery": "false",
> >> "bluefs_replay_recovery_disable_compact": "false",
> >> "bluefs_shared_alloc_size": "65536",
> >> "bluefs_sync_write": "false",
> >>
> >> With the OSD performance counters I cannot determine what is using the
> >> memory...
> >>
> >> thx & best regards
> >>
> >>
> >> On 18.11.23 09:05, Eugen Block wrote:
> >>> Do you have a large block.db size defined in the ceph.conf (or
> >>> config store)?
> >>>
> >>> Zitat von Debian :
> >>>
>  Thanks for your reply, it shows nothing... there are no PGs on the
>  OSD...
> 
>  best regards
> 
>  On 17.11.23 23:09, Eugen Block wrote:
> > After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘, it should
> > show you which PGs are created there and then you’ll know which
> > pool they belong to, then check again the crush rule for that
> > pool. You can paste the outputs here.
> >
> > Zitat von Debian :
> >
> >> Hi,
> >>
> >> After a massive rebalance (tunables) my small SSD OSDs are getting
> >> full. I changed my crush rules so there are actually no PGs/pools
> >> on them, but the disks stay full:
> >>
> >> ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6)
> >> nautilus (stable)
> >>

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-20 Thread Debian

Hi,

Yes, all of my small OSDs are affected.

I found the issue: my cluster is healthy and my rebalance has finished - I
only have to wait for my old osdmaps to get cleaned up.


like in the thread "Disks are filling up even if there is not a single 
placement group on them"


thx!

On 20.11.23 11:36, Eugen Block wrote:
You provide only a few details at a time; it would help to get a full
picture if you provided the output Wesley asked for (ceph df detail,
ceph tell osd.158 status, ceph osd df tree). Is osd.149 now the 
problematic one or did you just add output from a different osd?
It's not really clear what you're doing without the necessary context. 
You can just add the 'ceph daemon osd.{OSD} perf dump' output here or 
in some pastebin.


Zitat von Debian :


Hi,

the block.db size is the default and not custom configured:

current:

bluefs.db_used_bytes: 9602859008
bluefs.db_used_bytes: 469434368

ceph daemon osd.149 config show

    "bluestore_bitmapallocator_span_size": "1024",
    "bluestore_block_db_size": "0",
    "bluestore_block_size": "107374182400",
    "bluestore_block_wal_size": "100663296",
    "bluestore_cache_size": "0",
    "bluestore_cache_size_hdd": "1073741824",
    "bluestore_cache_size_ssd": "3221225472",
    "bluestore_compression_max_blob_size": "0",
    "bluestore_compression_max_blob_size_hdd": "524288",
    "bluestore_compression_max_blob_size_ssd": "65536",
    "bluestore_compression_min_blob_size": "0",
    "bluestore_compression_min_blob_size_hdd": "131072",
    "bluestore_compression_min_blob_size_ssd": "8192",
    "bluestore_extent_map_inline_shard_prealloc_size": "256",
    "bluestore_extent_map_shard_max_size": "1200",
    "bluestore_extent_map_shard_min_size": "150",
    "bluestore_extent_map_shard_target_size": "500",
    "bluestore_extent_map_shard_target_size_slop": "0.20",
    "bluestore_max_alloc_size": "0",
    "bluestore_max_blob_size": "0",
    "bluestore_max_blob_size_hdd": "524288",
    "bluestore_max_blob_size_ssd": "65536",
    "bluestore_min_alloc_size": "0",
    "bluestore_min_alloc_size_hdd": "65536",
    "bluestore_min_alloc_size_ssd": "4096",
    "bluestore_prefer_deferred_size": "0",
    "bluestore_prefer_deferred_size_hdd": "32768",
    "bluestore_prefer_deferred_size_ssd": "0",
    "bluestore_rocksdb_options": 
"compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2",


    "bluefs_alloc_size": "1048576",
    "bluefs_allocator": "hybrid",
    "bluefs_buffered_io": "false",
    "bluefs_check_for_zeros": "false",
    "bluefs_compact_log_sync": "false",
    "bluefs_log_compact_min_ratio": "5.00",
    "bluefs_log_compact_min_size": "16777216",
    "bluefs_max_log_runway": "4194304",
    "bluefs_max_prefetch": "1048576",
    "bluefs_min_flush_size": "524288",
    "bluefs_min_log_runway": "1048576",
    "bluefs_preextend_wal_files": "false",
    "bluefs_replay_recovery": "false",
    "bluefs_replay_recovery_disable_compact": "false",
    "bluefs_shared_alloc_size": "65536",
    "bluefs_sync_write": "false",

With the OSD performance counters I cannot determine what is using the
memory...


thx & best regards


On 18.11.23 09:05, Eugen Block wrote:
Do you have a large block.db size defined in the ceph.conf (or 
config store)?


Zitat von Debian :

Thanks for your reply, it shows nothing... there are no PGs on the
OSD...


best regards

On 17.11.23 23:09, Eugen Block wrote:
After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘, it should 
show you which PGs are created there and then you’ll know which 
pool they belong to, then check again the crush rule for that 
pool. You can paste the outputs here.


Zitat von Debian :


Hi,

After a massive rebalance (tunables) my small SSD OSDs are getting
full. I changed my crush rules so there are actually no PGs/pools
on them, but the disks stay full:

ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)

ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
158 ssd   0.21999 1.0      224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49 0   up     osd.158

inferring bluefs devices from bluestore path
1 : device size 0x37e440 : own 0x[1ad3f0~23c60] = 0x23c60 : using 0x3963(918 MiB) : bluestore has 0x46e2d(18 GiB) available

When I recreate the OSD, it gets full again.

Any suggestions?

thx & best regards
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-20 Thread Debian

Hi,

Ohh, that is exactly my problem: my cluster is healthy and no rebalance is
active.


I only have to wait for the old osdmaps to get cleaned up...
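
In case someone else finds this later: as far as I understand it, the monitors only trim osdmaps back to the oldest epoch still needed (roughly the last epoch at which every PG was clean), and the OSDs drop their local copies as that bound moves forward. A rough way to watch the cleanup (osd.158 as an example):

    ceph pg stat                                   # trimming only progresses while the PGs are clean
    ceph report | grep '"osdmap_first_committed"'  # lower trim bound, should move up over time
    ceph osd df tree | grep 'osd\.158'             # RAW USE on the affected OSD should drop accordingly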

thx!

On 20.11.23 10:42, Michal Strnad wrote:

Hi.

Take a look at the thread "Disks are filling up even if there is not a
single placement group on them" in this mailing list. Maybe you
have encountered the same problem as me.


Michal



On 11/20/23 08:56, Debian wrote:

Hi,

the block.db size is the default and not custom configured:

current:

bluefs.db_used_bytes: 9602859008
bluefs.db_used_bytes: 469434368

ceph daemon osd.149 config show

 "bluestore_bitmapallocator_span_size": "1024",
 "bluestore_block_db_size": "0",
 "bluestore_block_size": "107374182400",
 "bluestore_block_wal_size": "100663296",
 "bluestore_cache_size": "0",
 "bluestore_cache_size_hdd": "1073741824",
 "bluestore_cache_size_ssd": "3221225472",
 "bluestore_compression_max_blob_size": "0",
 "bluestore_compression_max_blob_size_hdd": "524288",
 "bluestore_compression_max_blob_size_ssd": "65536",
 "bluestore_compression_min_blob_size": "0",
 "bluestore_compression_min_blob_size_hdd": "131072",
 "bluestore_compression_min_blob_size_ssd": "8192",
 "bluestore_extent_map_inline_shard_prealloc_size": "256",
 "bluestore_extent_map_shard_max_size": "1200",
 "bluestore_extent_map_shard_min_size": "150",
 "bluestore_extent_map_shard_target_size": "500",
 "bluestore_extent_map_shard_target_size_slop": "0.20",
 "bluestore_max_alloc_size": "0",
 "bluestore_max_blob_size": "0",
 "bluestore_max_blob_size_hdd": "524288",
 "bluestore_max_blob_size_ssd": "65536",
 "bluestore_min_alloc_size": "0",
 "bluestore_min_alloc_size_hdd": "65536",
 "bluestore_min_alloc_size_ssd": "4096",
 "bluestore_prefer_deferred_size": "0",
 "bluestore_prefer_deferred_size_hdd": "32768",
 "bluestore_prefer_deferred_size_ssd": "0",
 "bluestore_rocksdb_options": 
"compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2",


 "bluefs_alloc_size": "1048576",
 "bluefs_allocator": "hybrid",
 "bluefs_buffered_io": "false",
 "bluefs_check_for_zeros": "false",
 "bluefs_compact_log_sync": "false",
 "bluefs_log_compact_min_ratio": "5.00",
 "bluefs_log_compact_min_size": "16777216",
 "bluefs_max_log_runway": "4194304",
 "bluefs_max_prefetch": "1048576",
 "bluefs_min_flush_size": "524288",
 "bluefs_min_log_runway": "1048576",
 "bluefs_preextend_wal_files": "false",
 "bluefs_replay_recovery": "false",
 "bluefs_replay_recovery_disable_compact": "false",
 "bluefs_shared_alloc_size": "65536",
 "bluefs_sync_write": "false",

With the OSD performance counters I cannot determine what is using the
memory...


thx & best regards


On 18.11.23 09:05, Eugen Block wrote:
Do you have a large block.db size defined in the ceph.conf (or 
config store)?


Zitat von Debian :

Thanks for your reply, it shows nothing... there are no PGs on the
OSD...


best regards

On 17.11.23 23:09, Eugen Block wrote:
After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘, it should 
show you which PGs are created there and then you’ll know which 
pool they belong to, then check again the crush rule for that 
pool. You can paste the outputs here.


Zitat von Debian :


Hi,

After a massive rebalance (tunables) my small SSD OSDs are getting
full. I changed my crush rules so there are actually no PGs/pools
on them, but the disks stay full:

ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)

ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
158 ssd   0.21999 1.0      224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49 0   up     osd.158

inferring bluefs devices from bluestore path
1 : device size 0x37e440 : own 0x[1ad3f0~23c60] = 0x23c60 : using 0x3963(918 MiB) : bluestore has 0x46e2d(18 GiB) available

When I recreate the OSD, it gets full again.

Any suggestions?

thx & best regards
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-20 Thread Eugen Block
You provide only a few details at a time; it would help to get a full
picture if you provided the output Wesley asked for (ceph df detail,
ceph tell osd.158 status, ceph osd df tree). Is osd.149 now the  
problematic one or did you just add output from a different osd?
It's not really clear what you're doing without the necessary context.  
You can just add the 'ceph daemon osd.{OSD} perf dump' output here or  
in some pastebin.
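
For convenience, the requested outputs could be collected in one go and pasted, e.g. (a rough sketch, using osd.158/osd.149 as discussed above):

    # note: 'ceph daemon ...' has to be run on the node hosting that OSD
    {
      ceph df detail
      ceph osd df tree
      ceph tell osd.158 status
      ceph daemon osd.149 perf dump
    } > ceph-diag.txt 2>&1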


Zitat von Debian :


Hi,

the block.db size is the default and not custom configured:

current:

bluefs.db_used_bytes: 9602859008
bluefs.db_used_bytes: 469434368

ceph daemon osd.149 config show

    "bluestore_bitmapallocator_span_size": "1024",
    "bluestore_block_db_size": "0",
    "bluestore_block_size": "107374182400",
    "bluestore_block_wal_size": "100663296",
    "bluestore_cache_size": "0",
    "bluestore_cache_size_hdd": "1073741824",
    "bluestore_cache_size_ssd": "3221225472",
    "bluestore_compression_max_blob_size": "0",
    "bluestore_compression_max_blob_size_hdd": "524288",
    "bluestore_compression_max_blob_size_ssd": "65536",
    "bluestore_compression_min_blob_size": "0",
    "bluestore_compression_min_blob_size_hdd": "131072",
    "bluestore_compression_min_blob_size_ssd": "8192",
    "bluestore_extent_map_inline_shard_prealloc_size": "256",
    "bluestore_extent_map_shard_max_size": "1200",
    "bluestore_extent_map_shard_min_size": "150",
    "bluestore_extent_map_shard_target_size": "500",
    "bluestore_extent_map_shard_target_size_slop": "0.20",
    "bluestore_max_alloc_size": "0",
    "bluestore_max_blob_size": "0",
    "bluestore_max_blob_size_hdd": "524288",
    "bluestore_max_blob_size_ssd": "65536",
    "bluestore_min_alloc_size": "0",
    "bluestore_min_alloc_size_hdd": "65536",
    "bluestore_min_alloc_size_ssd": "4096",
    "bluestore_prefer_deferred_size": "0",
    "bluestore_prefer_deferred_size_hdd": "32768",
    "bluestore_prefer_deferred_size_ssd": "0",
    "bluestore_rocksdb_options":  
"compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2",


    "bluefs_alloc_size": "1048576",
    "bluefs_allocator": "hybrid",
    "bluefs_buffered_io": "false",
    "bluefs_check_for_zeros": "false",
    "bluefs_compact_log_sync": "false",
    "bluefs_log_compact_min_ratio": "5.00",
    "bluefs_log_compact_min_size": "16777216",
    "bluefs_max_log_runway": "4194304",
    "bluefs_max_prefetch": "1048576",
    "bluefs_min_flush_size": "524288",
    "bluefs_min_log_runway": "1048576",
    "bluefs_preextend_wal_files": "false",
    "bluefs_replay_recovery": "false",
    "bluefs_replay_recovery_disable_compact": "false",
    "bluefs_shared_alloc_size": "65536",
    "bluefs_sync_write": "false",

With the OSD performance counters I cannot determine what is using
the memory...


thx & best regards


On 18.11.23 09:05, Eugen Block wrote:
Do you have a large block.db size defined in the ceph.conf (or  
config store)?


Zitat von Debian :


Thanks for your reply, it shows nothing... there are no PGs on the OSD...

best regards

On 17.11.23 23:09, Eugen Block wrote:
After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘, it  
should show you which PGs are created there and then you’ll know  
which pool they belong to, then check again the crush rule for  
that pool. You can paste the outputs here.


Zitat von Debian :


Hi,

After a massive rebalance (tunables) my small SSD OSDs are getting
full. I changed my crush rules so there are actually no PGs/pools
on them, but the disks stay full:

ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)

ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
158 ssd   0.21999 1.0      224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49 0   up     osd.158

inferring bluefs devices from bluestore path
1 : device size 0x37e440 : own 0x[1ad3f0~23c60] = 0x23c60 : using 0x3963(918 MiB) : bluestore has 0x46e2d(18 GiB) available

When I recreate the OSD, it gets full again.

Any suggestions?

thx & best regards
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-20 Thread Michal Strnad

Hi.

Take a look at the thread "Disks are filling up even if there is not a
single placement group on them" in this mailing list. Maybe you
have encountered the same problem as me.


Michal



On 11/20/23 08:56, Debian wrote:

Hi,

the block.db size is the default and not custom configured:

current:

bluefs.db_used_bytes: 9602859008
bluefs.db_used_bytes: 469434368

ceph daemon osd.149 config show

     "bluestore_bitmapallocator_span_size": "1024",
     "bluestore_block_db_size": "0",
     "bluestore_block_size": "107374182400",
     "bluestore_block_wal_size": "100663296",
     "bluestore_cache_size": "0",
     "bluestore_cache_size_hdd": "1073741824",
     "bluestore_cache_size_ssd": "3221225472",
     "bluestore_compression_max_blob_size": "0",
     "bluestore_compression_max_blob_size_hdd": "524288",
     "bluestore_compression_max_blob_size_ssd": "65536",
     "bluestore_compression_min_blob_size": "0",
     "bluestore_compression_min_blob_size_hdd": "131072",
     "bluestore_compression_min_blob_size_ssd": "8192",
     "bluestore_extent_map_inline_shard_prealloc_size": "256",
     "bluestore_extent_map_shard_max_size": "1200",
     "bluestore_extent_map_shard_min_size": "150",
     "bluestore_extent_map_shard_target_size": "500",
     "bluestore_extent_map_shard_target_size_slop": "0.20",
     "bluestore_max_alloc_size": "0",
     "bluestore_max_blob_size": "0",
     "bluestore_max_blob_size_hdd": "524288",
     "bluestore_max_blob_size_ssd": "65536",
     "bluestore_min_alloc_size": "0",
     "bluestore_min_alloc_size_hdd": "65536",
     "bluestore_min_alloc_size_ssd": "4096",
     "bluestore_prefer_deferred_size": "0",
     "bluestore_prefer_deferred_size_hdd": "32768",
     "bluestore_prefer_deferred_size_ssd": "0",
     "bluestore_rocksdb_options": 
"compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2",


     "bluefs_alloc_size": "1048576",
     "bluefs_allocator": "hybrid",
     "bluefs_buffered_io": "false",
     "bluefs_check_for_zeros": "false",
     "bluefs_compact_log_sync": "false",
     "bluefs_log_compact_min_ratio": "5.00",
     "bluefs_log_compact_min_size": "16777216",
     "bluefs_max_log_runway": "4194304",
     "bluefs_max_prefetch": "1048576",
     "bluefs_min_flush_size": "524288",
     "bluefs_min_log_runway": "1048576",
     "bluefs_preextend_wal_files": "false",
     "bluefs_replay_recovery": "false",
     "bluefs_replay_recovery_disable_compact": "false",
     "bluefs_shared_alloc_size": "65536",
     "bluefs_sync_write": "false",

With the OSD performance counters I cannot determine what is using the
memory...


thx & best regards


On 18.11.23 09:05, Eugen Block wrote:
Do you have a large block.db size defined in the ceph.conf (or config 
store)?


Zitat von Debian :


Thanks for your reply, it shows nothing... there are no PGs on the OSD...

best regards

On 17.11.23 23:09, Eugen Block wrote:
After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘, it should 
show you which PGs are created there and then you’ll know which pool 
they belong to, then check again the crush rule for that pool. You 
can paste the outputs here.


Zitat von Debian :


Hi,

After a massive rebalance (tunables) my small SSD OSDs are getting
full. I changed my crush rules so there are actually no PGs/pools
on them, but the disks stay full:

ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)

ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
158 ssd   0.21999 1.0      224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49 0   up     osd.158

inferring bluefs devices from bluestore path
1 : device size 0x37e440 : own 0x[1ad3f0~23c60] = 0x23c60 : using 0x3963(918 MiB) : bluestore has 0x46e2d(18 GiB) available

When I recreate the OSD, it gets full again.

Any suggestions?

thx & best regards
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


--
Michal Strnad
Oddeleni datovych ulozist
CESNET z.s.p.o.



[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-19 Thread Debian

Hi,

the block.db size is the default and not custom configured:

current:

bluefs.db_used_bytes: 9602859008
bluefs.db_used_bytes: 469434368

ceph daemon osd.149 config show

    "bluestore_bitmapallocator_span_size": "1024",
    "bluestore_block_db_size": "0",
    "bluestore_block_size": "107374182400",
    "bluestore_block_wal_size": "100663296",
    "bluestore_cache_size": "0",
    "bluestore_cache_size_hdd": "1073741824",
    "bluestore_cache_size_ssd": "3221225472",
    "bluestore_compression_max_blob_size": "0",
    "bluestore_compression_max_blob_size_hdd": "524288",
    "bluestore_compression_max_blob_size_ssd": "65536",
    "bluestore_compression_min_blob_size": "0",
    "bluestore_compression_min_blob_size_hdd": "131072",
    "bluestore_compression_min_blob_size_ssd": "8192",
    "bluestore_extent_map_inline_shard_prealloc_size": "256",
    "bluestore_extent_map_shard_max_size": "1200",
    "bluestore_extent_map_shard_min_size": "150",
    "bluestore_extent_map_shard_target_size": "500",
    "bluestore_extent_map_shard_target_size_slop": "0.20",
    "bluestore_max_alloc_size": "0",
    "bluestore_max_blob_size": "0",
    "bluestore_max_blob_size_hdd": "524288",
    "bluestore_max_blob_size_ssd": "65536",
    "bluestore_min_alloc_size": "0",
    "bluestore_min_alloc_size_hdd": "65536",
    "bluestore_min_alloc_size_ssd": "4096",
    "bluestore_prefer_deferred_size": "0",
    "bluestore_prefer_deferred_size_hdd": "32768",
    "bluestore_prefer_deferred_size_ssd": "0",
    "bluestore_rocksdb_options": 
"compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2",


    "bluefs_alloc_size": "1048576",
    "bluefs_allocator": "hybrid",
    "bluefs_buffered_io": "false",
    "bluefs_check_for_zeros": "false",
    "bluefs_compact_log_sync": "false",
    "bluefs_log_compact_min_ratio": "5.00",
    "bluefs_log_compact_min_size": "16777216",
    "bluefs_max_log_runway": "4194304",
    "bluefs_max_prefetch": "1048576",
    "bluefs_min_flush_size": "524288",
    "bluefs_min_log_runway": "1048576",
    "bluefs_preextend_wal_files": "false",
    "bluefs_replay_recovery": "false",
    "bluefs_replay_recovery_disable_compact": "false",
    "bluefs_shared_alloc_size": "65536",
    "bluefs_sync_write": "false",

With the OSD performance counters I cannot determine what is using the
memory...
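
(For what it's worth, a sketch of how the counters could be narrowed down - assuming jq is installed and that the section names are the same in your release; run on the node hosting the OSD:)

    ceph daemon osd.149 perf dump | jq '.bluefs'      # db_used_bytes, slow_used_bytes, log_bytes, ...
    ceph daemon osd.149 perf dump | jq '.bluestore'   # bluestore_allocated / bluestore_stored, etc.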


thx & best regards


On 18.11.23 09:05, Eugen Block wrote:
Do you have a large block.db size defined in the ceph.conf (or config 
store)?


Zitat von Debian :


Thanks for your reply, it shows nothing... there are no PGs on the OSD...

best regards

On 17.11.23 23:09, Eugen Block wrote:
After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘, it should 
show you which PGs are created there and then you’ll know which pool 
they belong to, then check again the crush rule for that pool. You 
can paste the outputs here.


Zitat von Debian :


Hi,

After a massive rebalance (tunables) my small SSD OSDs are getting
full. I changed my crush rules so there are actually no PGs/pools
on them, but the disks stay full:

ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)

ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
158 ssd   0.21999 1.0      224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49 0   up     osd.158

inferring bluefs devices from bluestore path
1 : device size 0x37e440 : own 0x[1ad3f0~23c60] = 0x23c60 : using 0x3963(918 MiB) : bluestore has 0x46e2d(18 GiB) available

When I recreate the OSD, it gets full again.

Any suggestions?

thx & best regards
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-18 Thread Anthony D'Atri
I was thinking the same thing.  Very small OSDs can behave unexpectedly because 
of the relatively high percentage of overhead.   
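
For context, the ratios below are the usual out-of-the-box defaults, and the arithmetic uses the osd df line quoted further down:

    ceph osd dump | grep ratio
    # typically: full_ratio 0.95, backfillfull_ratio 0.9, nearfull_ratio 0.85
    # osd.158: 194 GiB raw use on a 224 GiB device is ~86.6%, i.e. already past nearfull,
    # while the same absolute overhead on a multi-TB OSD would only be a few percent.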

> On Nov 18, 2023, at 3:08 AM, Eugen Block  wrote:
> 
> Do you have a large block.db size defined in the ceph.conf (or config store)?
> 
> Zitat von Debian :
> 
>> Thanks for your reply, it shows nothing... there are no PGs on the OSD...
>> 
>> best regards
>> 
>>> On 17.11.23 23:09, Eugen Block wrote:
>>> After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘, it should show you 
>>> which PGs are created there and then you’ll know which pool they belong to, 
>>> then check again the crush rule for that pool. You can paste the outputs 
>>> here.
>>> 
>>> Zitat von Debian :
>>> 
 Hi,
 
 After a massive rebalance (tunables) my small SSD OSDs are getting full. I
 changed my crush rules so there are actually no PGs/pools on them, but the
 disks stay full:

 ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)

 ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
 158 ssd   0.21999 1.0      224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49 0   up     osd.158

 inferring bluefs devices from bluestore path
 1 : device size 0x37e440 : own 0x[1ad3f0~23c60] = 0x23c60 : using 0x3963(918 MiB) : bluestore has 0x46e2d(18 GiB) available

 When I recreate the OSD, it gets full again.

 Any suggestions?
 
 thx & best regards
 ___
 ceph-users mailing list -- ceph-users@ceph.io
 To unsubscribe send an email to ceph-users-le...@ceph.io
>>> 
>>> 
>>> ___
>>> ceph-users mailing list -- ceph-users@ceph.io
>>> To unsubscribe send an email to ceph-users-le...@ceph.io
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
> 
> 
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-18 Thread Eugen Block

Do you have a large block.db size defined in the ceph.conf (or config store)?

Zitat von Debian :


Thanks for your reply, it shows nothing... there are no PGs on the OSD...

best regards

On 17.11.23 23:09, Eugen Block wrote:
After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘, it should  
show you which PGs are created there and then you’ll know which  
pool they belong to, then check again the crush rule for that pool.  
You can paste the outputs here.


Zitat von Debian :


Hi,

After a massive rebalance (tunables) my small SSD OSDs are getting
full. I changed my crush rules so there are actually no PGs/pools
on them, but the disks stay full:

ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)

ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
158 ssd   0.21999 1.0      224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49 0   up     osd.158

inferring bluefs devices from bluestore path
1 : device size 0x37e440 : own 0x[1ad3f0~23c60] = 0x23c60 : using 0x3963(918 MiB) : bluestore has 0x46e2d(18 GiB) available

When I recreate the OSD, it gets full again.

Any suggestions?

thx & best regards
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-17 Thread Wesley Dillingham
Please send along a pastebin of "ceph status", "ceph osd df tree" and
"ceph df detail", as well as the output of "ceph tell osd.158 status".

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 


On Fri, Nov 17, 2023 at 6:20 PM Debian  wrote:

> Thanks for your reply, it shows nothing... there are no PGs on the OSD...
>
> best regards
>
> On 17.11.23 23:09, Eugen Block wrote:
> > After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘, it should
> > show you which PGs are created there and then you’ll know which pool
> > they belong to, then check again the crush rule for that pool. You can
> > paste the outputs here.
> >
> > Zitat von Debian :
> >
> >> Hi,
> >>
> >> After a massive rebalance (tunables) my small SSD OSDs are getting
> >> full. I changed my crush rules so there are actually no PGs/pools
> >> on them, but the disks stay full:
> >>
> >> ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)
> >>
> >> ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
> >> 158 ssd   0.21999 1.0      224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49 0   up     osd.158
> >>
> >> inferring bluefs devices from bluestore path
> >> 1 : device size 0x37e440 : own 0x[1ad3f0~23c60] = 0x23c60 : using 0x3963(918 MiB) : bluestore has 0x46e2d(18 GiB) available
> >>
> >> When I recreate the OSD, it gets full again.
> >>
> >> Any suggestions?
> >>
> >> thx & best regards
> >> ___
> >> ceph-users mailing list -- ceph-users@ceph.io
> >> To unsubscribe send an email to ceph-users-le...@ceph.io
> >
> >
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-17 Thread Debian

Thanks for your reply, it shows nothing... there are no PGs on the OSD...

best regards

On 17.11.23 23:09, Eugen Block wrote:
After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘, it should 
show you which PGs are created there and then you’ll know which pool 
they belong to, then check again the crush rule for that pool. You can 
paste the outputs here.


Zitat von Debian :


Hi,

After a massive rebalance (tunables) my small SSD OSDs are getting
full. I changed my crush rules so there are actually no PGs/pools
on them, but the disks stay full:

ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)

ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
158 ssd   0.21999 1.0      224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49 0   up     osd.158

inferring bluefs devices from bluestore path
1 : device size 0x37e440 : own 0x[1ad3f0~23c60] = 0x23c60 : using 0x3963(918 MiB) : bluestore has 0x46e2d(18 GiB) available

When I recreate the OSD, it gets full again.

Any suggestions?

thx & best regards
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: blustore osd nearfull but no pgs on it

2023-11-17 Thread Eugen Block
After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘, it should  
show you which PGs are created there and then you’ll know which pool  
they belong to, then check again the crush rule for that pool. You can  
paste the outputs here.
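
Something along these lines (sketch only; <pool> and <rule> are placeholders for whatever the first two commands report):

    ceph pg ls-by-osd 158                  # lists the PGs on that OSD; the pool id is the part of the PG id before the dot
    ceph osd pool get <pool> crush_rule    # which crush rule the pool uses
    ceph osd crush rule dump <rule>        # what that rule actually selects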


Zitat von Debian :


Hi,

After a massive rebalance (tunables) my small SSD OSDs are getting
full. I changed my crush rules so there are actually no PGs/pools
on them, but the disks stay full:

ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable)

ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP   META     AVAIL  %USE  VAR  PGS STATUS TYPE NAME
158 ssd   0.21999 1.0      224 GiB 194 GiB 193 GiB 22 MiB 1002 MiB 30 GiB 86.68 1.49 0   up     osd.158

inferring bluefs devices from bluestore path
1 : device size 0x37e440 : own 0x[1ad3f0~23c60] = 0x23c60 : using 0x3963(918 MiB) : bluestore has 0x46e2d(18 GiB) available

When I recreate the OSD, it gets full again.

Any suggestions?

thx & best regards
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io