Hi Igor

The reason why I tested different RocksDB options is that I was having
really bad write performance with the default settings (30-60 MB/s) on the
cluster...

Currently I have 200 MB/s read and 180 MB/s write performance.

Now I don't know which of the two settings is the good one.

Another question:
which of the two can you recommend?

https://gist.github.com/likid0/1b52631ff5d0d649a22a3f30106ccea7
bluestore rocksdb options =
compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,compaction_style=kCompactionStyleLevel,write_buffer_size=67108864,target_file_size_base=67108864,max_background_compactions=31,level0_file_num_compaction_trigger=8,level0_slowdown_writes_trigger=32,level0_stop_writes_trigger=64,max_bytes_for_level_base=536870912,compaction_threads=32,max_bytes_for_level_multiplier=8,flusher_threads=8,compaction_readahead_size=2MB

https://yourcmc.ru/wiki/Ceph_performance
bluestore_rocksdb_options =
compression=kNoCompression,max_write_buffer_number=64,min_write_buffer_number_to_merge=32,recycle_log_file_num=64,compaction_style=kCompactionStyleLevel,write_buffer_size=4MB,target_file_size_base=4MB,max_background_compactions=64,level0_file_num_compaction_trigger=64,level0_slowdown_writes_trigger=128,level0_stop_writes_trigger=256,max_bytes_for_level_base=6GB,compaction_threads=32,flusher_threads=8,compaction_readahead_size=2MB
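
To see which of the two a given OSD is actually running with, I am checking it
roughly like this (assuming the cephadm setup we have; osd.5 is only an example ID):

ceph config get osd.5 bluestore_rocksdb_options     # value stored in the config database
ceph config show osd.5 bluestore_rocksdb_options    # value the running daemon reports

and, as a cross-check, grepping the OSD startup log for the line
"_open_db opened rocksdb path db options", which prints the string BlueStore really parsed.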



On Tue, 12 Oct 2021 at 12:35, José H. Freidhof <harald.freid...@googlemail.com> wrote:

> Hi Igor,
>
> Thanks for checking the logs... but what the hell is going on here? :-)
> Yes, it's true, I tested and created the OSDs with three
> different RocksDB options.
> I cannot understand why the OSDs don't all have the same RocksDB options,
> because I created ALL OSDs anew after setting and testing those options.
>
> Maybe I am doing something wrong with the re-deployment of the OSDs?
> What I do:
> ceph osd out osd.x
> ceph osd down osd.x
> systemctl stop ceph-osd@x
> ceph osd rm osd.x
> ceph osd crush rm osd.x
> ceph auth del osd.x
> ceph-volume lvm zap --destroy /dev/ceph-block-0/block-0 (lvm hdd partition)
> ceph-volume lvm zap --destroy /dev/ceph-db-0/db-0 (lvm ssd partition)
> ceph-volume lvm zap --destroy /dev/ceph-wal-0/wal-db-0 (lvm nvme
> partition)
> ...
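>
> (If I understand the docs right, the rm / crush rm / auth del steps can also be
> collapsed into a single purge once the OSD is stopped and out, e.g.:
> ceph osd purge osd.x --yes-i-really-mean-it
> followed by the same ceph-volume zap calls as above. Not sure if that changes
> anything for my problem, though.)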
>
> Later I recreate the OSDs with:
> cephadm shell -m /var/lib/ceph
> ceph auth export client.bootstrap-osd
> vi /var/lib/ceph/bootstrap-osd/ceph.keyring
> ceph-volume lvm prepare --no-systemd --bluestore --data
> ceph-block-4/block-4 --block.wal ceph-wal-0/waldb-4 --block.db
> ceph-db-0/db-4
> cp -r /var/lib/ceph/osd /mnt/ceph/
> Exit the shell in the container.
> cephadm --image ceph/ceph:v16.2.5 adopt --style legacy --name osd.X
> systemctl start ceph-462c44b4-eed6-11eb-8b2c-a1ad45f88...@osd.xx.service
>
>
> Igor, one question:
> is there actually an easier way to recreate the OSDs? Maybe via the
> dashboard?
> Can you recommend something?
>
> I have no problem creating the OSDs on the nodes again, but I need to be
> sure that no old settings stay on the OSDs.
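>
> For a plain single-device OSD I guess something like
> ceph orch daemon add osd cd88-ceph-osdh-01:/dev/sdX
> would be the short way, but I am not sure how to express the separate
> block/db/wal LVs that way, which is why I kept the ceph-volume procedure above.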
>
>
>
> On Tue, 12 Oct 2021 at 12:03, Igor Fedotov <igor.fedo...@croit.io> wrote:
>
>> Hey Jose,
>>
>> your rocksdb settings are still different from the default ones.
>>
>> These are options you shared originally:
>>
>>
>> compression=kNoCompression,max_write_buffer_number=64,min_write_buffer_number_to_merge=32,recycle_log_file_num=64,compaction_style=kCompactionStyleLevel,write_buffer_size=4MB,target_file_size_base=4MB,max_background_compactions=64,level0_file_num_compaction_trigger=64,level0_slowdown_writes_trigger=128,level0_stop_writes_trigger=256,max_bytes_for_level_base=6GB,compaction_threads=32,flusher_threads=8,compaction_readahead_size=2MB
>>
>> These are the ones I could find in the osd.5 startup log; note e.g.
>> max_write_buffer_number:
>>
>> Oct 12 09:09:30 cd88-ceph-osdh-01 bash[1572206]: debug
>> 2021-10-12T07:09:30.686+0000 7f16d24a0080  1
>> bluestore(/var/lib/ceph/osd/ceph-5) _open_db opened rocksdb path db options
>> compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,compaction_style=kCompactionStyleLevel,write_buffer_size=67108864,target_file_size_base=67108864,max_background_compactions=31,level0_file_num_compaction_trigger=8,level0_slowdown_writes_trigger=32,level0_stop_writes_trigger=64,max_bytes_for_level_base=536870912,compaction_threads=32,max_bytes_for_level_multiplier=8,flusher_threads=8,compaction_readahead_size=2MB
>>
>> And here are the ones I'd expect as defaults - again please note
>> max_write_buffer_number:
>>
>>
>> compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2,max_total_wal_size=1073741824
>>
>>
>> And here is the source code for v16.2.5 where the expected default line
>> comes from:
>>
>>
>> https://github.com/ceph/ceph/blob/0883bdea7337b95e4b611c768c0279868462204a/src/common/options.cc#L4644
>>
>>
>> Not that I'm absolutely sure this is the actual root cause, but I'd
>> suggest reverting back to the baseline prior to proceeding with the
>> troubleshooting...
>>
>> So please adjust properly and restart OSDs!!! Hopefully it wouldn't need
>> a redeployment...
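>>
>> Assuming the override lives in the config database (not only in a local ceph.conf),
>> reverting should roughly boil down to:
>>
>> ceph config rm osd bluestore_rocksdb_options
>> ceph orch daemon restart osd.<id>     # repeat per OSD, or restart the osd service
>>
>> If the option is also set in ceph.conf on the hosts, remove it there too before restarting.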
>>
>>
>> As for https://tracker.ceph.com/issues/50656 - it's irrelevant to your
>> case. It was unexpected ENOSPC result from an allocator which still had
>> enough free space. But in your case bluefs allocator doesn't have free
>> space at all as the latter is totally wasted by tons of WAL files.
>>
>>
>> Thanks,
>>
>> Igor
>>
>>
>>
>> On 10/12/2021 10:51 AM, José H. Freidhof wrote:
>>
>> Hello Igor
>>
>> "Does a single OSD startup (after it's experiencing "unable to allocate")
>> take 20 mins as well?"
>> A: YES
>>
>> Here is an example log of the startup and recovery of a problematic OSD:
>> https://paste.ubuntu.com/p/2WVJbg7cBy/
>>
>> Here is an example log of a problematic OSD:
>> https://paste.ubuntu.com/p/qbB6y7663f/
>>
>> I found this post about a similar error and a bug in 16.2.4... we are
>> running 16.2.5... maybe the bug is not really fixed???
>> https://tracker.ceph.com/issues/50656
>> https://forum.proxmox.com/threads/ceph-16-2-pacific-cluster-crash.92367/
>>
>>
>>
>> On Mon, 11 Oct 2021 at 11:53, Igor Fedotov <igor.fedo...@croit.io> wrote:
>>
>>> hmm... so it looks like RocksDB still doesn't perform WAL cleanup during
>>> regular operation but applies it on OSD startup....
>>>
>>> Does a single OSD startup (after it's experiencing "unable to allocate")
>>> take 20 mins as well?
>>>
>>> Could you please share an OSD log containing both that long startup and
>>> the following (e.g. 1+ hour of) regular operation?
>>>
>>> Preferably for osd.2 (or whichever one has been using the default
>>> settings since deployment).
>>>
>>>
>>> Thanks,
>>>
>>> Igor
>>>
>>>
>>> On 10/9/2021 12:18 AM, José H. Freidhof wrote:
>>>
>>> Hi Igor,
>>>
>>> "And was osd.2 redeployed AFTER settings had been reset to defaults ?"
>>> A: YES
>>>
>>> "Anything particular about current cluster use cases?"
>>> A: We are using it temporarily as an iSCSI target for a VMware ESXi cluster
>>> with 6 hosts. We created two 10 TB iSCSI images/LUNs for VMware, because the
>>> other datastores are at 90%.
>>> In the future, once Ceph is working right and stable, we plan to
>>> install OpenStack and KVM, and we want to convert all VMs into RBD images.
>>> Like I told you, it is a cluster of three OSD nodes with 32 cores and 256 GB RAM
>>> and two 10G bonded network cards on a 10G network.
>>>
>>> "E.g. is it a sort of regular usage (with load fluctuations and peaks) or
>>> maybe some permanently running stress load testing? The latter might tend to
>>> hold the resources and e.g. prevent internal housekeeping..."
>>> A: It's a SAN for VMware and there are 43 VMs running at the moment... during
>>> the daytime there is more stress on the disks because people are working, and
>>> in the afternoon the IOPS go down because the users are at home.
>>> Nothing speculative...
>>>
>>> There is something else that I noticed... if I reboot one OSD node with
>>> 20 OSDs, then it takes 20 min for them to come up... if I tail the logs of the
>>> OSDs I can see a lot of "recovery log mode 2" on all OSDs.
>>> After the 20 min the OSDs come up one after another, the WAL DBs are small,
>>> and there is no error in the logs about "bluefs _allocate unable to allocate"...
>>>
>>> It seems that the problem shows up again after a longer time (12 h).
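>>>
>>> (To keep an eye on this I am checking the WAL/DB usage per OSD roughly like this:
>>> ceph daemon osd.<id> bluefs stats
>>> ceph daemon osd.<id> bluestore bluefs device info
>>> and watching whether the WAL keeps growing before the error shows up again.)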
>>>
>>>
>>> On Fri, 8 Oct 2021 at 15:24, Igor Fedotov <igor.fedo...@croit.io> wrote:
>>>
>>>> And was osd.2 redeployed AFTER settings had been reset to defaults ?
>>>>
>>>> Anything particular about current cluster use cases?
>>>>
>>>> E.g. is it a sort of regular usage (with load fluctuations and peaks) or
>>>> maybe some permanently running stress load testing? The latter might tend to
>>>> hold the resources and e.g. prevent internal housekeeping...
>>>>
>>>> Igor
>>>>
>>>>
>>>> On 10/8/2021 12:16 AM, José H. Freidhof wrote:
>>>>
>>>> Hi Igor,
>>>>
>>>> Yes, the same problem is on osd.2.
>>>>
>>>> We have 3 OSD nodes... each node has 20 BlueStore OSDs... in total we
>>>> have 60 OSDs.
>>>> I checked one node just now... and 15 of the 20 OSDs have this problem and
>>>> this error in the log.
>>>>
>>>> The settings that you complained about some emails ago... I have
>>>> reverted them to the defaults.
>>>>
>>>> ceph.conf file:
>>>>
>>>> [global]
>>>>         fsid = 462c44b4-eed6-11eb-8b2c-a1ad45f88a97
>>>>         mon_host = [v2:10.50.50.21:3300/0,v1:10.50.50.21:6789/0] [v2:
>>>> 10.50.50.22:3300/0,v1:10.50.50.22:6789/0] [v2:
>>>> 10.50.50.20:3300/0,v1:10.50.50.20:6789/0]
>>>>         log file = /var/log/ceph/$cluster-$type-$id.log
>>>>         max open files = 131072
>>>>         mon compact on trim = False
>>>>         osd deep scrub interval = 137438953472
>>>>         osd max scrubs = 16
>>>>         osd objectstore = bluestore
>>>>         osd op threads = 2
>>>>         osd scrub load threshold = 0.01
>>>>         osd scrub max interval = 137438953472
>>>>         osd scrub min interval = 137438953472
>>>>         perf = True
>>>>         rbd readahead disable after bytes = 0
>>>>         rbd readahead max bytes = 4194304
>>>>         throttler perf counter = False
>>>>
>>>> [client]
>>>>         rbd cache = False
>>>>
>>>>
>>>> [mon]
>>>>         mon health preluminous compat = True
>>>>         mon osd down out interval = 300
>>>>
>>>> [osd]
>>>>         bluestore cache autotune = 0
>>>>         bluestore cache kv ratio = 0.2
>>>>         bluestore cache meta ratio = 0.8
>>>>         bluestore extent map shard max size = 200
>>>>         bluestore extent map shard min size = 50
>>>>         bluestore extent map shard target size = 100
>>>>         bluestore rocksdb options =
>>>> compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,compaction_style=kCompactionStyleLevel,write_buffer_size=67108864,target_file_size_base=67108864,max_background_compactions=31,level0_file_num_compaction_trigger=8,level0_slowdown_writes_trigger=32,level0_stop_writes_trigger=64,max_bytes_for_level_base=536870912,compaction_threads=32,max_bytes_for_level_multiplier=8,flusher_threads=8,compaction_readahead_size=2MB
>>>>         osd map share max epochs = 100
>>>>         osd max backfills = 5
>>>>         osd op num shards = 8
>>>>         osd op num threads per shard = 2
>>>>         osd min pg log entries = 10
>>>>         osd max pg log entries = 10
>>>>         osd pg log dups tracked = 10
>>>>         osd pg log trim min = 10
>>>>
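>>>>
>>>> (The same option is also present in the config database, so to see what a
>>>> running OSD actually ends up using I check e.g.:
>>>> ceph config show osd.8 bluestore_rocksdb_options
>>>> which should report the effective value for that daemon.)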
>>>>
>>>>
>>>> root@cd133-ceph-osdh-01:~# ceph config dump
>>>> WHO     MASK                      LEVEL     OPTION                                       VALUE                                        RO
>>>> global                            basic     container_image                              docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb  *
>>>> global                            advanced  leveldb_max_open_files                       131072
>>>> global                            advanced  mon_compact_on_trim                          false
>>>> global                            dev       ms_crc_data                                  false
>>>> global                            advanced  osd_deep_scrub_interval                      1209600.000000
>>>> global                            advanced  osd_max_scrubs                               16
>>>> global                            advanced  osd_scrub_load_threshold                     0.010000
>>>> global                            advanced  osd_scrub_max_interval                       1209600.000000
>>>> global                            advanced  osd_scrub_min_interval                       86400.000000
>>>> global                            advanced  perf                                         true
>>>> global                            advanced  rbd_readahead_disable_after_bytes            0
>>>> global                            advanced  rbd_readahead_max_bytes                      4194304
>>>> global                            advanced  throttler_perf_counter                       false
>>>> mon                               advanced  auth_allow_insecure_global_id_reclaim        false
>>>> mon                               advanced  cluster_network                              10.50.50.0/24                                *
>>>> mon                               advanced  mon_osd_down_out_interval                    300
>>>> mon                               advanced  public_network                               10.50.50.0/24                                *
>>>> mgr                               advanced  mgr/cephadm/container_init                   True                                         *
>>>> mgr                               advanced  mgr/cephadm/device_enhanced_scan             true                                         *
>>>> mgr                               advanced  mgr/cephadm/migration_current                2                                            *
>>>> mgr                               advanced  mgr/cephadm/warn_on_stray_daemons            false                                        *
>>>> mgr                               advanced  mgr/cephadm/warn_on_stray_hosts              false                                        *
>>>> mgr                               advanced  mgr/dashboard/10.50.50.21/server_addr                                                     *
>>>> mgr                               advanced  mgr/dashboard/camdatadash/ssl_server_port    8443                                         *
>>>> mgr                               advanced  mgr/dashboard/cd133-ceph-mon-01/server_addr                                               *
>>>> mgr                               advanced  mgr/dashboard/dasboard/server_port           80                                           *
>>>> mgr                               advanced  mgr/dashboard/dashboard/server_addr          10.251.133.161                               *
>>>> mgr                               advanced  mgr/dashboard/dashboard/ssl_server_port      8443                                         *
>>>> mgr                               advanced  mgr/dashboard/server_addr                    0.0.0.0                                      *
>>>> mgr                               advanced  mgr/dashboard/server_port                    8080                                         *
>>>> mgr                               advanced  mgr/dashboard/ssl                            false                                        *
>>>> mgr                               advanced  mgr/dashboard/ssl_server_port                8443                                         *
>>>> mgr                               advanced  mgr/orchestrator/orchestrator                cephadm
>>>> mgr                               advanced  mgr/prometheus/server_addr                   0.0.0.0                                      *
>>>> mgr                               advanced  mgr/telemetry/channel_ident                  true                                         *
>>>> mgr                               advanced  mgr/telemetry/enabled                        true                                         *
>>>> mgr                               advanced  mgr/telemetry/last_opt_revision              3                                            *
>>>> osd                               dev       bluestore_cache_autotune                     true
>>>> osd                               dev       bluestore_cache_kv_ratio                     0.200000
>>>> osd                               dev       bluestore_cache_meta_ratio                   0.800000
>>>> osd                               dev       bluestore_cache_size                         2147483648
>>>> osd                               dev       bluestore_cache_size_hdd                     2147483648
>>>> osd                               dev       bluestore_extent_map_shard_max_size          200
>>>> osd                               dev       bluestore_extent_map_shard_min_size          50
>>>> osd                               dev       bluestore_extent_map_shard_target_size       100
>>>> osd                               advanced  bluestore_rocksdb_options                    compression=kNoCompression,max_write_buffer_number=64,min_write_buffer_number_to_merge=32,recycle_log_file_num=64,compaction_style=kCompactionStyleLevel,write_buffer_size=4MB,target_file_size_base=4MB,max_background_compactions=64,level0_file_num_compaction_trigger=64,level0_slowdown_writes_trigger=128,level0_stop_writes_trigger=256,max_bytes_for_level_base=6GB,compaction_threads=32,flusher_threads=8,compaction_readahead_size=2MB  *
>>>> osd                               advanced  mon_osd_cache_size                           1024
>>>> osd                               dev       ms_crc_data                                  false
>>>> osd                               advanced  osd_map_share_max_epochs                     5
>>>> osd                               advanced  osd_max_backfills                            1
>>>> osd                               dev       osd_max_pg_log_entries                       10
>>>> osd                               dev       osd_memory_cache_min                         3000000000
>>>> osd     host:cd133-ceph-osdh-01   basic     osd_memory_target                            5797322096
>>>> osd     host:cd133k-ceph-osdh-01  basic     osd_memory_target                            9402402385
>>>> osd     host:cd88-ceph-osdh-01    basic     osd_memory_target                            5797322096
>>>> osd                               advanced  osd_memory_target_autotune                   true
>>>> osd                               dev       osd_min_pg_log_entries                       10
>>>> osd                               advanced  osd_op_num_shards                            8                                            *
>>>> osd                               advanced  osd_op_num_threads_per_shard                 2                                            *
>>>> osd                               dev       osd_pg_log_dups_tracked                      10
>>>> osd                               dev       osd_pg_log_trim_min                          10
>>>> osd                               advanced  osd_recovery_max_active                      3
>>>> osd                               advanced  osd_recovery_max_single_start                1
>>>> osd                               advanced  osd_recovery_sleep                           0.000000
>>>> client                            advanced  rbd_cache                                    false
>>>>
>>>>
>>>> On Thu, 7 Oct 2021 at 19:27, Igor Fedotov <igor.fedo...@croit.io> wrote:
>>>>
>>>>> And does the redeployed osd.2 expose the same issue (or at least the
>>>>> DB/WAL imbalance) again? Were the settings reverted to defaults for it as well?
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> Igor
>>>>> On 10/7/2021 12:46 PM, José H. Freidhof wrote:
>>>>>
>>>>> Good morning,
>>>>>
>>>>> I checked osd.8 today and the log again shows the same error:
>>>>> bluefs _allocate unable to allocate 0x100000 on bdev 0, allocator name
>>>>> bluefs-wal, allocator type hybrid, capacity 0xb40000000, block size
>>>>> 0x100000, free 0xff000, fragmentation 0, allocated 0x0
>>>>>
>>>>> Any idea why that could be?
>>>>>
>>>>> On Wed, 6 Oct 2021 at 22:23, José H. Freidhof <harald.freid...@googlemail.com> wrote:
>>>>>
>>>>>> Hi Igor,
>>>>>>
>>>>>> today I repaired one OSD node and all OSDs on that node, creating
>>>>>> them anew....
>>>>>> After that I waited for the rebalance/recovery process, and the
>>>>>> cluster was healthy again after some hours.
>>>>>>
>>>>>> I noticed that osd.2 no longer has this error in the log,
>>>>>> but I now see it on osd.8 on the same node... so I did the test
>>>>>> that you suggested on osd.8.
>>>>>>
>>>>>> It took nearly 20 minutes to compact those DBs on the BlueStore, but
>>>>>> it helped... the problem on osd.8 is gone...
>>>>>>
>>>>>>
>>>>>> So the problem that I have with the allocation on the WAL device seems to
>>>>>> be random across different nodes and OSDs, and it looks like it comes,
>>>>>> stays a while, and disappears again after a longer while...
>>>>>>
>>>>>> Here are the results that you suggested:
>>>>>>
>>>>>> root@cd88-ceph-osdh-01:/# ceph daemon osd.8 bluestore bluefs device
>>>>>> info
>>>>>> {
>>>>>>     "dev": {
>>>>>>         "device": "BDEV_WAL",
>>>>>>         "total": 48318377984,
>>>>>>         "free": 1044480,
>>>>>>         "bluefs_used": 48317333504
>>>>>>     },
>>>>>>     "dev": {
>>>>>>         "device": "BDEV_DB",
>>>>>>         "total": 187904811008,
>>>>>>         "free": 79842762752,
>>>>>>         "bluefs_used": 108062048256
>>>>>>     },
>>>>>>     "dev": {
>>>>>>         "device": "BDEV_SLOW",
>>>>>>         "total": 6001172414464,
>>>>>>         "free": 5510727389184,
>>>>>>         "bluefs_used": 0,
>>>>>>         "bluefs max available": 5508815847424
>>>>>>     }
>>>>>> }
>>>>>> root@cd88-ceph-osdh-01:/# ceph daemon osd.8 bluefs stats
>>>>>> 0 : device size 0xb3ffff000 : using 0xb3ff00000(45 GiB)
>>>>>> 1 : device size 0x2bbfffe000 : using 0x1931500000(101 GiB)
>>>>>> 2 : device size 0x57541c00000 : using 0x7235e3e000(457 GiB)
>>>>>> RocksDBBlueFSVolumeSelector: wal_total:45902462976, db_total:178509578240, slow_total:5701113793740, db_avail:103884521472
>>>>>> Usage matrix:
>>>>>> DEV/LEV     WAL         DB          SLOW        *           *           REAL        FILES
>>>>>> LOG         304 MiB     7.9 GiB     0 B         0 B         0 B         9.7 MiB     1
>>>>>> WAL         45 GiB      100 GiB     0 B         0 B         0 B         144 GiB     2319
>>>>>> DB          0 B         276 MiB     0 B         0 B         0 B         249 MiB     47
>>>>>> SLOW        0 B         0 B         0 B         0 B         0 B         0 B         0
>>>>>> TOTALS      45 GiB      109 GiB     0 B         0 B         0 B         0 B         2367
>>>>>> MAXIMUMS:
>>>>>> LOG         304 MiB     7.9 GiB     0 B         0 B         0 B         20 MiB
>>>>>> WAL         45 GiB      149 GiB     0 B         0 B         0 B         192 GiB
>>>>>> DB          0 B         762 MiB     0 B         0 B         0 B         738 MiB
>>>>>> SLOW        0 B         0 B         0 B         0 B         0 B         0 B
>>>>>> TOTALS      45 GiB      150 GiB     0 B         0 B         0 B         0 B
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.464+0000 7f4a9483a700  1 bluefs _allocate unable to
>>>>>> allocate 0x400000 on bdev 0, allocator name bluefs-wal, allocator type
>>>>>> hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000,
>>>>>> fragmentation 0, allocated 0x0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.472+0000 7f4a9483a700  1 bluefs _allocate unable to
>>>>>> allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type
>>>>>> hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000,
>>>>>> fragmentation 0, allocated 0x0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.480+0000 7f4a9483a700  1 bluefs _allocate unable to
>>>>>> allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type
>>>>>> hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000,
>>>>>> fragmentation 0, allocated 0x0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.500+0000 7f4a9483a700  1 bluefs _allocate unable to
>>>>>> allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type
>>>>>> hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000,
>>>>>> fragmentation 0, allocated 0x0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.576+0000 7f4a9483a700  1 bluefs _allocate unable to
>>>>>> allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type
>>>>>> hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000,
>>>>>> fragmentation 0, allocated 0x0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.624+0000 7f4a9483a700  1 bluefs _allocate unable to
>>>>>> allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type
>>>>>> hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000,
>>>>>> fragmentation 0, allocated 0x0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.636+0000 7f4a9483a700  1 bluefs _allocate unable to
>>>>>> allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type
>>>>>> hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000,
>>>>>> fragmentation 0, allocated 0x0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.884+0000 7f4a9483a700  1 bluefs _allocate unable to
>>>>>> allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type
>>>>>> hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000,
>>>>>> fragmentation 0, allocated 0x0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.968+0000 7f4a9483a700  1 bluefs _allocate unable to
>>>>>> allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type
>>>>>> hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000,
>>>>>> fragmentation 0, allocated 0x0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.992+0000 7f4a9483a700  4 rocksdb:
>>>>>> [db_impl/db_impl_write.cc:1668] [L] New memtable created with log file:
>>>>>> #13656. Immutable memtables: 1.
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.992+0000 7f4a9483a700  1 bluefs _allocate unable to
>>>>>> allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type
>>>>>> hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000,
>>>>>> fragmentation 0, allocated 0x0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.992+0000 7f4a9483a700  1 bluefs _allocate unable to
>>>>>> allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type
>>>>>> hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000,
>>>>>> fragmentation 0, allocated 0x0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.996+0000 7f4aab067700  4 rocksdb: (Original Log Time
>>>>>> 2021/10/06-19:51:34.996331) [db_impl/db_impl_compaction_flush.cc:2198]
>>>>>> Calling FlushMemTableToOutputFile with column family [L], flush slots
>>>>>> available 1, compaction slots available 1, flush slots scheduled 1,
>>>>>> compaction slots scheduled 0
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.996+0000 7f4aab067700  4 rocksdb: [flush_job.cc:321]
>>>>>> [L] [JOB 8859] Flushing memtable with next log file: 13655
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.996+0000 7f4aab067700  4 rocksdb: [flush_job.cc:321]
>>>>>> [L] [JOB 8859] Flushing memtable with next log file: 13656
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.996+0000 7f4aab067700  4 rocksdb: EVENT_LOG_v1
>>>>>> {"time_micros": 1633549894998273, "job": 8859, "event": "flush_started",
>>>>>> "num_memtables": 2, "num_entries": 3662, "num_deletes": 0,
>>>>>> "total_data_size": 130482337, "memory_usage": 132976224, "flush_reason":
>>>>>> "Write Buffer Full"}
>>>>>> Oct 06 21:51:34 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:34.996+0000 7f4aab067700  4 rocksdb: [flush_job.cc:350]
>>>>>> [L] [JOB 8859] Level-0 flush table #13657: started
>>>>>> Oct 06 21:51:35 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:35.004+0000 7f4aab067700  4 rocksdb: EVENT_LOG_v1
>>>>>> {"time_micros": 1633549895008271, "cf_name": "L", "job": 8859, "event":
>>>>>> "table_file_creation", "file_number": 13657, "file_size": 2952537,
>>>>>> "table_properties": {"data_size": 2951222, "index_size": 267,
>>>>>> "index_partitions": 0, "top_level_index_size": 0, 
>>>>>> "index_key_is_user_key":
>>>>>> 0, "index_value_is_delta_encoded": 0, "filter_size": 197, "raw_key_size":
>>>>>> 1120, "raw_average_key_size": 16, "raw_value_size": 2950151,
>>>>>> "raw_average_value_size": 42145, "num_data_blocks": 9, "num_entries": 70,
>>>>>> "num_deletions": 61, "num_merge_operands": 0, "num_range_deletions": 0,
>>>>>> "format_version": 0, "fixed_key_len": 0, "filter_policy":
>>>>>> "rocksdb.BuiltinBloomFilter", "column_family_name": "L",
>>>>>> "column_family_id": 10, "comparator": "leveldb.BytewiseComparator",
>>>>>> "merge_operator": "nullptr", "prefix_extractor_name": "nullptr",
>>>>>> "property_collectors": "[]", "compression": "NoCompression",
>>>>>> "compression_options": "window_bits=-14; level=32767; strategy=0;
>>>>>> max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time":
>>>>>> 1633549854, "oldest_key_time": 1633549854, "file_creation_time":
>>>>>> 1633549894}}
>>>>>> Oct 06 21:51:35 cd88-ceph-osdh-01 bash[6328]: debug
>>>>>> 2021-10-06T19:51:35.004+0000 7f4aab067700  4 rocksdb: [flush_job.cc:401]
>>>>>> [L] [JOB 8859] Level-0 flush table #13657: 2952537 bytes OK
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> root@cd88-ceph-osdh-01:~# ceph osd set noout
>>>>>> root@cd88-ceph-osdh-01:~# ceph orch daemon stop osd.8
>>>>>> root@cd88-ceph-osdh-01:~# ceph orch ps
>>>>>> ...
>>>>>> osd.7    cd133-ceph-osdh-01     running (4h)   44s ago  -  2738M  5528M  16.2.5     6933c2a0b7dd  8a98ae61f0eb
>>>>>> osd.8    cd88-ceph-osdh-01      stopped         5s ago  -      -  5528M  <unknown>  <unknown>     <unknown>
>>>>>> osd.9    cd133k-ceph-osdh-01    running (3d)    5m ago  -  4673M  8966M  16.2.5     6933c2a0b7dd  0ff7584b1808
>>>>>> ...
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> root@cd88-ceph-osdh-01:~# ceph-kvstore-tool bluestore-kv
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/ compact
>>>>>> 2021-10-06T21:53:50.559+0200 7f87bde3c240  0
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _open_db_and_around read-only:0 repair:0
>>>>>> 2021-10-06T21:53:50.559+0200 7f87bde3c240  1 bdev(0x5644f056c800
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block) open 
>>>>>> path
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bdev(0x5644f056c800
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block) open 
>>>>>> size
>>>>>> 6001172414464 (0x57541c00000, 5.5 TiB) block_size 4096 (4 KiB) rotational
>>>>>> discard not supported
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bdev(0x5644f056cc00
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.db) open
>>>>>> path /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.db
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bdev(0x5644f056cc00
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.db) open
>>>>>> size 187904819200 (0x2bc0000000, 175 GiB) block_size 4096 (4 KiB)
>>>>>> rotational discard not supported
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bluefs add_block_device
>>>>>> bdev 1 path
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.db size 
>>>>>> 175
>>>>>> GiB
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bdev(0x5644f056d000
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block) open 
>>>>>> path
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bdev(0x5644f056d000
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block) open 
>>>>>> size
>>>>>> 6001172414464 (0x57541c00000, 5.5 TiB) block_size 4096 (4 KiB) rotational
>>>>>> discard not supported
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bluefs add_block_device
>>>>>> bdev 2 path 
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block
>>>>>> size 5.5 TiB
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bdev(0x5644f056d400
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.wal) open
>>>>>> path /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.wal
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bdev(0x5644f056d400
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.wal) open
>>>>>> size 48318382080 (0xb40000000, 45 GiB) block_size 4096 (4 KiB)
>>>>>> non-rotational discard supported
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bluefs add_block_device
>>>>>> bdev 0 path
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.wal size 
>>>>>> 45
>>>>>> GiB
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bluefs mount
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bluefs _init_alloc new,
>>>>>> id 0, allocator name bluefs-wal, allocator type hybrid, capacity
>>>>>> 0xb40000000, block size 0x100000
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bluefs _init_alloc new,
>>>>>> id 1, allocator name bluefs-db, allocator type hybrid, capacity
>>>>>> 0x2bc0000000, block size 0x100000
>>>>>> 2021-10-06T21:53:50.563+0200 7f87bde3c240  1 bluefs _init_alloc
>>>>>> shared, id 2, capacity 0x57541c00000, block size 0x10000
>>>>>> 2021-10-06T21:53:50.655+0200 7f87bde3c240  1 bluefs mount
>>>>>> shared_bdev_used = 0
>>>>>> 2021-10-06T21:53:50.655+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _prepare_db_environment set db_paths to db,178509578240
>>>>>> db.slow,5701113793740
>>>>>> 2021-10-06T22:01:32.715+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _open_db opened rocksdb path db options
>>>>>> compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2,max_total_wal_size=1073741824
>>>>>> 2021-10-06T22:01:32.715+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _open_super_meta old nid_max 167450
>>>>>> 2021-10-06T22:01:32.715+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _open_super_meta old blobid_max 30720
>>>>>> 2021-10-06T22:01:32.715+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _open_super_meta freelist_type bitmap
>>>>>> 2021-10-06T22:01:32.715+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _open_super_meta ondisk_format 4 compat_ondisk_format 3
>>>>>> 2021-10-06T22:01:32.715+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _open_super_meta min_alloc_size 0x1000
>>>>>> 2021-10-06T22:01:33.347+0200 7f87bde3c240  1 freelist init
>>>>>> 2021-10-06T22:01:33.347+0200 7f87bde3c240  1 freelist _read_cfg
>>>>>> 2021-10-06T22:01:33.347+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _init_alloc opening allocation metadata
>>>>>> 2021-10-06T22:01:41.031+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _init_alloc loaded 5.0 TiB in 37191 extents, allocator type hybrid,
>>>>>> capacity 0x57541c00000, block size 0x1000, free 0x502f8f9a000,
>>>>>> fragmentation 2.76445e-05
>>>>>> 2021-10-06T22:01:41.039+0200 7f87bde3c240  1 bluefs umount
>>>>>> 2021-10-06T22:01:41.043+0200 7f87bde3c240  1 bdev(0x5644f056d400
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.wal) 
>>>>>> close
>>>>>> 2021-10-06T22:01:43.623+0200 7f87bde3c240  1 bdev(0x5644f056cc00
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.db) close
>>>>>> 2021-10-06T22:01:54.727+0200 7f87bde3c240  1 bdev(0x5644f056d000
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block) close
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bdev(0x5644f056d000
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.db) open
>>>>>> path /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.db
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bdev(0x5644f056d000
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.db) open
>>>>>> size 187904819200 (0x2bc0000000, 175 GiB) block_size 4096 (4 KiB)
>>>>>> rotational discard not supported
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bluefs add_block_device
>>>>>> bdev 1 path
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.db size 
>>>>>> 175
>>>>>> GiB
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bdev(0x5644f056cc00
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block) open 
>>>>>> path
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bdev(0x5644f056cc00
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block) open 
>>>>>> size
>>>>>> 6001172414464 (0x57541c00000, 5.5 TiB) block_size 4096 (4 KiB) rotational
>>>>>> discard not supported
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bluefs add_block_device
>>>>>> bdev 2 path 
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block
>>>>>> size 5.5 TiB
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bdev(0x5644f056d400
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.wal) open
>>>>>> path /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.wal
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bdev(0x5644f056d400
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.wal) open
>>>>>> size 48318382080 (0xb40000000, 45 GiB) block_size 4096 (4 KiB)
>>>>>> non-rotational discard supported
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bluefs add_block_device
>>>>>> bdev 0 path
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.wal size 
>>>>>> 45
>>>>>> GiB
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bluefs mount
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bluefs _init_alloc new,
>>>>>> id 0, allocator name bluefs-wal, allocator type hybrid, capacity
>>>>>> 0xb40000000, block size 0x100000
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bluefs _init_alloc new,
>>>>>> id 1, allocator name bluefs-db, allocator type hybrid, capacity
>>>>>> 0x2bc0000000, block size 0x100000
>>>>>> 2021-10-06T22:01:54.995+0200 7f87bde3c240  1 bluefs _init_alloc
>>>>>> shared, id 2, capacity 0x57541c00000, block size 0x10000
>>>>>> 2021-10-06T22:01:55.079+0200 7f87bde3c240  1 bluefs mount
>>>>>> shared_bdev_used = 0
>>>>>> 2021-10-06T22:01:55.079+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _prepare_db_environment set db_paths to db,178509578240
>>>>>> db.slow,5701113793740
>>>>>> 2021-10-06T22:09:36.519+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/)
>>>>>> _open_db opened rocksdb path db options
>>>>>> compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2,max_total_wal_size=1073741824
>>>>>> 2021-10-06T22:09:54.067+0200 7f87bde3c240  1
>>>>>> bluestore(/var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8/) 
>>>>>> umount
>>>>>> 2021-10-06T22:09:54.079+0200 7f87bde3c240  1 bluefs umount
>>>>>> 2021-10-06T22:09:54.079+0200 7f87bde3c240  1 bdev(0x5644f056d400
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.wal) 
>>>>>> close
>>>>>> 2021-10-06T22:09:56.612+0200 7f87bde3c240  1 bdev(0x5644f056d000
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block.db) close
>>>>>> 2021-10-06T22:10:07.520+0200 7f87bde3c240  1 bdev(0x5644f056cc00
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block) close
>>>>>> 2021-10-06T22:10:07.688+0200 7f87bde3c240  1 freelist shutdown
>>>>>> 2021-10-06T22:10:07.692+0200 7f87bde3c240  1 bdev(0x5644f056c800
>>>>>> /var/lib/ceph/462c44b4-eed6-11eb-8b2c-a1ad45f88a97/osd.8//block) close
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> root@cd88-ceph-osdh-01:~# ceph orch daemon start osd.8
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> root@cd88-ceph-osdh-01:/# ceph -s
>>>>>>   cluster:
>>>>>>     id:     462c44b4-eed6-11eb-8b2c-a1ad45f88a97
>>>>>>     health: HEALTH_OK
>>>>>>
>>>>>>   services:
>>>>>>     mon:         3 daemons, quorum
>>>>>> cd133-ceph-mon-01,cd88-ceph-mon-01,cd133k-ceph-mon-01 (age 15h)
>>>>>>     mgr:         cd133-ceph-mon-01.mzapob(active, since 15h),
>>>>>> standbys: cd133k-ceph-mon-01.imikwh
>>>>>>     osd:         60 osds: 60 up (since 2m), 60 in (since 3h)
>>>>>>     rgw:         4 daemons active (2 hosts, 1 zones)
>>>>>>     tcmu-runner: 10 portals active (2 hosts)
>>>>>>
>>>>>>   data:
>>>>>>     pools:   6 pools, 361 pgs
>>>>>>     objects: 2.46M objects, 8.0 TiB
>>>>>>     usage:   33 TiB used, 304 TiB / 338 TiB avail
>>>>>>     pgs:     361 active+clean
>>>>>>
>>>>>>   io:
>>>>>>     client:   45 MiB/s rd, 50 MiB/s wr, 921 op/s rd, 674 op/s wr
>>>>>>
>>>>>>
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> root@cd88-ceph-osdh-01:/# ceph daemon osd.8 bluestore bluefs device
>>>>>> info
>>>>>> {
>>>>>>     "dev": {
>>>>>>         "device": "BDEV_WAL",
>>>>>>         "total": 48318377984,
>>>>>>         "free": 41354784768,
>>>>>>         "bluefs_used": 6963593216
>>>>>>     },
>>>>>>     "dev": {
>>>>>>         "device": "BDEV_DB",
>>>>>>         "total": 187904811008,
>>>>>>         "free": 187302928384,
>>>>>>         "bluefs_used": 601882624
>>>>>>     },
>>>>>>     "dev": {
>>>>>>         "device": "BDEV_SLOW",
>>>>>>         "total": 6001172414464,
>>>>>>         "free": 5507531620352,
>>>>>>         "bluefs_used": 0,
>>>>>>         "bluefs max available": 5505566572544
>>>>>>     }
>>>>>> }
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> root@cd88-ceph-osdh-01:/# ceph daemon osd.8 bluefs stats
>>>>>> 0 : device size 0xb3ffff000 : using 0x1a0c00000(6.5 GiB)
>>>>>> 1 : device size 0x2bbfffe000 : using 0x23e00000(574 MiB)
>>>>>> 2 : device size 0x57541c00000 : using 0x72f0803000(460 GiB)
>>>>>> RocksDBBlueFSVolumeSelector: wal_total:45902462976, db_total:178509578240, slow_total:5701113793740, db_avail:103884521472
>>>>>> Usage matrix:
>>>>>> DEV/LEV     WAL         DB          SLOW        *           *           REAL        FILES
>>>>>> LOG         12 MiB      18 MiB      0 B         0 B         0 B         10 MiB      0
>>>>>> WAL         6.5 GiB     0 B         0 B         0 B         0 B         6.4 GiB     102
>>>>>> DB          0 B         573 MiB     0 B         0 B         0 B         557 MiB     22
>>>>>> SLOW        0 B         0 B         0 B         0 B         0 B         0 B         0
>>>>>> TOTALS      6.5 GiB     591 MiB     0 B         0 B         0 B         0 B         125
>>>>>> MAXIMUMS:
>>>>>> LOG         12 MiB      18 MiB      0 B         0 B         0 B         17 MiB
>>>>>> WAL         45 GiB      101 GiB     0 B         0 B         0 B         145 GiB
>>>>>> DB          0 B         688 MiB     0 B         0 B         0 B         670 MiB
>>>>>> SLOW        0 B         0 B         0 B         0 B         0 B         0 B
>>>>>> TOTALS      45 GiB      101 GiB     0 B         0 B         0 B         0 B
>>>>>>
>>>>>> ----
>>>>>>
>>>>>>
>>>>>> Here is osd.2... the problem disappeared on its own.
>>>>>> Very strange...
>>>>>>
>>>>>> root@cd88-ceph-osdh-01:/# ceph daemon osd.2 bluefs stats
>>>>>> 0 : device size 0xb3ffff000 : using 0x7bcc00000(31 GiB)
>>>>>> 1 : device size 0x2bbfffe000 : using 0x458c00000(17 GiB)
>>>>>> 2 : device size 0x57541c00000 : using 0x5cd3665000(371 GiB)
>>>>>> RocksDBBlueFSVolumeSelector: wal_total:45902462976, db_total:178509578240, slow_total:5701113793740, db_avail:103884521472
>>>>>> Usage matrix:
>>>>>> DEV/LEV     WAL         DB          SLOW        *           *           REAL        FILES
>>>>>> LOG         920 MiB     4.0 GiB     0 B         0 B         0 B         10 MiB      1
>>>>>> WAL         31 GiB      17 GiB      0 B         0 B         0 B         48 GiB      765
>>>>>> DB          0 B         193 MiB     0 B         0 B         0 B         175 MiB     30
>>>>>> SLOW        0 B         0 B         0 B         0 B         0 B         0 B         0
>>>>>> TOTALS      32 GiB      21 GiB      0 B         0 B         0 B         0 B         796
>>>>>> MAXIMUMS:
>>>>>> LOG         920 MiB     4.0 GiB     0 B         0 B         0 B         17 MiB
>>>>>> WAL         45 GiB      149 GiB     0 B         0 B         0 B         192 GiB
>>>>>> DB          0 B         762 MiB     0 B         0 B         0 B         741 MiB
>>>>>> SLOW        0 B         0 B         0 B         0 B         0 B         0 B
>>>>>> TOTALS      45 GiB      153 GiB     0 B         0 B         0 B         0 B
>>>>>> root@cd88-ceph-osdh-01:/# ceph daemon osd.2 bluestore bluefs device
>>>>>> info
>>>>>> {
>>>>>>     "dev": {
>>>>>>         "device": "BDEV_WAL",
>>>>>>         "total": 48318377984,
>>>>>>         "free": 15043915776,
>>>>>>         "bluefs_used": 33274462208
>>>>>>     },
>>>>>>     "dev": {
>>>>>>         "device": "BDEV_DB",
>>>>>>         "total": 187904811008,
>>>>>>         "free": 169235963904,
>>>>>>         "bluefs_used": 18668847104
>>>>>>     },
>>>>>>     "dev": {
>>>>>>         "device": "BDEV_SLOW",
>>>>>>         "total": 6001172414464,
>>>>>>         "free": 5602453327872,
>>>>>>         "bluefs_used": 0,
>>>>>>         "bluefs max available": 5600865222656
>>>>>>     }
>>>>>> }
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, 6 Oct 2021 at 18:11, Igor Fedotov <igor.fedo...@croit.io> wrote:
>>>>>>
>>>>>>>
>>>>>>> On 10/6/2021 4:25 PM, José H. Freidhof wrote:
>>>>>>> > hi,
>>>>>>> >
>>>>>>> > no risk no fun 😂 okay
>>>>>>> > I have reset the settings you mentioned to standard.
>>>>>>> >
>>>>>>> > What exactly do you mean by taking the OSD offline? "ceph orch daemon stop
>>>>>>> > osd.2"? Or mark it down?
>>>>>>> "daemon stop" is enough. You might want to set the noout flag before
>>>>>>> that, though...
>>>>>>> >
>>>>>>> > For the command, which path do I use? You mean:
>>>>>>> >
>>>>>>> > bluestore-kv /var/lib/ceph/$fsid/osd.2 compact???
>>>>>>> yep
>>>>>>> >
>>>>>>> >
>>>>>>> > Igor Fedotov <ifedo...@suse.de> wrote on Wed, 6 Oct 2021 at 13:33:
>>>>>>> >
>>>>>>> >> On 10/6/2021 2:16 PM, José H. Freidhof wrote:
>>>>>>> >>> Hi Igor,
>>>>>>> >>>
>>>>>>> >>> yes, I have some OSD settings set :-) Here is my ceph config dump. Those
>>>>>>> >>> settings are from a Red Hat document for BlueStore devices.
>>>>>>> >>> Maybe it is that setting causing this problem? "advanced
>>>>>>> >>>    mon_compact_on_trim    false"???
>>>>>>> >> OMG!!!
>>>>>>> >>
>>>>>>> >> No - mon_compact_on_trim has nothing to deal with bluestore.
>>>>>>> >>
>>>>>>> >> Highly likely it's bluestore_rocksdb_options which hurts...
>>>>>>> >> Documentation tends to fall behind the best practices... I would
>>>>>>> >> strongly discourage you from using non-default settings unless it's
>>>>>>> >> absolutely clear why this is necessary.
>>>>>>> >>
>>>>>>> >> Even at first glance the following settings (just a few ones I'm
>>>>>>> >> fully aware of) are suboptimal/not recommended:
>>>>>>> >>
>>>>>>> >> rocksdb_perf
>>>>>>> >>
>>>>>>> >> bluefs_sync_write
>>>>>>> >>
>>>>>>> >> bluefs_csum_type
>>>>>>> >>
>>>>>>> >>
>>>>>>> >> Not to mention bluestore_rocksdb_options which hasn't got much
>>>>>>> adoption
>>>>>>> >> so far and apparently greatly alters rocksdb behavior...
>>>>>>> >>
>>>>>>> >>
>>>>>>> >> So I would suggest reverting the rocksdb options back to default, running
>>>>>>> >> the compaction, and if it succeeds, monitoring the OSD for a while. Then,
>>>>>>> >> if it works fine, apply the same to the others.
>>>>>>> >>
>>>>>>> >>
>>>>>>> >> Hope this helps,
>>>>>>> >>
>>>>>>> >> Igor
>>>>>>> >>
>>>>>>> >>
>>>>>>> >>
>>>>>>> >>> I will test it this afternoon... at the moment everything is semi
>>>>>>> >>> productive and I need to repair one OSD node, because I think that is the
>>>>>>> >>> reason the OSDs crashed on that node, and the OSD container now crashes
>>>>>>> >>> with a dump while coming up.
>>>>>>> >>> I first need to replicate everything between all three nodes, and then I
>>>>>>> >>> can take osd.2 offline and test your command. I will inform you
>>>>>>> >>> later...
>>>>>>> later...
>>>>>>> >>>
>>>>>>> >>> root@cd88-ceph-osdh-01:/# ceph config dump
>>>>>>> >>> WHO     MASK                      LEVEL     OPTION                                       VALUE            RO
>>>>>>> >>> global                            advanced  leveldb_max_open_files                       131072
>>>>>>> >>> global                            advanced  mon_compact_on_trim                          false
>>>>>>> >>> global                            dev       ms_crc_data                                  false
>>>>>>> >>> global                            advanced  osd_deep_scrub_interval                      1209600.000000
>>>>>>> >>> global                            advanced  osd_max_scrubs                               16
>>>>>>> >>> global                            advanced  osd_scrub_load_threshold                     0.010000
>>>>>>> >>> global                            advanced  osd_scrub_max_interval                       1209600.000000
>>>>>>> >>> global                            advanced  osd_scrub_min_interval                       86400.000000
>>>>>>> >>> global                            advanced  perf                                         true
>>>>>>> >>> global                            advanced  rbd_readahead_disable_after_bytes            0
>>>>>>> >>> global                            advanced  rbd_readahead_max_bytes                      4194304
>>>>>>> >>> global                            advanced  rocksdb_perf                                 true
>>>>>>> >>> global                            advanced  throttler_perf_counter                       false
>>>>>>> >>> mon                               advanced  auth_allow_insecure_global_id_reclaim        false
>>>>>>> >>> mon                               advanced  cluster_network                              10.50.50.0/24    *
>>>>>>> >>> mon                               advanced  mon_osd_down_out_interval                    300
>>>>>>> >>> mon                               advanced  public_network                               10.50.50.0/24    *
>>>>>>> >>> mgr                               advanced  mgr/cephadm/container_init                   True             *
>>>>>>> >>> mgr                               advanced  mgr/cephadm/device_enhanced_scan             true             *
>>>>>>> >>> mgr                               advanced  mgr/cephadm/migration_current                2                *
>>>>>>> >>> mgr                               advanced  mgr/cephadm/warn_on_stray_daemons            false            *
>>>>>>> >>> mgr                               advanced  mgr/cephadm/warn_on_stray_hosts              false            *
>>>>>>> >>> osd                               advanced  bluefs_sync_write                            true
>>>>>>> >>> osd                               dev       bluestore_cache_autotune                     true
>>>>>>> >>> osd                               dev       bluestore_cache_kv_ratio                     0.200000
>>>>>>> >>> osd                               dev       bluestore_cache_meta_ratio                   0.800000
>>>>>>> >>> osd                               dev       bluestore_cache_size                         2147483648
>>>>>>> >>> osd                               dev       bluestore_cache_size_hdd                     2147483648
>>>>>>> >>> osd                               advanced  bluestore_csum_type                          none
>>>>>>> >>> osd                               dev       bluestore_extent_map_shard_max_size          200
>>>>>>> >>> osd                               dev       bluestore_extent_map_shard_min_size          50
>>>>>>> >>> osd                               dev       bluestore_extent_map_shard_target_size       100
>>>>>>> >>> osd                               advanced  bluestore_rocksdb_options                    compression=kNoCompression,max_write_buffer_number=64,min_write_buffer_number_to_merge=32,recycle_log_file_num=64,compaction_style=kCompactionStyleLevel,write_buffer_size=4MB,target_file_size_base=4MB,max_background_compactions=64,level0_file_num_compaction_trigger=64,level0_slowdown_writes_trigger=128,level0_stop_writes_trigger=256,max_bytes_for_level_base=6GB,compaction_threads=32,flusher_threads=8,compaction_readahead_size=2MB  *
>>>>>>> >>> osd                               advanced  mon_osd_cache_size                           1024
>>>>>>> >>> osd                               dev       ms_crc_data                                  false
>>>>>>> >>> osd                               advanced  osd_map_share_max_epochs                     5
>>>>>>> >>> osd                               advanced  osd_max_backfills                            1
>>>>>>> >>> osd                               dev       osd_max_pg_log_entries                       10
>>>>>>> >>> osd                               dev       osd_memory_cache_min                         3000000000
>>>>>>> >>> osd     host:cd133-ceph-osdh-01   basic     osd_memory_target                            5797322383
>>>>>>> >>> osd     host:cd133k-ceph-osdh-01  basic     osd_memory_target                            9402402385
>>>>>>> >>> osd     host:cd88-ceph-osdh-01    basic     osd_memory_target                            5797322096
>>>>>>> >>> osd                               advanced  osd_memory_target_autotune                   true
>>>>>>> >>> osd                               dev       osd_min_pg_log_entries                       10
>>>>>>> >>> osd                               advanced  osd_op_num_shards                            8                *
>>>>>>> >>> osd                               advanced  osd_op_num_threads_per_shard                 2                *
>>>>>>> >>> osd                               dev       osd_pg_log_dups_tracked                      10
>>>>>>> >>> osd                               dev       osd_pg_log_trim_min                          10
>>>>>>> >>> osd                               advanced  osd_recovery_max_active                      3
>>>>>>> >>> osd                               advanced  osd_recovery_max_single_start                1
>>>>>>> >>> osd                               advanced  osd_recovery_sleep                           0.000000
>>>>>>> >>>
>>>>>>> >>>
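>>>>>>> >>> As a minimal sketch (osd.2 is just an example ID), dropping one of
>>>>>>> >>> these centralized overrides so the OSDs fall back to the built-in
>>>>>>> >>> default would look something like:
>>>>>>> >>>
>>>>>>> >>>     # remove the cluster-wide override for the osd section
>>>>>>> >>>     ceph config rm osd bluestore_rocksdb_options
>>>>>>> >>>     # restart an OSD so it reopens RocksDB with the default options
>>>>>>> >>>     ceph orch daemon restart osd.2
>>>>>>> >>>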
>>>>>>> >>> On Wed, 6 Oct 2021 at 12:55, Igor Fedotov <ifedo...@suse.de> wrote:
>>>>>>> >>>
>>>>>>> >>>> Jose,
>>>>>>> >>>>
>>>>>>> >>>> In fact 48GB is way too much for a WAL drive - usually the
>>>>>>> >>>> write-ahead log tends to be 2-4 GB.
>>>>>>> >>>>
>>>>>>> >>>> But in your case it's ~150GB, while the DB itself is very small
>>>>>>> >>>> (146MB!!!):
>>>>>>> >>>>
>>>>>>> >>>> WAL   45 GiB   111 GiB   0 B   0 B   0 B   154 GiB   2400
>>>>>>> >>>> DB    0 B      164 MiB   0 B   0 B   0 B   146 MiB   30
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>> which means that there are some issues with RocksDB's WAL
>>>>>>> >>>> processing, which need some troubleshooting...
>>>>>>> >>>>
>>>>>>> >>>> Curious whether other OSDs are suffering from the same, and whether
>>>>>>> >>>> you have any custom settings for your OSD(s)?
>>>>>>> >>>>
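>>>>>>> >>>> (A quick sketch for checking that - osd.2 is just an example ID;
>>>>>>> >>>> "config diff" simply lists every option the daemon is running with
>>>>>>> >>>> a non-default value:)
>>>>>>> >>>>
>>>>>>> >>>>     # centralized overrides stored in the monitors
>>>>>>> >>>>     ceph config dump | grep -i bluestore
>>>>>>> >>>>     # per-daemon view of everything that deviates from the defaults
>>>>>>> >>>>     ceph daemon osd.2 config diff
>>>>>>> >>>>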
>>>>>>> >>>> Additionally you might want to try the following command to
>>>>>>> >>>> compact this specific OSD manually and check whether this
>>>>>>> >>>> normalizes the DB layout - the majority of the data has to be at
>>>>>>> >>>> the DB level, not in the WAL. Please share the resulting layout
>>>>>>> >>>> (reported by the "ceph daemon osd.2 bluefs stats" command) after
>>>>>>> >>>> the compaction is complete and the OSD is restarted.
>>>>>>> >>>>
>>>>>>> >>>> The compaction command to be applied on an offline OSD:
>>>>>>> >>>> "ceph-kvstore-tool bluestore-kv <path-to-osd> compact"
>>>>>>> >>>>
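>>>>>>> >>>> A minimal sketch of the whole cycle (assuming osd.2, the usual
>>>>>>> >>>> /var/lib/ceph/osd/ceph-2 path and a non-cephadm unit name; a
>>>>>>> >>>> cephadm-managed OSD would be stopped and started via
>>>>>>> >>>> "ceph orch daemon stop/start osd.2" instead):
>>>>>>> >>>>
>>>>>>> >>>>     systemctl stop ceph-osd@2           # take the OSD offline
>>>>>>> >>>>     ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-2 compact
>>>>>>> >>>>     systemctl start ceph-osd@2          # bring it back online
>>>>>>> >>>>     ceph daemon osd.2 bluefs stats      # re-check the layout
>>>>>>> >>>>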
>>>>>>> >>>> Even if the above works great, please refrain from applying that
>>>>>>> >>>> compaction to every OSD - let's see how that "compacted" OSD
>>>>>>> >>>> evolves. Would the WAL grow again or not?
>>>>>>> >>>>
>>>>>>> >>>> Thanks,
>>>>>>> >>>>
>>>>>>> >>>> Igor
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>> On 10/6/2021 1:35 PM, José H. Freidhof wrote:
>>>>>>> >>>>
>>>>>>> >>>> Hello Igor,
>>>>>>> >>>>
>>>>>>> >>>> yes, the NVMe WAL partitions for the BlueStore device groups are
>>>>>>> >>>> only 48GB each.
>>>>>>> >>>>
>>>>>>> >>>> on each OSD node there is 1 NVMe of 1TB, split into 20 LVs of 48GB (WAL)
>>>>>>> >>>> on each OSD node there are 4 SSDs of 1TB, each split into 5 LVs of 175GB (RocksDB block.db)
>>>>>>> >>>> on each OSD node there are 20 HDDs of 5.5TB with 1 LV each (data/block)
>>>>>>> >>>>
>>>>>>> >>>> each BlueStore OSD has one partition on NVMe, SSD and HDD, as
>>>>>>> >>>> described in the documentation:
>>>>>>> >>>> https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/
>>>>>>> >>>>
>>>>>>> >>>> is this too small, or can I adjust the maximum allocation on the
>>>>>>> >>>> WAL NVMe device in the Ceph configuration?
>>>>>>> >>>> I know the SSD and NVMe are too small for those 5.5TB disks... it
>>>>>>> >>>> is only about 1% of the rotational disk.
>>>>>>> >>>> I am new to Ceph and still learning, but we are in a bit of a
>>>>>>> >>>> hurry because our other datastores are old and full.
>>>>>>> >>>>
>>>>>>> >>>> root@cd88-ceph-osdh-01:/# ceph daemon osd.2 bluestore bluefs device info
>>>>>>> >>>> {
>>>>>>> >>>>       "dev": {
>>>>>>> >>>>           "device": "BDEV_WAL",
>>>>>>> >>>>           "total": 48318377984,
>>>>>>> >>>>           "free": 1044480,
>>>>>>> >>>>           "bluefs_used": 48317333504
>>>>>>> >>>>       },
>>>>>>> >>>>       "dev": {
>>>>>>> >>>>           "device": "BDEV_DB",
>>>>>>> >>>>           "total": 187904811008,
>>>>>>> >>>>           "free": 68757217280,
>>>>>>> >>>>           "bluefs_used": 119147593728
>>>>>>> >>>>       },
>>>>>>> >>>>       "dev": {
>>>>>>> >>>>           "device": "BDEV_SLOW",
>>>>>>> >>>>           "total": 6001172414464,
>>>>>>> >>>>           "free": 5624912359424,
>>>>>>> >>>>           "bluefs_used": 0,
>>>>>>> >>>>           "bluefs max available": 5624401231872
>>>>>>> >>>>       }
>>>>>>> >>>> }
>>>>>>> >>>> root@cd88-ceph-osdh-01:/# ceph daemon osd.2 bluefs stats
>>>>>>> >>>> 0 : device size 0xb3ffff000 : using 0xb3ff00000(45 GiB)
>>>>>>> >>>> 1 : device size 0x2bbfffe000 : using 0x1bbeb00000(111 GiB)
>>>>>>> >>>> 2 : device size 0x57541c00000 : using 0x579b592000(350 GiB)
>>>>>>> >>>> RocksDBBlueFSVolumeSelector: wal_total:45902462976, db_total:178509578240, slow_total:5701113793740, db_avail:103884521472
>>>>>>> >>>> Usage matrix:
>>>>>>> >>>> DEV/LEV   WAL       DB        SLOW   *     *     REAL      FILES
>>>>>>> >>>> LOG       124 MiB   2.3 GiB   0 B    0 B   0 B   7.5 MiB   1
>>>>>>> >>>> WAL       45 GiB    111 GiB   0 B    0 B   0 B   154 GiB   2400
>>>>>>> >>>> DB        0 B       164 MiB   0 B    0 B   0 B   146 MiB   30
>>>>>>> >>>> SLOW      0 B       0 B       0 B    0 B   0 B   0 B       0
>>>>>>> >>>> TOTALS    45 GiB    113 GiB   0 B    0 B   0 B   0 B       2431
>>>>>>> >>>> MAXIMUMS:
>>>>>>> >>>> LOG       124 MiB   2.3 GiB   0 B    0 B   0 B   17 MiB
>>>>>>> >>>> WAL       45 GiB    149 GiB   0 B    0 B   0 B   192 GiB
>>>>>>> >>>> DB        0 B       762 MiB   0 B    0 B   0 B   741 MiB
>>>>>>> >>>> SLOW      0 B       0 B       0 B    0 B   0 B   0 B
>>>>>>> >>>> TOTALS    45 GiB    150 GiB   0 B    0 B   0 B   0 B
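>>>>>>> >>>>
>>>>>>> >>>> (The same layout can also be cross-checked offline - a sketch,
>>>>>>> >>>> assuming the OSD is stopped and mounted at the usual path:
>>>>>>> >>>> ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-2)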
>>>>>>> >>>> On Wed, 6 Oct 2021 at 11:45, Igor Fedotov <ifedo...@suse.de> wrote:
>>>>>>> >>>>> Hey Jose,
>>>>>>> >>>>>
>>>>>>> >>>>> it looks like your WAL volume is out of space, which looks weird
>>>>>>> >>>>> given its capacity = 48GB.
>>>>>>> >>>>>
>>>>>>> >>>>> Could you please share the output of the following commands:
>>>>>>> >>>>>
>>>>>>> >>>>> ceph daemon osd.N bluestore bluefs device info
>>>>>>> >>>>>
>>>>>>> >>>>> ceph daemon osd.N bluefs stats
>>>>>>> >>>>>
>>>>>>> >>>>>
>>>>>>> >>>>> Thanks,
>>>>>>> >>>>>
>>>>>>> >>>>> Igor
>>>>>>> >>>>>
>>>>>>> >>>>>
>>>>>>> >>>>> On 10/6/2021 12:24 PM, José H. Freidhof wrote:
>>>>>>> >>>>>> Hello everyone,
>>>>>>> >>>>>>
>>>>>>> >>>>>> we have a running Ceph Pacific 16.2.5 cluster, and I found these
>>>>>>> >>>>>> messages in the service logs of the OSD daemons.
>>>>>>> >>>>>>
>>>>>>> >>>>>> we have three OSD nodes.. each node has 20 OSDs as BlueStore
>>>>>>> >>>>>> with NVMe/SSD/HDD.
>>>>>>> >>>>>>
>>>>>>> >>>>>> is this a bug, or do I maybe have some settings wrong?
>>>>>>> >>>>>>
>>>>>>> >>>>>>
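>>>>>>> >>>>>> (for reference, a quick way to double-check what an OSD is
>>>>>>> >>>>>> actually running with - osd.2 as an example:
>>>>>>> >>>>>>     ceph daemon osd.2 config get bluestore_rocksdb_options)
>>>>>>> >>>>>>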
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug 2021-10-06T09:17:25.821+0000 7f38eebd4700  1 bluefs _allocate unable to allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000, fragmentation 0, allocated 0x0
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug 2021-10-06T09:17:29.857+0000 7f38eebd4700  1 bluefs _allocate unable to allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000, fragmentation 0, allocated 0x0
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug 2021-10-06T09:17:30.073+0000 7f38eebd4700  1 bluefs _allocate unable to allocate 0x400000 on bdev 0, allocator name bluefs-wal, allocator type hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000, fragmentation 0, allocated 0x0
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug 2021-10-06T09:17:30.405+0000 7f38eebd4700  1 bluefs _allocate unable to allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000, fragmentation 0, allocated 0x0
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug 2021-10-06T09:17:30.465+0000 7f38eebd4700  1 bluefs _allocate unable to allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000, fragmentation 0, allocated 0x0
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug 2021-10-06T09:17:30.529+0000 7f38eebd4700  1 bluefs _allocate unable to allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000, fragmentation 0, allocated 0x0
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug 2021-10-06T09:17:30.545+0000 7f38eebd4700  4 rocksdb: [db_impl/db_impl_write.cc:1668] [L] New memtable created with log file: #9588. Immutable memtables: 1.
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug 2021-10-06T09:17:30.545+0000 7f38eebd4700  1 bluefs _allocate unable to allocate 0x100000 on bdev 0, allocator name bluefs-wal, allocator type hybrid, capacity 0xb40000000, block size 0x100000, free 0xff000, fragmentation 0, allocated 0x0
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.545+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: (Original Log Time
>>>>>>> >> 2021/10/06-09:17:30.547575)
>>>>>>> >>>>>> [db_impl/db_impl_compaction_flush.cc:2198] Calling
>>>>>>> >>>>>> FlushMemTableToOutputFile with column family [L], flush slots
>>>>>>> >> available
>>>>>>> >>>>> 1,
>>>>>>> >>>>>> compaction slots available 1, flush slots scheduled 1,
>>>>>>> compaction
>>>>>>> >> slots
>>>>>>> >>>>>> scheduled 0
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.545+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: [flush_job.cc:321] [L] [JOB 5709]
>>>>>>> Flushing
>>>>>>> >>>>>> memtable with next log file: 9587
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.545+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: [flush_job.cc:321] [L] [JOB 5709]
>>>>>>> Flushing
>>>>>>> >>>>>> memtable with next log file: 9588
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.545+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850547916,
>>>>>>> >>>>>> "job": 5709, "event": "flush_started", "num_memtables": 2,
>>>>>>> >>>>> "num_entries":
>>>>>>> >>>>>> 4146, "num_deletes": 0, "total_data_size": 127203926,
>>>>>>> "memory_usage":
>>>>>>> >>>>>> 130479920, "flush_reason": "Write Buffer Full"}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.545+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: [flush_job.cc:350] [L] [JOB 5709]
>>>>>>> Level-0
>>>>>>> >> flush
>>>>>>> >>>>>> table #9589: started
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.557+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850559292,
>>>>>>> >>>>>> "cf_name": "L", "job": 5709, "event": "table_file_creation",
>>>>>>> >>>>> "file_number":
>>>>>>> >>>>>> 9589, "file_size": 3249934, "table_properties": {"data_size":
>>>>>>> 3247855,
>>>>>>> >>>>>> "index_size": 1031, "index_partitions": 0,
>>>>>>> "top_level_index_size": 0,
>>>>>>> >>>>>> "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0,
>>>>>>> >>>>>> "filter_size": 197, "raw_key_size": 1088,
>>>>>>> "raw_average_key_size": 16,
>>>>>>> >>>>>> "raw_value_size": 3246252, "raw_average_value_size": 47739,
>>>>>>> >>>>>> "num_data_blocks": 36, "num_entries": 68, "num_deletions": 32,
>>>>>>> >>>>>> "num_merge_operands": 0, "num_range_deletions": 0,
>>>>>>> "format_version":
>>>>>>> >> 0,
>>>>>>> >>>>>> "fixed_key_len": 0, "filter_policy":
>>>>>>> "rocksdb.BuiltinBloomFilter",
>>>>>>> >>>>>> "column_family_name": "L", "column_family_id": 10,
>>>>>>> "comparator":
>>>>>>> >>>>>> "leveldb.BytewiseComparator", "merge_operator": "nullptr",
>>>>>>> >>>>>> "prefix_extractor_name": "nullptr", "property_collectors":
>>>>>>> "[]",
>>>>>>> >>>>>> "compression": "NoCompression", "compression_options":
>>>>>>> >> "window_bits=-14;
>>>>>>> >>>>>> level=32767; strategy=0; max_dict_bytes=0;
>>>>>>> zstd_max_train_bytes=0;
>>>>>>> >>>>>> enabled=0; ", "creation_time": 1633511730, "oldest_key_time":
>>>>>>> >>>>> 1633511730,
>>>>>>> >>>>>> "file_creation_time": 1633511850}}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.557+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: [flush_job.cc:401] [L] [JOB 5709]
>>>>>>> Level-0
>>>>>>> >> flush
>>>>>>> >>>>>> table #9589: 3249934 bytes OK
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.557+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: (Original Log Time
>>>>>>> >> 2021/10/06-09:17:30.559362)
>>>>>>> >>>>>> [memtable_list.cc:447] [L] Level-0 commit table #9589 started
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.557+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: (Original Log Time
>>>>>>> >> 2021/10/06-09:17:30.559583)
>>>>>>> >>>>>> [memtable_list.cc:503] [L] Level-0 commit table #9589:
>>>>>>> memtable #1
>>>>>>> >> done
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.557+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: (Original Log Time
>>>>>>> >> 2021/10/06-09:17:30.559586)
>>>>>>> >>>>>> [memtable_list.cc:503] [L] Level-0 commit table #9589:
>>>>>>> memtable #2
>>>>>>> >> done
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.557+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: (Original Log Time
>>>>>>> >> 2021/10/06-09:17:30.559601)
>>>>>>> >>>>>> EVENT_LOG_v1 {"time_micros": 1633511850559593, "job": 5709,
>>>>>>> "event":
>>>>>>> >>>>>> "flush_finished", "output_compression": "NoCompression",
>>>>>>> "lsm_state":
>>>>>>> >>>>> [8,
>>>>>>> >>>>>> 1, 0, 0, 0, 0, 0], "immutable_memtables": 0}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.557+0000
>>>>>>> >>>>>> 7f3905c02700  4 rocksdb: (Original Log Time
>>>>>>> >> 2021/10/06-09:17:30.559638)
>>>>>>> >>>>>> [db_impl/db_impl_compaction_flush.cc:205] [L] Level summary:
>>>>>>> files[8 1
>>>>>>> >>>>> 0 0
>>>>>>> >>>>>> 0 0 0] max score 1.00
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.557+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: [compaction/compaction_job.cc:1676]
>>>>>>> [L] [JOB
>>>>>>> >>>>> 5710]
>>>>>>> >>>>>> Compacting 8@0 + 1@1 files to L1, score 1.00
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.557+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: [compaction/compaction_job.cc:1680]
>>>>>>> [L]
>>>>>>> >>>>> Compaction
>>>>>>> >>>>>> start summary: Base version 3090 Base level 0, inputs:
>>>>>>> [9589(3173KB)
>>>>>>> >>>>>> 9586(4793KB) 9583(1876KB) 9580(194KB) 9576(6417KB)
>>>>>>> 9573(1078KB)
>>>>>>> >>>>> 9570(405KB)
>>>>>>> >>>>>> 9567(29KB)], [9564(1115KB)]
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.557+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850559956,
>>>>>>> >>>>>> "job": 5710, "event": "compaction_started",
>>>>>>> "compaction_reason":
>>>>>>> >>>>>> "LevelL0FilesNum", "files_L0": [9589, 9586, 9583, 9580, 9576,
>>>>>>> 9573,
>>>>>>> >>>>> 9570,
>>>>>>> >>>>>> 9567], "files_L1": [9564], "score": 1, "input_data_size":
>>>>>>> 19542092}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: [compaction/compaction_job.cc:1349]
>>>>>>> [L] [JOB
>>>>>>> >>>>> 5710]
>>>>>>> >>>>>> Generated table #9590: 36 keys, 3249524 bytes
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850582987,
>>>>>>> >>>>>> "cf_name": "L", "job": 5710, "event": "table_file_creation",
>>>>>>> >>>>> "file_number":
>>>>>>> >>>>>> 9590, "file_size": 3249524, "table_properties": {"data_size":
>>>>>>> 3247449,
>>>>>>> >>>>>> "index_size": 1031, "index_partitions": 0,
>>>>>>> "top_level_index_size": 0,
>>>>>>> >>>>>> "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0,
>>>>>>> >>>>>> "filter_size": 197, "raw_key_size": 576,
>>>>>>> "raw_average_key_size": 16,
>>>>>>> >>>>>> "raw_value_size": 3246252, "raw_average_value_size": 90173,
>>>>>>> >>>>>> "num_data_blocks": 36, "num_entries": 36, "num_deletions": 0,
>>>>>>> >>>>>> "num_merge_operands": 0, "num_range_deletions": 0,
>>>>>>> "format_version":
>>>>>>> >> 0,
>>>>>>> >>>>>> "fixed_key_len": 0, "filter_policy":
>>>>>>> "rocksdb.BuiltinBloomFilter",
>>>>>>> >>>>>> "column_family_name": "L", "column_family_id": 10,
>>>>>>> "comparator":
>>>>>>> >>>>>> "leveldb.BytewiseComparator", "merge_operator": "nullptr",
>>>>>>> >>>>>> "prefix_extractor_name": "nullptr", "property_collectors":
>>>>>>> "[]",
>>>>>>> >>>>>> "compression": "NoCompression", "compression_options":
>>>>>>> >> "window_bits=-14;
>>>>>>> >>>>>> level=32767; strategy=0; max_dict_bytes=0;
>>>>>>> zstd_max_train_bytes=0;
>>>>>>> >>>>>> enabled=0; ", "creation_time": 1633471854, "oldest_key_time":
>>>>>>> 0,
>>>>>>> >>>>>> "file_creation_time": 1633511850}}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: [compaction/compaction_job.cc:1415]
>>>>>>> [L] [JOB
>>>>>>> >>>>> 5710]
>>>>>>> >>>>>> Compacted 8@0 + 1@1 files to L1 => 3249524 bytes
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: (Original Log Time
>>>>>>> >> 2021/10/06-09:17:30.583469)
>>>>>>> >>>>>> [compaction/compaction_job.cc:760] [L] compacted to: files[0
>>>>>>> 1 0 0 0 0
>>>>>>> >>>>> 0]
>>>>>>> >>>>>> max score 0.01, MB/sec: 846.1 rd, 140.7 wr, level 1, files
>>>>>>> in(8, 1)
>>>>>>> >>>>> out(1)
>>>>>>> >>>>>> MB in(17.5, 1.1) out(3.1), read-write-amplify(1.2)
>>>>>>> write-amplify(0.2)
>>>>>>> >>>>> OK,
>>>>>>> >>>>>> records in: 376, records dropped: 340 output_compression:
>>>>>>> >> NoCompression
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: (Original Log Time
>>>>>>> >> 2021/10/06-09:17:30.583498)
>>>>>>> >>>>>> EVENT_LOG_v1 {"time_micros": 1633511850583485, "job": 5710,
>>>>>>> "event":
>>>>>>> >>>>>> "compaction_finished", "compaction_time_micros": 23098,
>>>>>>> >>>>>> "compaction_time_cpu_micros": 20039, "output_level": 1,
>>>>>>> >>>>> "num_output_files":
>>>>>>> >>>>>> 1, "total_output_size": 3249524, "num_input_records": 376,
>>>>>>> >>>>>> "num_output_records": 36, "num_subcompactions": 1,
>>>>>>> >> "output_compression":
>>>>>>> >>>>>> "NoCompression", "num_single_delete_mismatches": 0,
>>>>>>> >>>>>> "num_single_delete_fallthrough": 0, "lsm_state": [0, 1, 0, 0,
>>>>>>> 0, 0,
>>>>>>> >> 0]}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850583615,
>>>>>>> >>>>>> "job": 5710, "event": "table_file_deletion", "file_number":
>>>>>>> 9589}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850583648,
>>>>>>> >>>>>> "job": 5710, "event": "table_file_deletion", "file_number":
>>>>>>> 9586}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850583675,
>>>>>>> >>>>>> "job": 5710, "event": "table_file_deletion", "file_number":
>>>>>>> 9583}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850583709,
>>>>>>> >>>>>> "job": 5710, "event": "table_file_deletion", "file_number":
>>>>>>> 9580}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850583739,
>>>>>>> >>>>>> "job": 5710, "event": "table_file_deletion", "file_number":
>>>>>>> 9576}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850583769,
>>>>>>> >>>>>> "job": 5710, "event": "table_file_deletion", "file_number":
>>>>>>> 9573}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850583804,
>>>>>>> >>>>>> "job": 5710, "event": "table_file_deletion", "file_number":
>>>>>>> 9570}
>>>>>>> >>>>>> cd88-ceph-osdh-01 bash[6283]: debug
>>>>>>> 2021-10-06T09:17:30.581+0000
>>>>>>> >>>>>> 7f38fb3ed700  4 rocksdb: EVENT_LOG_v1 {"time_micros":
>>>>>>> >> 1633511850583835,
>>>>>>> >>>>>> "job": 5710, "event": "table_file_deletion", "file_number":
>>>>>>> 9567}
>>>>>>> >>>>>> _______________________________________________
>>>>>>> >>>>>> ceph-users mailing list -- ceph-users@ceph.io
>>>>>>> >>>>>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>>>>>> >>>> --

-- 

Kind regards,

 -

José H. Freidhof

Reyerhütterstrasse 130b
41065 Mönchengladbach
eMail: harald.freid...@gmail.com
mobil: +49 (0) 1523 – 717 7801
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
