[ceph-users] Re: Ceph 16.2.14: OSDs randomly crash in bstore_kv_sync

2023-10-15 Thread Zakhar Kirpichenko
Not sure how it managed to screw up the formatting. OSD configuration in a more readable form: https://pastebin.com/mrC6UdzN

/Z

On Mon, 16 Oct 2023 at 09:26, Zakhar Kirpichenko wrote:
> Hi,
>
> After upgrading to Ceph 16.2.14 we had several OSD crashes
> in bstore_kv_sync thread:
>
> 1. "assert_thread_name": "bstore_kv_sync",

[ceph-users] Ceph 16.2.14: OSDs randomly crash in bstore_kv_sync

2023-10-15 Thread Zakhar Kirpichenko
Hi,

After upgrading to Ceph 16.2.14 we had several OSD crashes in the bstore_kv_sync thread:

1. "assert_thread_name": "bstore_kv_sync",
2. "backtrace": [
3. "/lib64/libpthread.so.0(+0x12cf0) [0x7ff2f6750cf0]",
4. "gsignal()",
5. "abort()",
6. "(ceph::__ceph_assert_fail(char const*,
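
For anyone triaging similar crashes, a minimal sketch of pulling the full crash report through Ceph's mgr crash module (enabled by default in Pacific; the crash ID is a placeholder):

    # List crashes recorded by the crash module
    ceph crash ls

    # Print the full report, including the complete backtrace, for one crash
    ceph crash info <crash_id>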

[ceph-users] Re: Ceph 16.2.14: how to set mon_rocksdb_options to enable RocksDB compression?

2023-10-15 Thread Zakhar Kirpichenko
Out of curiosity, I tried setting mon_rocksdb_options via ceph.conf. This didn't work either: ceph.conf gets overridden at monitor start, the resulting ceph.conf inside the monitor container doesn't contain mon_rocksdb_options, and the monitor starts with no RocksDB compression. I would appreciate it if
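
For context, a sketch of the kind of setting being attempted; the option string here is an assumption based on RocksDB's standard option syntax, not the poster's exact configuration:

    # Attempt via the centralized config store
    ceph config set mon mon_rocksdb_options "write_buffer_size=33554432,compression=kLZ4Compression"

    # Attempt via ceph.conf on the monitor hosts (the approach described above)
    [mon]
    mon_rocksdb_options = write_buffer_size=33554432,compression=kLZ4Compression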

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-15 Thread Patrick Begou
Hi Johan,

So it is not O.S.-related, as you are running Debian and I am running Alma Linux. But I'm surprised that so few people run into this bug.

Patrick

On 13/10/2023 at 17:38, Johan wrote:
At home I'm running a small cluster, Ceph v17.2.6, Debian 11 Bullseye. I have recently added a new serv
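
For reference, a minimal sketch of inspecting what the orchestrator's device scan actually sees (standard cephadm/ceph-volume commands, assuming a cephadm-managed cluster; run the second command on the affected host):

    # Re-scan and list devices, including reject reasons for unavailable ones
    ceph orch device ls --wide --refresh

    # Run the underlying inventory scan directly on the host
    cephadm ceph-volume inventory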