[ceph-users] Re: [ceph v16.2.10] radosgw crash

2023-08-18 Thread 1187873955
Here is the backtrace info in the core file:

(gdb) bt
#0  0x7f79da065b7f in raise () from /lib64/libpthread.so.0
#1  0x7f79e5303563 in reraise_fatal (signum=6) at /usr/src/debug/ceph-16.2.10-0.el8.x86_64/src/global/signal_handler.cc:332
#2  handle_fatal_signal (signum=6) at

[ceph-users] Re: Degraded FS on 18.2.0 - two monitors per host????

2023-08-18 Thread Robert W. Eckert
Hi - it settled back to having 4 MDS services, and the file system is up and running. However, the 4 MDS services are just on 2 hosts:

[root@story ~]# ceph fs status home
home - 6 clients
RANK  STATE   MDS               ACTIVITY  DNS  INOS  DIRS  CAPS
 0    active  home.hiho.mssdyh

[ceph-users] Re: Degraded FS on 18.2.0 - two monitors per host????

2023-08-18 Thread Eugen Block
Hi, your subject says "...two monitors per host", but I guess you're asking about MDS daemons per host. ;-) What's the output of 'ceph orch ls mds --export'? You're using 3 active MDS daemons; maybe you set "count_per_host: 2" to have enough standby daemons? I don't think an upgrade would
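[For context, a sketch of what 'ceph orch ls mds --export' might return in such a setup. This is a hypothetical service spec, not output from the reporter's cluster; the service id "home" and host name "story" are taken from the thread above, the second host name is invented. With "count_per_host: 2" cephadm deploys two MDS daemons on each listed host, which would explain 4 daemons on 2 hosts.]

service_type: mds
service_id: home
placement:
  count_per_host: 2   # two MDS daemons per matched host (assumed setting)
  hosts:
  - story             # host seen in the reporter's prompt
  - otherhost         # hypothetical second host

[Lowering count_per_host to 1 and widening the host list would spread the daemons across more hosts.]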