[ceph-users] logging with container

2022-03-20 Thread Tony Liu
Hi, After reading through the docs, it's still not very clear to me how logging works with containers. This is with the Pacific v16.2 container. In the OSD container, I see this: ``` /usr/bin/ceph-osd -n osd.16 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true
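Not part of the original message, but for context: with those defaults the containerized daemon logs to stderr, which journald captures. A minimal sketch of how those logs are usually reached, and how classic file logging can be switched back on cluster-wide (the osd.16 name and the <fsid> placeholder are illustrative):
```
# Follow the journal for a cephadm-managed daemon (the unit name includes the cluster fsid)
journalctl -fu ceph-<fsid>@osd.16.service

# Or let cephadm locate and print the daemon's journal output
cephadm logs --name osd.16

# Re-enable classic file logging under /var/log/ceph if preferred
ceph config set global log_to_file true
ceph config set global mon_cluster_log_to_file true
ceph config set global log_to_stderr false
```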

[ceph-users] Re: Fw:Cephfs:Can`t get read/write io size metrics by kernel client

2022-03-20 Thread Xiubo Li
Hi xuchenhuig >       "client_metadata": { >           "client_features": { >               "feature_bits": "0x00ff" >           }, From the feature_bits above, it looks like you are still using an old kernel, which doesn't support the io read/write metrics yet. Please make sure the
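A quick way to cross-check this (not from the original thread, just a sketch; the MDS name is a placeholder and the debugfs layout varies by kernel version):
```
# On the cluster: list client sessions, including client_metadata / feature_bits
ceph tell mds.<mds-name> session ls

# On the kernel client: check the kernel version in use
uname -r

# Newer kernels expose per-mount latency/size metrics under debugfs
# (exact layout differs between kernel versions; treat this path as an assumption)
ls /sys/kernel/debug/ceph/*/
```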

[ceph-users] Re: HELP! Upgrading monitors from 14.2.22 to 16.2.7 immediately crashes in FSMap::decode()

2022-03-20 Thread Tyler Stachecki
What does 'ceph mon dump | grep min_mon_release' say? You're running msgrv2 and all Ceph daemons are talking on v2, since you're on Nautilus, right? Was the cluster conceived on Nautilus, or something earlier? Tyler On Sun, Mar 20, 2022 at 10:30 PM Clippinger, Sam wrote: > > Hello! > > I need
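For anyone following along, a sketch of the pre-upgrade checks being asked about here (standard Ceph CLI; the expected outputs in the comments are only what one would hope to see before a Nautilus-to-Pacific jump):
```
# The cluster must already require Nautilus before jumping to Pacific
ceph mon dump | grep min_mon_release    # expect: min_mon_release 14 (nautilus)

# Confirm the monitors advertise msgr v2 addresses
ceph mon dump | grep v2:

# Enable msgr v2 if it was never turned on
ceph mon enable-msgr2

# Confirm every daemon is actually running Nautilus before starting
ceph versions
```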

[ceph-users] bind monitoring service to specific network and port

2022-03-20 Thread Tony Liu
Hi, https://docs.ceph.com/en/pacific/cephadm/services/monitoring/#networks-and-ports When I try that with the Pacific v16.2 image, the port works but the network doesn't. No matter which network is specified in the yaml file, orch apply always binds the service to *. Is this a known issue or something I am missing?
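For reference, the spec format described on that doc page looks roughly like the following, applied with `ceph orch apply -i grafana.yaml` (the service, subnet and port values here are placeholders, not taken from the original post):
```yaml
service_type: grafana
service_name: grafana
placement:
  count: 1
networks:
- 192.168.10.0/24   # bind only to an address within this subnet
spec:
  port: 4200
```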

[ceph-users] orch apply failed to use insecure private registry

2022-03-20 Thread Tony Liu
Hi, I am using the Pacific v16.2 container image. I put the images on an insecure private registry. I am using podman, and /etc/containers/registries.conf is set with that insecure private registry. "cephadm bootstrap" works fine to pull the image and set up the first node. When "ceph orch apply -i
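Not from the original message, but the kind of per-host registry configuration in question is sketched below; since "ceph orch apply" has podman pull the image on every target host, the same file generally has to exist on each of them (the registry address is a placeholder):
```
# /etc/containers/registries.conf (v2 format), on every host in the cluster
[[registry]]
location = "registry.example.local:5000"
insecure = true
```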

[ceph-users] Re: octopus (15.2.16) OSDs crash or don't answer heathbeats (and get marked as down)

2022-03-20 Thread Boris Behrens
So, I have tried to remove the OSDs, wipe the disks and sync them back in without the block.db SSD. (Still in progress, 212 spinning disks take time to out and in again.) And I just experienced the same behavior on one OSD on a host where all disks got synced in new. This disk was marked as in
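For context, the drain/wipe/re-add cycle described above typically looks something like this (a sketch only; the OSD id and device path are placeholders, and the original poster may be using different tooling such as ceph orch):
```
# Drain one OSD and wait for the cluster to rebalance
ceph osd out 16
# ... wait until `ceph -s` shows no misplaced/degraded objects ...

# Remove the OSD and wipe the underlying device
systemctl stop ceph-osd@16
ceph osd purge 16 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdX --destroy

# Recreate the OSD on the bare HDD, this time without a separate --block.db device
ceph-volume lvm create --data /dev/sdX
```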