Hi,
After reading through the docs, it's still not very clear to me how logging works
with containers.
This is with a Pacific v16.2 container.
In the OSD container, I see this:
```
/usr/bin/ceph-osd -n osd.16 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true
```
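My current understanding (please correct me if I'm wrong) is that with these flags the
daemon only logs to stderr, so the output lands in the journal of the container's systemd
unit, and file logging can be turned back on via ceph config. A rough sketch of what I
mean; the fsid and daemon name are placeholders:
```
# Read the stderr logs via cephadm or the journal of the container unit
cephadm logs --name osd.16
journalctl -u ceph-<fsid>@osd.16

# Switch back to traditional file logging under /var/log/ceph, cluster-wide
ceph config set global log_to_file true
ceph config set global mon_cluster_log_to_file true
ceph config set global log_to_stderr false
ceph config set global mon_cluster_log_to_stderr false
```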
Hi xuchenhuig
> "client_metadata": {
> "client_features": {
> "feature_bits": "0x00ff"
> },
From the above feature_bits, it looks like you are still using an old
kernel, which doesn't support the io read/write metrics yet.
Please make sure the
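In case it helps, this is roughly how I would check the client side (the mds name is a
placeholder, and I'm not sure offhand which exact kernel version added these metrics):
```
# On the client host: which kernel is actually in use?
uname -r

# On the cluster: list the sessions again and look at client_metadata / feature_bits
ceph tell mds.0 client ls
```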
What does 'ceph mon dump | grep min_mon_release' say? You're running
msgrv2 and all Ceph daemons are talking on v2, since you're on
Nautilus, right?
Was the cluster originally deployed on Nautilus, or something earlier?
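Roughly, these are the checks I have in mind (run from any node with an admin keyring):
```
# Minimum monitor release in the quorum
ceph mon dump | grep min_mon_release

# Release each daemon is actually running
ceph versions

# Are the monitors advertising v2 (msgr2) addresses?
ceph mon dump | grep v2

# If they aren't, msgr2 can be enabled with:
ceph mon enable-msgr2
```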
Tyler
On Sun, Mar 20, 2022 at 10:30 PM Clippinger, Sam wrote:
>
> Hello!
>
> I need
Hi,
https://docs.ceph.com/en/pacific/cephadm/services/monitoring/#networks-and-ports
When I try that with the Pacific v16.2 image, the port setting works, but the network
setting doesn't.
No matter which network is specified in the yaml file, orch apply always binds the
service to *.
Is this a known issue, or am I missing something?
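For context, this is roughly the kind of spec I'm applying, modeled on the grafana
example from that page (the subnet and port are placeholders for my real values):
```
cat > grafana.yaml <<EOF
service_type: grafana
service_name: grafana
placement:
  count: 1
networks:
- 192.169.142.0/24
spec:
  port: 4200
EOF

ceph orch apply -i grafana.yaml
```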
Hi,
I am using the Pacific v16.2 container image. I put the images on an insecure private
registry.
I am using podman, and /etc/containers/registries.conf is set up with that insecure
private registry.
"cephadm bootstrap" works fine to pull the image and setup the first node.
When "ceph orch apply -i
So,
I have tried to remove the OSDs, wipe the disks, and bring them back in
without the block.db SSD. (Still in progress; 212 spinning disks take time to
take out and bring back in.)
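Per disk, the procedure I'm following is roughly this (the OSD id, hostname, and device
path below are placeholders):
```
# Drain and remove the OSD; data is migrated off before the daemon is removed
ceph orch osd rm 16
ceph orch osd rm status          # watch the drain progress

# Once it is gone, wipe the device so it can be redeployed without the block.db SSD
ceph orch device zap my-host /dev/sdx --force

# Re-add it (or let an existing OSD service spec pick the clean device up again)
ceph orch daemon add osd my-host:/dev/sdx
```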
And I just experienced the same behavior on one OSD on a host where all
disks were brought in fresh. This disk was marked as in