Hi,
I suspect a bug in cephadm when configuring an ingress service for RGW. Our
production cluster was upgraded continuously from Luminous to Pacific. When
I configure an ingress service for RGW, the generated haproxy.cfg is
incomplete. The same YAML file applied on our test cluster does the job.
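For what it's worth, I looked at the generated file directly on one of the
ingress hosts with something like the commands below; the fsid and the
haproxy daemon name are placeholders, and the exact path may differ between
cephadm versions:

ceph orch ps | grep haproxy
cat /var/lib/ceph/<fsid>/haproxy.<service_id>.<host>.<random>/haproxy/haproxy.cfg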
Regards,
Hi,
Our cluster runs Pacific on Rocky 8. We have 3 RGWs running on port 7480.
I tried to set up an ingress service with the following YAML service
definition, but with no luck:
service_type: ingress
service_id: rgw.myceph.be
placement:
  hosts:
    - ceph001
    - ceph002
    - ceph003
spec:
  backend_service: rgw
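For comparison, my understanding from the documentation is that a more
complete ingress spec looks roughly like the one below. The virtual IP and
ports are placeholders, I'm assuming the RGW service is also named
rgw.myceph.be, and backend_service should match the exact RGW service name
reported by "ceph orch ls":

service_type: ingress
service_id: rgw.myceph.be
placement:
  hosts:
    - ceph001
    - ceph002
    - ceph003
spec:
  backend_service: rgw.myceph.be
  virtual_ip: 192.0.2.10/24
  frontend_port: 8080
  monitor_port: 1967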
Hi,
We've already converted two PRODUCTION storage nodes running Octopus to
cephadm without any problem.
On the third one, we only succeeded in converting one OSD:
[root@server4 osd]# cephadm adopt --style legacy --name osd.0
Found online OSD at //var/lib/ceph/osd/ceph-0/fsid
objectstore_type is bluestore
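For completeness, here is roughly what we run for the other OSDs on that
node and how we check what cephadm already manages there; osd.1 is just an
example ID, and "cephadm ls" lists the daemons on the host with their style
(legacy or cephadm):

[root@server4 osd]# cephadm adopt --style legacy --name osd.1
[root@server4 osd]# cephadm ls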
Hi,
We are currently upgrading our cluster from Nautilus to Octopus.
After upgrading the mons and mgrs, we get warnings about the number of PGs.
Which parameter changed during the upgrade to explain these new warnings?
Nothing else was changed.
Is it risky to change the PGs per pool as proposed?
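For what it's worth, these are the commands I'm using to try to understand
where the warnings come from; I suspect the pg_autoscaler defaults changed,
but I'm not certain, and <pool> is just a placeholder for a pool name:

ceph osd pool autoscale-status
ceph osd pool get <pool> pg_autoscale_mode
ceph config get mon mon_max_pg_per_osd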
Hi,
I use a Ceph test infrastructure with only two storage servers running
the OSDs. Objects are replicated between these servers:
[ceph: root@cepht001 /]# ceph osd dump | grep 'replicated size'
pool 1 '.rgw.root' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_nu
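For reference, two-way replication like this is normally set per pool with
something like the following, where <pool> is a placeholder for a pool name:

ceph osd pool set <pool> size 2
ceph osd pool set <pool> min_size 1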
Hi,
When you change the configuration of your cluster with "ceph orch apply
..." or "ceph orch daemon ...", tasks are scheduled:
[root@cephc003 ~]# ceph orch apply mgr --placement="cephc001 cephc002 cephc003"
Scheduled mgr update...
Is there a way to list all the pending tasks?
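So far the only things I have tried for this are watching the cephadm log
channel and checking the orchestrator status, with something like the
commands below, but I'm not sure they show everything that is still pending:

ceph orch status
ceph -W cephadm
ceph log last cephadm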
Regards,
Hi,
On my test cluster, I migrated from Nautilus to Octopus and then
converted most of the daemons to cephadm. I ran into a lot of problems with
podman 1.6.4 on CentOS 7 through an HTTPS proxy because my servers are
on a private network.
Now, I'm unable to deploy new managers and the cluster is in