Hi,
For what it's worth, we have a similar problem on 16.2.10 that I have not had time
to troubleshoot yet. It happened after adding a haproxy in front of rgw to
terminate https and switch rgw back to plain http (to work around the other problem
mentioned when using https directly in rgw). The access/secret key is refused despite
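For reference, a minimal sketch of that kind of haproxy TLS-termination setup (the certificate path, addresses and ports below are assumptions for illustration, not taken from the original post); behind it, rgw would listen on plain http, e.g. rgw_frontends = "beast port=8080":

    frontend rgw_https
        bind *:443 ssl crt /etc/haproxy/certs/rgw.pem
        mode http
        default_backend rgw_http

    backend rgw_http
        mode http
        balance roundrobin
        server rgw1 192.168.1.11:8080 check
        server rgw2 192.168.1.12:8080 check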
Casey,
Thanks. This all worked. Some observations and comments for others that may be
in my situation:
1. When deleting the roles on the secondary with radosgw-admin role delete
I had to delete all the policies of each role before I deleted the role
itself (example commands below).
2. radosgw-admin complained w
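A sketch of that cleanup sequence, assuming a role named S3Access (the role and policy names here are placeholders, not taken from the original message):

    # list the permission policies still attached to the role
    radosgw-admin role-policy list --role-name=S3Access

    # delete each attached policy first
    radosgw-admin role-policy delete --role-name=S3Access --policy-name=Policy1

    # only then will deleting the role itself succeed
    radosgw-admin role delete --role-name=S3Access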
Hi,
As you might know, I have a problem with MDS not starting. During the
investigation with your help I found another issue that might be related.
I can schedule restart, redeploy, or reconfigure actions for services via cephadm or
the dashboard as much as I want, but the services won't react. I only see the action
to
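For context, the orchestrator actions in question would be along these lines (the service name mds.cephfs is just an example):

    # schedule a restart / redeploy of a service through cephadm
    ceph orch restart mds.cephfs
    ceph orch redeploy mds.cephfs

    # check whether the daemons actually cycled
    ceph orch ps --daemon-type mds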
On 13/04/2023 18:15, Gilles Mocellin wrote:
I suspect the same origin as our problem in 16.2.11; see Tuesday's thread
"[ceph-users] Pacific dashboard: unable to get RGW information".
https://www.mail-archive.com/ceph-users%40ceph.io/msg19566.html
Unfortunately I don't think it is the same problem
On Thursday, 13 April 2023 at 18:20:27 CEST, Chris Palmer wrote:
> Hi
Hello,
> I have 3 Ceph clusters, all configured similarly, which have been happy
> for some months on 17.2.5:
>
> 1. A test cluster
> 2. A small production cluster
> 3. A larger production cluster
>
> All are debian 11 built
For KVM virtual machines, one of my coworkers worked out a way to live-migrate a VM,
together with its storage, to another node on the same cluster, placing the storage
in a different pool. This requires that the VM can be live-migrated to a new host
that has access to the same Ceph cluster, and enough bandwidth
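The message is cut off here, but one plausible shape for that kind of move (not necessarily the exact method referred to above; the image, domain, and host names are placeholders) is a libvirt live migration that copies the disk into a pre-created image in the target pool:

    # pre-create the destination image in the target pool (matching size)
    rbd create newpool/vm-disk --size 100G

    # live-migrate the domain, letting QEMU copy the disk contents across;
    # vm-newpool.xml is the domain XML edited to point the disk at newpool/vm-disk
    virsh migrate --live --persistent --copy-storage-all \
        --xml vm-newpool.xml vm1 qemu+ssh://dest-node/system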
Hi
I have 3 Ceph clusters, all configured similarly, which have been happy
for some months on 17.2.5:
1. A test cluster
2. A small production cluster
3. A larger production cluster
All are debian 11 built from packages - no cephadm.
I upgraded (1) to 17.2.6 without any problems at all. In pa
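For anyone following the same path, a rough sketch of the usual package-based (non-cephadm) upgrade flow on Debian (this is the standard documented procedure, not detail taken from the message itself):

    # on each node, after pointing the apt sources at 17.2.6
    apt update && apt full-upgrade

    # restart daemons in the usual order, node by node
    systemctl restart ceph-mon.target
    systemctl restart ceph-mgr.target
    systemctl restart ceph-osd.target
    systemctl restart ceph-mds.target

    # confirm every daemon reports the new version
    ceph versions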
I've used a similar process with great success for capacity management --
moving volumes from very full clusters to ones with more free space. There was
a weighting system to direct new volumes where there was space, but, to
forestall full ratio problems due to organic growth of existing
thi
Hi,
On 4/12/23 19:09, Work Ceph wrote:
Exactly, I have seen that. However, that also means that it is not a
"process" then, right? Am I missing something?
If we need a live process, where the clients cannot unmap the volumes, what
do you guys recommend?
We have performed a "live migration" b
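The reply is cut off here, but for readers of the archive: RBD ships a built-in live-migration mechanism along these lines (pool and image names are placeholders). Note that, outside import-only mode, clients still have to stop using the source image briefly before the prepare step, after which they open the target image while the copy runs in the background:

    # link the target image to the source and switch clients over to the target
    rbd migration prepare oldpool/volume1 newpool/volume1

    # copy the data in the background while the target image is in use
    rbd migration execute newpool/volume1

    # finalize once the copy has completed
    rbd migration commit newpool/volume1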