After testing on a non-production environment, we decided to upgrade our
running cluster to jewel 10.2.3. Our cluster has 3 monitors and 8 nodes
with 20 disks each. The cluster runs hammer 0.94.5 with tunables set to
"bobtail". As the cluster is in production and it wasn't possible to
upgrade the ceph client
Hi Vincent,
When I did the upgrade, I upgraded all clients and servers at the same time.
No issues during the upgrade at all. No downtime.
However, when I set the tunables to optimal I lost all IO to the clients.
It happened gradually: over a few hours the iowait went from a low figure
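For reference, the tunables profile can be inspected and changed with the
CRUSH tunables commands below; a sketch only (the throttle values are
illustrative, and stepping through intermediate profiles is a suggestion,
not something the original poster described):

```shell
# Show the current CRUSH tunables profile before changing anything
ceph osd crush show-tunables

# Throttle backfill/recovery first so the data movement triggered by a
# tunables change does not starve client IO (values are illustrative)
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

# Consider stepping through profiles (e.g. bobtail -> firefly -> hammer)
# and letting the cluster settle, rather than jumping straight to optimal
ceph osd crush tunables firefly
```

Note that "optimal" on a jewel cluster selects the jewel tunables, which
old kernel clients may not support, so check client compatibility first.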
Hi Greg, John, Zheng, CephFSers,
Maybe a simple question, but I think it's better to ask first than to
complain afterwards.
We are currently undergoing an infrastructure migration. One of the first
machines to go through this migration process is our standby-replay mds. We are
running 10.2.2. My pla
Hi Vincent,
When I did a similar upgrade I found that having mixed-version OSDs caused
issues much like yours. My advice to you is to power through the upgrade as
fast as possible. Pretty sure this is related to an issue/bug discussed here
previously around excessive load on the monitors in mix
Hi list, can anyone please clarify whether the default 'rgw print continue
= true' is supported by civetweb?
I'm using radosgw with civetweb, and this document (maybe outdated?)
mentions installing apache:
http://docs.ceph.com/docs/hammer/install/install-ceph-gateway/. This
ticket seems to keep 'prin
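For what it's worth, the apache instructions in that document target the
older fastcgi deployment; with the embedded civetweb frontend no apache is
needed at all. A minimal ceph.conf fragment for a civetweb-only radosgw
(the instance name and port here are just examples):

```ini
# ceph.conf - radosgw with the embedded civetweb frontend, no apache/fastcgi
[client.rgw.gateway]
rgw frontends = civetweb port=7480
```

The 'rgw print continue' option was mainly a workaround for apache/fastcgi
frontends that mishandled HTTP 100-continue; I can't say for certain how
civetweb treats it, so hopefully someone closer to the rgw code can confirm.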