Hi all, I made a weird observation. 8 out of 12 MDS daemons no longer seem to
report to the cluster:
# ceph fs status
con-fs2 - 1625 clients
=======
RANK  STATE   MDS      ACTIVITY       DNS   INOS
 0    active  ceph-16  Reqs:    0 /s    0      0
 1    active  ceph-09  Reqs:  128 /s
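A rough way to tell "daemon is down" apart from "stats are stale" (the mds name
is taken from the output above; exact commands may vary a bit by release):

    ceph mds stat                   # are all ranks and standbys still registered?
    ceph tell mds.ceph-16 version   # does one of the "silent" daemons respond at all?
    ceph mgr fail                   # 'fs status' is served by the active mgr, so
                                    # stale mgr counters can make healthy MDSes look idle

If the daemons still respond to tell commands, the problem is more likely in the
reporting path (mgr) than in the MDS daemons themselves.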
That may be the very one I was thinking of, though the OP seemed to be
preserving the IP addresses, so I suspect containerization is in play.
> On Sep 9, 2023, at 11:36 AM, Tyler Stachecki wrote:
>
> On Sat, Sep 9, 2023 at 10:48 AM Anthony D'Atri wrote:
>> There was also at one point an
On Sat, Sep 9, 2023 at 10:48 AM Anthony D'Atri wrote:
> There was also at one point an issue where clients wouldn’t get a runtime update
> of new mons.
There are also 8+ year old unresolved bugs like this in OpenStack Cinder
that will bite you if the relocated mons have new IP addresses:
Which Ceph release are you running, and how was it deployed?
With some older releases I experienced mons behaving unexpectedly when one
member of the quorum bounced, so I like to segregate them for isolation still.
There was also at one point an issue where clients wouldn’t get a runtime update of
new mons.
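Even without that bug, long-lived clients generally only pick up new mon
addresses out of band. A minimal sketch, assuming the default config paths:

    ceph mon dump                       # mon addresses as the cluster currently sees them
    ceph config generate-minimal-conf   # minimal ceph.conf with the updated mon_host line
    # distribute that to /etc/ceph/ceph.conf on the clients, then restart or remount them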
Hello,
I am looking for best-practice guidance on the following situation.
There is a Ceph cluster with CephFS deployed. There are three servers
dedicated to running MDS daemons: one active, one standby-replay, and one
standby. There is only a single rank.
Sometimes, servers need to be
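When one of those servers has to go down, a minimal sketch of the failover step
that is usually done first ('myfs' is only a placeholder for the filesystem name):

    ceph fs status myfs    # confirm which daemon currently holds rank 0
    ceph mds fail myfs:0   # hand rank 0 over to a standby (typically the standby-replay)
    ceph fs status myfs    # wait until the new rank 0 is back in 'active'
    # the old host can then be serviced; its daemon rejoins as a standby afterwards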
Hi,
is it an actual requirement to redeploy the MONs? Almost all of the
clusters we support or assist with have MONs and OSDs colocated. MON
daemons are quite lightweight services, so if it's not really
necessary, I'd leave it as it is.
If you really need to move the MONs to different
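A rough cephadm-style sketch of that kind of move, one mon at a time (host names
and the IP are placeholders; the point is to keep quorum at every step):

    ceph orch daemon add mon newhost1:10.0.0.11   # bring a mon up on a new host first
    ceph mon stat                                 # wait until it has joined the quorum
    ceph orch daemon rm mon.oldhost1 --force      # only then retire one of the old mons
    # repeat host by host, never dropping below a majority

Depending on how the mon service spec is managed, cephadm may prefer an adjusted
placement (or an unmanaged spec) over adding daemons by hand, so treat this as
the general shape rather than a recipe.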
Hi,
I am writing to seek guidance and best practices for a maintenance operation
in my Ceph cluster. I have an older cluster in which the Monitors (Mons)
and Object Storage Devices (OSDs) are currently deployed on the same host.
I am interested in separating them while ensuring zero downtime and
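A quick way to see the current layout before planning the move, assuming a
cephadm-managed cluster (otherwise the same information is in ceph.conf and the
monmap):

    ceph orch ps --daemon-type mon   # which hosts currently run mons
    ceph orch ps --daemon-type osd   # which hosts currently run OSDs
    ceph mon dump                    # current monmap and mon IP addresses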