This also sounds like a possible GlusterFS use case.
Regards,
-Jamie
Hi,
Thank you, guys. I deployed a third monitor and failover works. Thank you!
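For anyone following along: on a cephadm-managed cluster, adding the third
MON is roughly the sketch below ("mon3" is just a placeholder for the extra
host; other deployment tools differ):

    # add a MON daemon on the extra host
    ceph orch daemon add mon mon3

    # check that all three MONs have joined the quorum
    ceph quorum_status --format json-pretty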
On Tue, Jun 15, 2021 at 4:15 PM Christoph Brüning <
christoph.bruen...@uni-wuerzburg.de> wrote:
Hi,
That's right!
We're currently evaluating a similar setup with two identical HW nodes
(on two different sites), with OSD, MON and MDS each, and both nodes
have CephFS mounted.
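As a sketch, the kernel-client fstab entry on both nodes might look like
this (host names, client name and secret file are placeholders):

    # /etc/fstab - list both MONs so the mount survives one node being down
    node1:6789,node2:6789:/  /mnt/cephfs  ceph  name=cephfs,secretfile=/etc/ceph/cephfs.secret,noatime,_netdev  0 0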
The goal is to build a minimal self-contained shared filesystem that
remains online during planned updates and
It's easy. The problem is that the OSDs are still marked "up" because there
are not enough down reporters (mon_osd_min_down_reporters), and because of
that the MDS gets stuck.
The solution is "mon_osd_min_down_reporters = 1".
With a "two node" cluster and "replicated 2" with "chooseleaf host",
the reporter count should be set to 1.
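As a sketch, applying it (assuming a release with the central config
database; on older clusters put it into ceph.conf and restart the MONs):

    # at runtime, via the config database
    ceph config set mon mon_osd_min_down_reporters 1

    # or persistently in /etc/ceph/ceph.conf
    [mon]
    mon_osd_min_down_reporters = 1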
On 15.06.21 15:16, nORKy wrote:
> Why is there no failover ??
Because one MON out of two is not a majority, so no quorum can be formed.
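The arithmetic: a quorum needs floor(n/2) + 1 monitors.

    n = 2  ->  quorum = 2  (no MON may fail)
    n = 3  ->  quorum = 2  (one MON may fail)

That is why deploying a third monitor fixes the failover.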
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19