On Thu, 2019-10-10 at 17:22 +0200, Lentes, Bernd wrote:
> Hi,
>
> I have a two-node cluster running on SLES 12 SP4.
> I did some testing on it.
> I put one node into standby (ha-idg-2); the other (ha-idg-1) got fenced a
> few minutes later because I made a mistake.
> ha-idg-2 was DC. ha-idg-1 made a
On Wed, 2019-10-09 at 16:53 +0200, Lentes, Bernd wrote:
> Hi,
>
> I finally managed to find out how I can simulate configuration
> changes and see their results before committing them.
> OMG. That makes life much more relaxed. I need to change the
> configuration of a resource which is part of a
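The workflow referred to above can be sketched with a shadow CIB, assuming
crmsh and the Pacemaker CLI tools (the shadow name "test" and the Dummy
resource are illustrative, not from the thread):

    # Work on a shadow copy of the CIB instead of the live cluster;
    # this opens a subshell with CIB_shadow pointing at the copy
    crm_shadow --create test

    # Stage a change; while the shadow is active, crmsh commits to the copy
    crm configure primitive d0 ocf:pacemaker:Dummy op monitor interval=30s

    # Ask the scheduler what it would do with this configuration
    crm_simulate --simulate --live-check --show-scores

    # Push the shadow CIB to the live cluster only when satisfied
    crm_shadow --commit test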
On Wed, 2019-10-09 at 20:10 +0200, Kadlecsik József wrote:
> On Wed, 9 Oct 2019, Ken Gaillot wrote:
>
> > > One of the nodes has got a failure ("watchdog: BUG: soft lockup -
> > > CPU#7 stuck for 23s"), which resulted in the node being able to
> > > process traffic on the backend
10.10.2019 18:22, Lentes, Bernd wrote:
> Hi,
>
> I have a two-node cluster running on SLES 12 SP4.
> I did some testing on it.
> I put one node into standby (ha-idg-2); the other (ha-idg-1) got fenced a few
> minutes later because I made a mistake.
> ha-idg-2 was DC. ha-idg-1 did a fresh boot and I
Hi,
I have a two-node cluster running on SLES 12 SP4.
I did some testing on it.
I put one node into standby (ha-idg-2); the other (ha-idg-1) got fenced a few
minutes later because I made a mistake.
ha-idg-2 was DC. ha-idg-1 did a fresh boot and I started corosync/pacemaker on
it.
It seems ha-idg-1
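For reference, the standby exercise described above can be reproduced with
crmsh roughly as follows (node names taken from the post):

    # Put a node into standby; its resources are moved elsewhere
    crm node standby ha-idg-2

    # One-shot view of cluster state, including the current DC
    crm_mon -1

    # Bring the node back once testing is done
    crm node online ha-idg-2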
Hi Team,
Can you provide the source code for cman, so we can go ahead and use CMAN as
the stack?
Thanks and Regards,
S Sathish S
On Mon, 2019-10-07 at 13:34 +0000, S Sathish S wrote:
> Hi Team,
>
> I have the two queries below; we have been using the RHEL 6.5 OS version
> with the ClusterLabs source below
Hi!
While adding a parameter to a primitive that is part of a group, I noticed
that "show changed" in the crm shell's "configure" mode displays not only the
primitive but also the group, even though the group itself was not changed.
Is that a bug?
Regards,
Ulrich
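A hypothetical session reproducing the observation (the primitive and group
names are made up):

    # p_dummy is a primitive inside group g_web
    crm configure
      edit p_dummy      # add a parameter in the editor
      show changed      # reportedly lists g_web as well, not just p_dummy
      commit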
>>> Andrei Borzenkov wrote on 10.10.2019 at 11:05 in message:
> On Thu, Oct 10, 2019 at 11:16 AM Ulrich Windl
> wrote:
>>
>> Hi!
>>
>> In recent SLES there is "cluster MD", like in
>> cluster-md-kmp-default-4.12.14-197.18.1.x86_64
>
On 10/9/19 3:28 PM, Andrei Borzenkov wrote:
> What happens if both the interconnect and the shared device are lost by a
> node? I assume the node will reboot, correct?
>
From my understanding of the Pacemaker integration feature in `man sbd`:
Yes, sbd will self-fence upon losing access to the sbd disk when the
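For context, the shared-disk watchdog setup being discussed is configured on
SLES in /etc/sysconfig/sbd; a minimal sketch with illustrative values (the
device path is a placeholder):

    # /etc/sysconfig/sbd -- shared-disk watchdog fencing
    SBD_DEVICE="/dev/disk/by-id/example-sbd-disk"   # placeholder path
    SBD_WATCHDOG_DEV="/dev/watchdog"
    SBD_PACEMAKER="yes"   # enable the Pacemaker integration from man sbd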
In addition to the admin guide, there are some more advanced articles
about the internals:
https://lwn.net/Articles/674085/
https://www.kernel.org/doc/Documentation/driver-api/md/md-cluster.rst
Cheers,
Roger
On 10/10/19 4:27 PM, Gang He wrote:
> Hello Ulrich
>
> Cluster MD belongs to SLE HA
On Thu, Oct 10, 2019 at 11:16 AM Ulrich Windl
wrote:
>
> Hi!
>
> In recent SLES there is "cluster MD", like in
> cluster-md-kmp-default-4.12.14-197.18.1.x86_64
> (/lib/modules/4.12.14-197.18-default/kernel/drivers/md/md-cluster.ko).
> However I could not find any manual page for it.
>
> Where
Hello Ulrich
Cluster MD belongs to the SLE HA extension product.
The related documentation is here:
https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#cha-ha-cluster-md
Thanks
Gang
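Following the guide linked above, creating a clustered MD RAID1 looks roughly
like this (device paths are placeholders; DLM must already be running on the
cluster):

    # On the first node: RAID1 with a clustered write-intent bitmap
    mdadm --create /dev/md0 --bitmap=clustered \
          --metadata=1.2 --raid-devices=2 --level=mirror \
          /dev/disk/by-id/diskA /dev/disk/by-id/diskB

    # On the other node: assemble the same array
    mdadm --assemble /dev/md0 \
          /dev/disk/by-id/diskA /dev/disk/by-id/diskB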
On Wed, 9 Oct 2019, Digimer wrote:
> > One of the nodes has got a failure ("watchdog: BUG: soft lockup -
> > CPU#7 stuck for 23s"), which resulted in the node being able to process
> > traffic on the backend interface but not on the frontend one. Thus the
> > services became unavailable but the
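One common mitigation for this kind of half-alive node, sketched with crmsh
(the ping target, resource, and group names are made up): monitor frontend
connectivity so the cluster can react when it is lost.

    # Clone a ping resource probing a frontend gateway
    crm configure primitive p_ping ocf:pacemaker:ping \
        params host_list="192.0.2.1" multiplier=100 \
        op monitor interval=10s
    crm configure clone cl_ping p_ping

    # Keep the (hypothetical) g_services group off nodes that lost it
    crm configure location loc_frontend_ok g_services \
        rule -inf: not_defined pingd or pingd lte 0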
Hi!
In recent SLES there is "cluster MD", like in
cluster-md-kmp-default-4.12.14-197.18.1.x86_64
(/lib/modules/4.12.14-197.18-default/kernel/drivers/md/md-cluster.ko).
However, I could not find any manual page for it.
Where is the official documentation? That is: where is a description of the