On 11.10.2016 17:40, Ken Gaillot wrote:
On 10/11/2016 07:06 AM, Pavel Levshin wrote:
Hi!
In continuation of previous mails, I now have a more complex setup. Our
hardware is capable of two STONITH methods: iLO and SCSI persistent
reservations on shared storage. The first method works fine; nevertheless,
On 11/10/16 12:07, Dennis Jacobfeuerborn wrote:
> On 11.10.2016 12:42, Christine Caulfield wrote:
>> I've just committed a big patch to the master branch of corosync - it is
>> now all very experimental, and existing pull requests against master
>> might need to be checked. This starts the work on
Hi!
In continuation of previous mails, I now have a more complex setup. Our
hardware is capable of two STONITH methods: iLO and SCSI persistent
reservations on shared storage. The first method works fine; nevertheless,
sometimes in the past we faced problems with inaccessible iLO devices or
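Combining an iLO device with SCSI persistent reservations is typically done with Pacemaker fencing levels, so the cluster falls back to fence_scsi when iLO is unreachable. A minimal sketch with pcs — the device names, addresses, and node names below are illustrative placeholders, not taken from this thread:

```shell
# Sketch only: assumes pcs plus the fence_ilo and fence_scsi agents
# are installed; all names and addresses are hypothetical.
pcs stonith create fence-ilo-node1 fence_ilo \
    ipaddr=ilo-node1.example.com login=admin passwd=secret \
    pcmk_host_list=node1
pcs stonith create fence-scsi fence_scsi \
    devices=/dev/disk/by-id/wwn-0x5000c50001234567 \
    pcmk_host_list="node1 node2" meta provides=unfencing
# Level 1 is tried first (iLO); level 2 is the fallback (SCSI reservation)
pcs stonith level add 1 node1 fence-ilo-node1
pcs stonith level add 2 node1 fence-scsi
```

With this topology, an unreachable iLO no longer leaves the node unfenceable: Pacemaker moves on to the SCSI reservation method.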
On 11.10.2016 12:42, Christine Caulfield wrote:
I've just committed a big patch to the master branch of corosync - it is
now all very experimental, and existing pull requests against master
might need to be checked. This starts the work on what will hopefully
become corosync 3.0.

The commit is to make Kronosnet the new default transport for
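On the corosync master branch described here, the transport is selected in the totem section of corosync.conf; a minimal sketch (values are illustrative, not from the mail):

```
totem {
    version: 2
    cluster_name: testcluster
    # Kronosnet is the new default transport on the master branch
    transport: knet
    # knet supports per-link crypto; hypothetical example settings
    crypto_cipher: aes256
    crypto_hash: sha256
}
```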
Hi Klaus,
Thank you for the comment.
I am making a prototype patch using the WD service.
Please wait a little.
Best Regards,
Hideo Yamauchi.
- Original Message -
> From: Klaus Wenninger
> To: users@clusterlabs.org
> Cc:
> Date: 2016/10/10, Mon 21:03
> Subject:
On 11/10/16 08:22, Vladislav Bogdanov wrote:
> On 11.10.2016 09:31, Ulrich Windl wrote:
>> Klaus Wenninger wrote on 10.10.2016 at 20:04 in
>> message <936e4d4b-df5c-246d-4552-5678653b3...@redhat.com>:
>>> On 10/10/2016 06:58 PM, Eric Robinson wrote:
Thanks for
On 10/10/16 19:35, Eric Robinson wrote:
> Basically, when we turn off a switch, I want to keep the cluster from failing
> over before Linux bonding has had a chance to recover.
>
> I'm mostly interested in preventing false-positive cluster failovers that
> might occur during manual network
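One common way to keep a short switch outage from triggering failover (an approach not stated in the thread itself) is to make the corosync token timeout longer than the bond's recovery window, e.g. in corosync.conf:

```
totem {
    version: 2
    # Declare a node lost only after 10 s without the totem token
    # (value in ms; 10000 is an illustrative figure), giving the
    # bonding driver time to fail over to a healthy link first.
    token: 10000
}
```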
On 11.10.2016 09:31, Ulrich Windl wrote:
Klaus Wenninger wrote on 10.10.2016 at 20:04 in
message <936e4d4b-df5c-246d-4552-5678653b3...@redhat.com>:
On 10/10/2016 06:58 PM, Eric Robinson wrote:
Thanks for the clarification. So what's the easiest way to ensure
that the

>>> Klaus Wenninger wrote on 10.10.2016 at 20:42 in
message <0713ae34-7606-a82b-47f8-5cc64bfca...@redhat.com>:
> On 10/10/2016 08:35 PM, Eric Robinson wrote:
>> Basically, when we turn off a switch, I want to keep the cluster from
>> failing over before Linux bonding has
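On the Linux side, the bond's own recovery time is governed by the bonding driver's link-monitoring options; a hedged sketch of an active-backup bond in /etc/modprobe.d (interface mode and values are illustrative, not from the thread):

```
# /etc/modprobe.d/bonding.conf (hypothetical values)
# miimon=100: check link state every 100 ms
# downdelay/updelay=200: wait 200 ms before acting on a link change,
# smoothing out brief flaps while the switch reconverges
options bonding mode=active-backup miimon=100 downdelay=200 updelay=200
```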
>>> Chad Cravens wrote on 10.10.2016 at 19:18 in
message:
> The client has specifically said that they purposefully chosen not to
> implement Oracle RAC (I'm not sure why). It's a large program and
On Tue, Oct 11, 2016 at 9:18 AM, Ulrich Windl
wrote:
>
> My point is this: For a resource that can only exclusively run on one node,
> it's important that the other node is down before taking action. But for cLVM
> and OCFS2 the resources can run concurrently
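Resources like DLM/cLVM and an OCFS2 filesystem that legitimately run on every node at once are modeled as clones, which is why their fencing requirements differ from a resource that must run exclusively on one node. A sketch with pcs — agent choices and names are illustrative assumptions, not from the thread:

```shell
# Sketch only: assumes the dlm (controld) and clvm agents are installed;
# resource names are hypothetical.
pcs resource create dlm ocf:pacemaker:controld \
    op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs resource create clvmd ocf:heartbeat:clvm \
    op monitor interval=30s on-fail=fence clone interleave=true ordered=true
# clvmd needs dlm up on the same node first
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint colocation add clvmd-clone with dlm-clone
```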
emmanuel segura wrote on 10.10.2016 at 16:49 in
message: