On 09/15/2015 04:32 PM, Jorge Fábregas wrote:
> I have a situation where the watchdog provided by the hypervisor (z/VM)
> is not configurable (you can't change the heartbeat via the provided
> kernel module). SBD warns me about this and suggests the -T option (so
> it doesn't try to change it to
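A minimal sketch of that workaround, assuming a sysconfig-style service wrapper (path and device name are placeholders, not taken from the original message): the -T flag tells SBD to reuse the watchdog's current timeout instead of trying to reprogram it.

```shell
# /etc/sysconfig/sbd (path is distribution-specific; an assumption here)
# -T: do not (re)initialize the watchdog timeout -- reuse whatever the
#     z/VM watchdog module already has set
SBD_OPTS="-T"
SBD_DEVICE="/dev/disk/by-id/example-shared-disk"   # placeholder device
```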
On 15/09/15 01:01, Digimer wrote:
> On 14/09/15 10:46 AM, Noel Kuntze wrote:
>>
>> Hello Christine,
>>
>> I googled a bit and some doc[1] says that TC_PRIO_INTERACTIVE maps to value
>> 6, whatever that is.
>> Assuming that value of 6 is the same as the "priority value", Corosync
>> traffic
>>> Noel Kuntze wrote on 14.09.2015 at 17:46 in
message <55f6ebf0.2000...@familie-kuntze.de>:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Hello Ullrich,
>
>
>> Why does totem detect network problems when there are none:
>>
>> # grep ringid.*FAULTY
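For reference, a hedged sketch of the same check plus querying the ring state directly from corosync (the log file path is an assumption and varies by distribution):

```shell
# Scan the system log for totem ring fault marks (log path is an assumption):
grep -E 'ringid.*FAULTY' /var/log/messages

# Query current ring status directly:
corosync-cfgtool -s

# A ring marked FAULTY can be re-enabled once the network is healthy:
corosync-cfgtool -r
```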
On 15/09/15 12:10 PM, Noel Kuntze wrote:
>
> Hello Digimer,
>
>> So what's the final verdict on this? I followed your back and forth, and
>> it sounds like corosync uses 0, so nothing else is to be done?
>
> Missing prioritization itself cannot be the cause of the problem.
> Either some other
Hello Christine,
> There are other networking scheduling algorithms, I think. Though I
> haven't looked at them in detail for years now. Maybe we should
> investigate and see if there is one that might be more appropriate?
I'd propose recreating
Hello Ullrich,
> If you send a protocol from A to B where neither A's interface nor B's
> interface has any errors, and B reports a protocol error, the obvious
> conclusion is that the protocol is broken. Especially if the protocol claims
> to
Hi,
I've finished my tests with SBD on x86 (using the emulated 6300esb
watchdog provided by qemu) but now I'm doing final tests on the target
platform (s390x).
I have a situation where the watchdog provided by the hypervisor (z/VM)
is not configurable (you can't change the heartbeat via the
I should note this: AppY on Node A works only with AppZ on Node B, and AppY on
Node C works only with AppZ on Node D. (Some hardware restrictions.)
Regards,
From: Michael Schwartzkopff
On Tuesday, 15 September 2015 at 13:38:59, H Yavari wrote:
> Hi,
> Thanks for
On Tue, Sep 15, 2015 at 4:38 PM, wrote:
> Hi,
>
> Thanks for reply.
> The problem is compute resources: appY and appZ can't run on the same server.
>
> Is it possible?
>
Yes; set a colocation constraint with a negative score so that appY cannot
run on the same node as appZ (and vice versa).
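Such an anti-affinity rule can be sketched as a colocation constraint with score -INFINITY; the resource IDs appY and appZ below are placeholders for the actual resource names in the CIB.

```shell
# pcs syntax: never place appY on the node running appZ
pcs constraint colocation add appY with appZ -INFINITY

# equivalent crm shell syntax:
crm configure colocation appY-not-with-appZ -inf: appY appZ
```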
Hi,
Thanks a lot. But can you give me some hints about configuration?
Regards,
From: Andrei Borzenkov
To: hyav...@rocketmail.com; Cluster Labs - All topics related to open-source
clustering welcomed
On Tue, Sep 15, 2015 at 4:38 PM,
On Tuesday, 15 September 2015 at 13:38:59, H Yavari wrote:
> Hi,
> Thanks for reply.
> The problem is compute resources: appY and appZ can't run on the same server.
> Is it possible?
> Regards,
As far as I understood:
You have the applications Y and Z and the servers A, B, C and D.
The
On Tuesday, 15 September 2015 at 13:57:36, H Yavari wrote:
> Hi,
> Thanks a lot. But can you give me some hints about configuration?
http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/
Kind regards,
Michael Schwartzkopff
--
[*] sys4 AG
http://sys4.de, +49
On 15/09/15 03:20 AM, Jan Friesse wrote:
> Digimer wrote:
>> On 14/09/15 04:20 AM, Jan Friesse wrote:
>>> Digimer wrote:
Hi all,
Starting a new thread from the "Clustered LVM with iptables issue"
thread...
I've decided to review how I do networking
On 15/09/15 14:22, H Yavari wrote:
Hi,
I'm a newbie to Pacemaker, so I don't know all of its features.
My question: I have 4 servers; 2 servers are appY (active and standby)
and 2 servers are appZ (active and standby).
I want to implement HA with Pacemaker, so I can do this for AppY and AppZ