>>> Digimer wrote on 07.12.2015 at 22:40 in message
<5665fcdc.1030...@alteeve.ca>:
[...]
> Node 1 looks up how to fence node 2, sees no delay and fences
> immediately. Node 2 looks up how to fence node 1, sees a delay and
> pauses. Node 2 will be dead long before the delay
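For context, the delay described above is usually configured as a parameter on
one node's fence device, so that node wins a fence race. A sketch in crmsh
syntax (node names, the fence agent, and addresses are my assumptions, not
taken from this thread):

```
# Sketch only: the "delay" on the device that fences node1 makes node2
# pause before fencing, so node1 survives a simultaneous fence race.
primitive fence-node1 stonith:fence_ipmilan \
    params pcmk_host_list=node1 ipaddr=10.0.0.1 delay=15
primitive fence-node2 stonith:fence_ipmilan \
    params pcmk_host_list=node2 ipaddr=10.0.0.2
```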
Hi,
I asked around here a while ago. Unfortunately I couldn't
continue working on my cluster, so I'm still thinking about the design.
I hope you will help me again with some recommendations, because once the
cluster is running, changing the design is no longer possible.
These
Hi Ken,
My comments are inline below.
On 4.12.2015 19:06, Ken Gaillot wrote:
On 12/04/2015 10:22 AM, Klechomir wrote:
Hi list,
My issue is the following:
I have a very stable cluster, using Corosync 2.1.0.26 and Pacemaker 1.1.8
(observed the same problem with Corosync 2.3.5 & Pacemaker
Hi!
I wonder: It seems it does the same thing as the RA I wrote some years ago:
(crm ra info ISC-cron)
OCF Resource Agent managing crontabs for ISC cron (ocf:xola:ISC-cron)
OCF Resource Agent managing crontabs for ISC cron
This RA manages crontabs for the ISC cron daemon by managing links to
Hi,
Sorry, I didn't get your point.
The XML of the VM is on an active-active DRBD device with an OCFS2 filesystem
on it and is visible from both nodes.
The live migration is always successful.
On 4.12.2015 19:30, emmanuel segura wrote:
I think the XML of your VM needs to be available on both nodes, but your
Hi!
A few comments (Build.PL):
The part beginning at
---
$ocf_dirs = qx{
. "$lib_ocf_dirs" 2> /dev/null
echo "\$INITDIR"
...
---
is somewhat complicated. Why not do something like
---
$ocf_dirs = qx{
. "$lib_ocf_dirs" 2> /dev/null
echo "INITDIR=\$INITDIR"
...
---
and then parse the
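For what it's worth, parsing such KEY=VALUE output could look like this (a
minimal sketch; the sample output and the HA_VARRUN name are placeholders I
made up, only INITDIR comes from the snippet above):

```shell
# Sketch: parse KEY=VALUE lines emitted by the sourced script, instead of
# depending on the order of bare "echo" lines.
out="INITDIR=/etc/init.d
HA_VARRUN=/var/run"
# Extract the value for INITDIR from the KEY=VALUE output.
INITDIR=$(printf '%s\n' "$out" | sed -n 's/^INITDIR=//p')
echo "$INITDIR"
```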
>>> Ken Gaillot wrote on 20.11.2015 at 16:06 in
>>> message
<564f36e2.90...@redhat.com>:
[...]
>> location cli-prefer-collectd collectd inf: host-1
>> location cli-prefer-failover-ip1 failover-ip1 inf: host-1
>> location cli-prefer-failover-ip2 failover-ip2 inf: host-1
On 12/07/2015 09:57 PM, Michael Schwartzkopff wrote:
> Hi,
>
> Is it possible / advisable to set up a multisite cluster with booth, with one
> server at each site?
>
> So having three servers all together?
Yes. No. Might be.
The concept of a geo cluster is a cluster of clusters, where the local
On 12/07/2015 10:30 PM, Steven Jones wrote:
> You need to be able to form a quorum to keep writes going in case of a
> split-brain scenario, if that is important to you and you can / are prepared
> to pay for it; if it's data protection only, then failing safe to read-only
> is all you need.
>
> So 3
Digimer wrote:
>
> On 07/12/15 12:35 PM, Lentes, Bernd wrote:
> > Hi,
> >
> > I asked around here a while ago. Unfortunately I couldn't
> > continue working on my cluster, so I'm still thinking about the design.
> > I hope you will help me again with some recommendations, because
>
Hi,
Is it possible / advisable to set up a multisite cluster with booth, with one
server at each site?
So having three servers all together?
Kind regards,
Michael Schwartzkopff
--
[*] sys4 AG
http://sys4.de, +49 (89) 30 90 46 64, +49 (162) 165 0044
Franziskanerstraße 15, 81669
Hi,
My 2 cents,
You need to be able to form a quorum to keep writes going in case of a
split-brain scenario, if that is important to you and you can / are prepared
to pay for it; if it's data protection only, then failing safe to read-only
is all you need.
So with 3 machines a quorum is 2, with 15 machines the
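For reference, the arithmetic behind that is the standard majority rule,
floor(n/2) + 1 (general Corosync/Pacemaker behaviour, not specific to this
thread):

```shell
# Majority quorum for an n-node cluster: more than half the votes.
# Shell integer division truncates, so $((n / 2 + 1)) is floor(n/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }
quorum 3    # -> 2
quorum 15   # -> 8
```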
On 07/12/15 03:27 PM, Lentes, Bernd wrote:
> Digimer wrote:
>>
>> On 07/12/15 12:35 PM, Lentes, Bernd wrote:
>>> Hi,
>>>
>>> I asked around here a while ago. Unfortunately I couldn't
>>> continue working on my cluster, so I'm still thinking about the design.
>>> I hope you will help
On 07/12/15 12:35 PM, Lentes, Bernd wrote:
> Hi,
>
> I asked around here a while ago. Unfortunately I couldn't
> continue working on my cluster, so I'm still thinking about the design.
> I hope you will help me again with some recommendations, because once the
> cluster is running