I was led to believe that for a hot cluster (no STONITH, using DRBD-backed NFS
resources) the best way to ensure failover was a quorum with a third server
(resource-hosting or not). I had trouble getting robust failover and am moving
on to a three-node DRBD 9-backed cluster.
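For reference, the third vote can come from a full corosync member. A minimal corosync.conf sketch for a three-node quorum might look like the following (hostnames, addresses, and node IDs are placeholders, not taken from this thread):

```
nodelist {
    node {
        ring0_addr: 192.168.1.1
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.1.2
        nodeid: 2
    }
    node {
        ring0_addr: 192.168.1.3
        nodeid: 3
    }
}
quorum {
    # votequorum gives the cluster real quorum semantics
    provider: corosync_votequorum
}
```

With three votes, losing one node leaves two of three, so the survivors retain quorum and can take over resources; a plain two-node cluster cannot distinguish a dead peer from a partition without fencing.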
On 2014-02-22T13:49:40, JR wrote:
> I've been told by folks on the linux-ha IRC that fencing is my answer
> and I've put in place the null fence client. I understand that this is
> not what I'd want in production, but for my testing it seems to be the
> correct way to test a cluster.
Greetings,
I have a 2-node test cluster. It exposes a single resource, an NFS
server that exports a single directory. I'm able to do:
crm resource move
and that works, but if I do:
pkill -9 'corosync|pacemaker'
the resource doesn't migrate.
I've been told by folks on the linux-ha IRC that fencing is my answer
and I've put in place the null fence client. I understand that this is
not what I'd want in production, but for my testing it seems to be the
correct way to test a cluster.
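For anyone following along, a null fencing device can be defined through the crm shell roughly like this (node names are placeholders, not taken from this thread):

```
# stonith:null always reports fencing success -- it provides no real
# protection and is suitable for test clusters only
crm configure primitive st-null stonith:null \
    params hostlist="node1 node2"
crm configure property stonith-enabled=true
```

In production this would be replaced with a real fencing device (e.g. an IPMI- or power-switch-based agent), since a fence that always "succeeds" lets both nodes believe the other is dead.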
On 2014-02-22T12:35:42, ml ml wrote:
> Hello List,
>
> I have a two-node cluster on Debian 7 with this configuration:
>
> node proxy01-example.net
> node proxy02-example.net
> primitive login.example.net ocf:heartbeat:Xen \
> params xmfile="/etc/xen/login.example.net.cfg" \
> op monitor interval="30s" timeout="600" \
> op start interval="0"
Hello List,
I have a two-node cluster on Debian 7 with this configuration:
node proxy01-example.net
node proxy02-example.net
primitive login.example.net ocf:heartbeat:Xen \
params xmfile="/etc/xen/login.example.net.cfg" \
op monitor interval="30s" timeout="600" \
op start interval="0"
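As an aside, if live migration of the Xen guest is wanted when the resource moves, the Xen agent supports it via a meta attribute. A sketch extending the primitive above (the timeouts here are illustrative, not from the original post):

```
primitive login.example.net ocf:heartbeat:Xen \
    params xmfile="/etc/xen/login.example.net.cfg" \
    op monitor interval="30s" timeout="600" \
    op start interval="0" \
    op migrate_to interval="0" timeout="300" \
    op migrate_from interval="0" timeout="300" \
    meta allow-migrate="true"
```

With allow-migrate="true" Pacemaker calls the agent's migrate_to/migrate_from actions instead of stop/start, so the guest keeps running during a move; without working fencing, though, migration on node failure is still unsafe.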