On Fri, Mar 15, 2013 at 8:49 PM, emmanuel segura emi2f...@gmail.com wrote:
Hello Fredrik,
Why do you have a clone of cl_exportfs_root when you have an ext4
filesystem? I also think this ordering is not correct:
order o_drbd_before_nfs inf: ms_drbd_nfs:promote g_nfs:start
order o_root_before_nfs inf: cl_exportfs_root g_nfs:start
I think that way you try to start g_nfs twice.
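One way to avoid ordering g_nfs twice, assuming the resource names above,
might be a single ordered chain in crm shell syntax (the constraint id
o_nfs_chain is just an example):
order o_nfs_chain inf: ms_drbd_nfs:promote cl_exportfs_root g_nfs:start
That way ms_drbd_nfs is promoted first, cl_exportfs_root starts next, and
g_nfs is started only once, at the end.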
On Thu, Mar 14, 2013 at 9:47 PM, Fredrik Hudner fredrik.hud...@evry.com wrote:
Hi all,
I have a problem after I removed a node with the force command from my crm
config.
Originally I had 2 nodes running an HA cluster (corosync 1.4.1-7.el6,
pacemaker 1.1.7-6.el6).
Then I wanted to add a third node acting as a quorum node, but was not able
to get it to work (probably because I
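As an aside, a common way to run a third node purely for quorum, assuming
the crm shell and a hypothetical node name testclu03, is to join it to the
cluster and keep it permanently in standby so it never runs resources:
crm node standby testclu03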
On 2013-03-14 15:52, Fredrik Hudner wrote:
I set no-quorum-policy to ignore and removed the constraint you mentioned.
It then managed to fail over once to the slave node, but I still have these:
Failed actions:
    p_exportfs_root:0_monitor_3 (node=testclu01, call=12, rc=7,
    status=complete): not running
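For reference, that property change can be made with the crm shell (this
assumes crmsh, which the thread already uses for its configuration syntax):
crm configure property no-quorum-policy=ignore
With only two nodes, losing one node means losing quorum, so ignore is the
usual setting until a third (quorum) node is in place.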
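If that failure is stale (left over from before the constraint change), one
way to clear it, assuming the resource id from the output above, is:
crm resource cleanup p_exportfs_root
That resets the failcount and makes Pacemaker re-probe the resource, so the
"not running" entry disappears if the resource is actually healthy.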