Hi Yannis,

Thanks for the information you provided.

On pve1, I have initialized the cluster and added the node pve2.  When
drbdctrl is primary on pve1 (secondary on pve2) and I shut down pve2,
the DRBD storage stays available: I can do any manipulation and the VM
keeps running.  But the other way around, if I shut down pve1 (where
drbdctrl is primary), the DRBD storage is not available on pve2.
Moreover, no drbdmanage command (list-nodes, list-volumes, etc.) works
on pve2; it says:

root@pve2:~# drbdmanage list-nodes
Waiting for server: ...............
No nodes defined

The log goes as follows:
Mar 14 13:39:39 pve2 drbdmanaged[20776]: INFO       Leader election by wait for connections
Mar 14 13:39:39 pve2 drbdmanaged[20776]: INFO       DrbdAdm: Running external command: drbdsetup wait-connect-resource --wait-after-sb=yes --wfc-timeout=2 .drbdctrl
Mar 14 13:39:41 pve2 drbdmanaged[20776]: ERROR      DrbdAdm: External command 'drbdsetup': Exit code 5
Mar 14 13:39:41 pve2 drbdmanaged[20776]: ERROR      drbdsetup/stderr: degr-wfc-timeout has to be shorter than wfc-timeout
Mar 14 13:39:41 pve2 drbdmanaged[20776]: ERROR      drbdsetup/stderr: degr-wfc-timeout implicitly set to wfc-timeout (2s)
Mar 14 13:39:41 pve2 drbdmanaged[20776]: ERROR      drbdsetup/stderr: outdated-wfc-timeout has to be shorter than degr-wfc-timeout
Mar 14 13:39:41 pve2 drbdmanaged[20776]: ERROR      drbdsetup/stderr: outdated-wfc-timeout implicitly set to degr-wfc-timeout (2s)
Mar 14 13:39:41 pve2 drbdmanaged[20776]: WARNING    Resource '.drbdctrl': wait-connect-resource not finished within 2 seconds
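
If I read the log correctly, drbdmanaged waits only 2 seconds for the
control volume to connect, which of course cannot succeed while pve1 is
down.  I suppose I can inspect the control volume directly with
something like this (assuming the control resource is named .drbdctrl,
as in the log above):

root@pve2:~# drbdsetup status .drbdctrl --verbose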


Regarding the split-brain issue, so far I cannot find anything in the
log indicating that a split-brain situation has been detected on the
surviving node, i.e. pve2.  I have run drbdmanage primary drbdctrl, but
the DRBD storage is still not available.  How can I resolve the split
brain manually, so that the DRBD storage keeps working even when pve1
(the primary) is down?
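
From what I have read in the DRBD documentation, the usual manual
split-brain recovery for a data resource looks roughly like the sketch
below (the resource name vm-100-disk-1 is only an example, and the node
whose changes get discarded is the split-brain victim).  Please correct
me if this is not right for drbdmanage-managed resources:

# on the node whose data should be discarded (the victim):
root@pve1:~# drbdadm disconnect vm-100-disk-1
root@pve1:~# drbdadm secondary vm-100-disk-1
root@pve1:~# drbdadm connect --discard-my-data vm-100-disk-1

# on the surviving node (pve2), reconnect if it is StandAlone:
root@pve2:~# drbdadm connect vm-100-disk-1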

I will try testing the scenario with a third DRBD node (pve3) added to
the cluster (using the drbdmanage add-node command on pve1) and will
let you know.
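
For reference, I expect the command to look something like this (the IP
address is just a placeholder for pve3):

root@pve1:~# drbdmanage add-node pve3 192.168.0.3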

Thanks

Shafeek



On Mon, Mar 13, 2017 at 10:41 PM, Yannis Milios <yannis.mil...@gmail.com>
wrote:

> > the drbd storage becomes unavailable and the drbd quorum is lost...
>
> From my experience, using only 2 nodes with drbd9 does not work well,
> meaning that the cluster loses quorum and you have to troubleshoot the
> split brain manually.
> If you really need a stable system, then use 3 drbd nodes. You could
> possibly use the 3rd node as a drbd control node only?? Just guessing...
>
> Yannis
> --
> Sent from Gmail Mobile
>



-- 
Shafeek SUMSER
