Re: [Pacemaker] DRBD primary/primary + Pacemaker goes into split brain after crm node standby/online

2014-06-18 Thread Alexis de BRUYN

On 12.06.2014 22:44, Lars Ellenberg wrote:
> On Mon, Jun 09, 2014 at 08:07:51PM +0200, Alexis de BRUYN wrote:
>> Hi Everybody,
>>
>> I have an issue with a 2-node Debian Wheezy primary/primary DRBD
>> Pacemaker/Corosync configuration.
>>
>> After a 'crm node standby' then a 'crm node online', the DRBD volume
>> stays in a 'split brain state' (cs:StandAlone ro:Primary/Unknown).
>>
>> A soft or hard reboot of one node gets rid of the split brain and/or
>> doesn't create one.
>>
>> I have followed http://www.drbd.org/users-guide-8.3/ and keep my tests
>> as simple as possible (no activity and no filesystem on the DRBD volume).
>>
>> I don't see what I am doing wrong. Could anybody help me with this please.
> 
> Use fencing, both node-level fencing on the Pacemaker level,
> *and* constraint fencing on the DRBD level:
Thanks Lars, it is working fine now.

> 
>> # cat /etc/drbd.d/sda4.res
>> resource sda4 {
>>  device /dev/drbd0;
>>  disk /dev/sda4;
>>  meta-disk internal;
>>
>>   startup {
>> become-primary-on both;
>>   }
>>
>>   handlers {
>> split-brain "/usr/lib/drbd/notify-split-brain.sh root";
> 
>   fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
>   after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
> 
>>   }
> 
>   disk {
>     fencing resource-and-stonith;
>   }
> 
>>
>>   net {
>> allow-two-primaries;
>> after-sb-0pri discard-zero-changes;
>> after-sb-1pri discard-secondary;
>> after-sb-2pri disconnect;
>>   }
>>  on testvm1 {
>>   address 192.168.1.201:7788;
>>  }
>>  on testvm2 {
>>   address 192.168.1.202:7788;
>>  }
>>
>>  syncer {
>>   rate 100M;
>>   al-extents 3389;
>>  }
>> }
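For reference, a sketch of the resource file with Lars's fencing stanzas merged into the original config (the handler paths assume the stock Debian drbd8-utils layout, same directory as notify-split-brain.sh above):

```
resource sda4 {
  device    /dev/drbd0;
  disk      /dev/sda4;
  meta-disk internal;

  startup {
    become-primary-on both;
  }

  handlers {
    split-brain         "/usr/lib/drbd/notify-split-brain.sh root";
    fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }

  disk {
    fencing resource-and-stonith;
  }

  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }

  on testvm1 { address 192.168.1.201:7788; }
  on testvm2 { address 192.168.1.202:7788; }

  syncer {
    rate 100M;
    al-extents 3389;
  }
}
```

With `fencing resource-and-stonith;`, DRBD suspends I/O and calls the fence-peer handler whenever the peer becomes unreachable, so Pacemaker places a constraint against promoting the outdated side instead of letting both nodes diverge.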

-- 
Alexis de BRUYN

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [Pacemaker] DRBD primary/primary + Pacemaker goes into split brain after crm node standby/online

2014-06-12 Thread Lars Ellenberg
On Mon, Jun 09, 2014 at 08:07:51PM +0200, Alexis de BRUYN wrote:
> Hi Everybody,
> 
> I have an issue with a 2-node Debian Wheezy primary/primary DRBD
> Pacemaker/Corosync configuration.
> 
> After a 'crm node standby' then a 'crm node online', the DRBD volume
> stays in a 'split brain state' (cs:StandAlone ro:Primary/Unknown).
> 
> A soft or hard reboot of one node gets rid of the split brain and/or
> doesn't create one.
> 
> I have followed http://www.drbd.org/users-guide-8.3/ and keep my tests
> as simple as possible (no activity and no filesystem on the DRBD volume).
> 
> I don't see what I am doing wrong. Could anybody help me with this please.

Use fencing, both node-level fencing on the Pacemaker level,
*and* constraint fencing on the DRBD level:

> # cat /etc/drbd.d/sda4.res
> resource sda4 {
>  device /dev/drbd0;
>  disk /dev/sda4;
>  meta-disk internal;
> 
>   startup {
> become-primary-on both;
>   }
> 
>   handlers {
> split-brain "/usr/lib/drbd/notify-split-brain.sh root";

 fence-peer crm-fence-peer.sh;
 after-resync-target crm-unfence-peer.sh;

>   }

disk {
 fencing resource-and-stonith;
 }

> 
>   net {
> allow-two-primaries;
> after-sb-0pri discard-zero-changes;
> after-sb-1pri discard-secondary;
> after-sb-2pri disconnect;
>   }
>  on testvm1 {
>   address 192.168.1.201:7788;
>  }
>  on testvm2 {
>   address 192.168.1.202:7788;
>  }
> 
>  syncer {
>   rate 100M;
>   al-extents 3389;
>  }
> }
-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.



Re: [Pacemaker] DRBD primary/primary + Pacemaker goes into split brain after crm node standby/online

2014-06-11 Thread Andrew Beekhof

On 12 Jun 2014, at 12:13 am, Alexis de BRUYN wrote:

> On 10.06.2014 01:44, Andrew Beekhof wrote:
>> 
>> On 10 Jun 2014, at 4:07 am, Alexis de BRUYN wrote:
>> 
>>> Hi Everybody,
>>> 
>>> I have an issue with a 2-node Debian Wheezy primary/primary DRBD
>>> Pacemaker/Corosync configuration.
>>> 
>>> After a 'crm node standby' then a 'crm node online', the DRBD volume
>>> stays in a 'split brain state' (cs:StandAlone ro:Primary/Unknown).
>>> 
>>> A soft or hard reboot of one node gets rid of the split brain and/or
>>> doesn't create one.
>>> 
>>> I have followed http://www.drbd.org/users-guide-8.3/ and keep my tests
>>> as simple as possible (no activity and no filesystem on the DRBD volume).
>>> 
>>> I don't see what I am doing wrong. Could anybody help me with this please.
>> 
>> There could be a pacemaker bug.  
>> Master/slave resources are quite complex internally and have received many 
>> improvements in the years since 1.1.7.
>> So simply upgrading pacemaker could be the answer.
> 
> Hi Andrew,
> 
> I have followed your advice and updated Pacemaker/Corosync by installing
> a fresh Debian Sid but I still have the issue with the following packages:

I don't know exactly what went into those packages and there have been more 
fixes (aren't there always :-/) since 1.1.10, but it is certainly recent enough 
to deserve a closer look.

Could you run crm_report for the period covered by your test? (No need to 
reproduce, just tell crm_report when you did the test and it will create a 
tarball for you to attach here).
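A hypothetical invocation for the test window described in this thread (the time bounds and destination path are placeholders; `-f`/`-t` bound the reporting period):

```shell
# Collect logs, CIB history and PE inputs for the standby/online test
# into /tmp/standby-test.tar.bz2 for attaching to the list.
crm_report -f "2014-06-09 19:00:00" -t "2014-06-09 20:00:00" /tmp/standby-test
```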

> 
> # uname -a
> Linux testvm1 3.13-1-amd64 #1 SMP Debian 3.13.10-1 (2014-04-15) x86_64
> GNU/Linux
> 
> # cat /etc/issue && dpkg -l | egrep "corosync|pacemaker|drbd"
> Debian GNU/Linux jessie/sid \n \l
> 
> ii  corosync   1.4.6-1 amd64
>Standards-based cluster framework (daemon and modules)
> ii  crmsh  1.2.6+git+e77add-1.2amd64
>CRM shell for the pacemaker cluster manager
> ii  drbd8-utils2:8.4.4-1   amd64
>RAID 1 over TCP/IP for Linux (user utilities)
> ii  pacemaker  1.1.10+git20130802-4amd64
>HA cluster resource manager
> ii  pacemaker-cli-utils1.1.10+git20130802-4amd64
>Command line interface utilities for Pacemaker
> 
> And with the "experimental" packages, I cannot connect to the cluster
> via crmsh either:
> 
> # cat /etc/issue && dpkg -l | egrep "corosync|pacemaker|drbd"
> Debian GNU/Linux jessie/sid \n \l
> 
> ii  corosync   2.3.3-1 amd64
>Standards-based cluster framework (daemon and modules)
> ii  crmsh  1.2.6+git+e77add-1.2amd64
>CRM shell for the pacemaker cluster manager
> ii  drbd8-utils2:8.4.4-1   amd64
>RAID 1 over TCP/IP for Linux (user utilities)
> ii  libcorosync-common42.3.3-1 amd64
>Standards-based cluster framework, common library
> ii  pacemaker  1.1.11-1amd64
>HA cluster resource manager
> ii  pacemaker-cli-utils1.1.11-1amd64
>Command line interface utilities for Pacemaker
> 
> I will try to build the latest versions of Pacemaker/Corosync on Debian
> Wheezy before reporting my issue via Bugzilla.
> 
> Thanks for your help.
> 
> 
> -- 
> Alexis de BRUYN
> 





Re: [Pacemaker] DRBD primary/primary + Pacemaker goes into split brain after crm node standby/online

2014-06-11 Thread Alexis de BRUYN

On 10.06.2014 05:28, Digimer wrote:
> On 09/06/14 07:44 PM, Andrew Beekhof wrote:
>>
>> On 10 Jun 2014, at 4:07 am, Alexis de BRUYN wrote:
>>
>>> Hi Everybody,
>>>
>>> I have an issue with a 2-node Debian Wheezy primary/primary DRBD
>>> Pacemaker/Corosync configuration.
>>>
>>> After a 'crm node standby' then a 'crm node online', the DRBD volume
>>> stays in a 'split brain state' (cs:StandAlone ro:Primary/Unknown).
>>>
>>> A soft or hard reboot of one node gets rid of the split brain and/or
>>> doesn't create one.
>>>
>>> I have followed http://www.drbd.org/users-guide-8.3/ and keep my tests
>>> as simple as possible (no activity and no filesystem on the DRBD
>>> volume).
>>>
>>> I don't see what I am doing wrong. Could anybody help me with this
>>> please.
>>
>> There could be a pacemaker bug.
>> Master/slave resources are quite complex internally and have received
>> many improvements in the years since 1.1.7.
>> So simply upgrading pacemaker could be the answer.
> 
> In addition, set up and test stonith in pacemaker, then hook DRBD's fencing
> into pacemaker (set 'fencing resource-and-stonith;' and 'fence-peer
> /path/to/crm-fence-peer.sh'). This way, if DRBD is about to split-brain,
> it will instead block and call a fence, and stay blocked until the fence
> succeeds. It will only resume once the peer is in a known state (off),
> thus avoiding split brains entirely.
Thanks Digimer for your suggestion, but unfortunately I don't have IPMI
hardware on my test machines right now.

> 
> And, as Andrew said, upgrade pacemaker. :)
> 

-- 
Alexis de BRUYN



Re: [Pacemaker] DRBD primary/primary + Pacemaker goes into split brain after crm node standby/online

2014-06-11 Thread Alexis de BRUYN
On 10.06.2014 01:44, Andrew Beekhof wrote:
> 
> On 10 Jun 2014, at 4:07 am, Alexis de BRUYN wrote:
> 
>> Hi Everybody,
>>
>> I have an issue with a 2-node Debian Wheezy primary/primary DRBD
>> Pacemaker/Corosync configuration.
>>
>> After a 'crm node standby' then a 'crm node online', the DRBD volume
>> stays in a 'split brain state' (cs:StandAlone ro:Primary/Unknown).
>>
>> A soft or hard reboot of one node gets rid of the split brain and/or
>> doesn't create one.
>>
>> I have followed http://www.drbd.org/users-guide-8.3/ and keep my tests
>> as simple as possible (no activity and no filesystem on the DRBD volume).
>>
>> I don't see what I am doing wrong. Could anybody help me with this please.
> 
> There could be a pacemaker bug.  
> Master/slave resources are quite complex internally and have received many 
> improvements in the years since 1.1.7.
> So simply upgrading pacemaker could be the answer.

Hi Andrew,

I have followed your advice and updated Pacemaker/Corosync by installing
a fresh Debian Sid but I still have the issue with the following packages:

# uname -a
Linux testvm1 3.13-1-amd64 #1 SMP Debian 3.13.10-1 (2014-04-15) x86_64
GNU/Linux

# cat /etc/issue && dpkg -l | egrep "corosync|pacemaker|drbd"
Debian GNU/Linux jessie/sid \n \l

ii  corosync   1.4.6-1 amd64
Standards-based cluster framework (daemon and modules)
ii  crmsh  1.2.6+git+e77add-1.2amd64
CRM shell for the pacemaker cluster manager
ii  drbd8-utils2:8.4.4-1   amd64
RAID 1 over TCP/IP for Linux (user utilities)
ii  pacemaker  1.1.10+git20130802-4amd64
HA cluster resource manager
ii  pacemaker-cli-utils1.1.10+git20130802-4amd64
Command line interface utilities for Pacemaker

And with the "experimental" packages, I cannot connect to the cluster
via crmsh either:

# cat /etc/issue && dpkg -l | egrep "corosync|pacemaker|drbd"
Debian GNU/Linux jessie/sid \n \l

ii  corosync   2.3.3-1 amd64
Standards-based cluster framework (daemon and modules)
ii  crmsh  1.2.6+git+e77add-1.2amd64
CRM shell for the pacemaker cluster manager
ii  drbd8-utils2:8.4.4-1   amd64
RAID 1 over TCP/IP for Linux (user utilities)
ii  libcorosync-common42.3.3-1 amd64
Standards-based cluster framework, common library
ii  pacemaker  1.1.11-1amd64
HA cluster resource manager
ii  pacemaker-cli-utils1.1.11-1amd64
Command line interface utilities for Pacemaker

I will try to build the latest versions of Pacemaker/Corosync on Debian
Wheezy before reporting my issue via Bugzilla.

Thanks for your help.


-- 
Alexis de BRUYN



Re: [Pacemaker] DRBD primary/primary + Pacemaker goes into split brain after crm node standby/online

2014-06-09 Thread Digimer

On 09/06/14 07:44 PM, Andrew Beekhof wrote:


On 10 Jun 2014, at 4:07 am, Alexis de BRUYN wrote:


Hi Everybody,

I have an issue with a 2-node Debian Wheezy primary/primary DRBD
Pacemaker/Corosync configuration.

After a 'crm node standby' then a 'crm node online', the DRBD volume
stays in a 'split brain state' (cs:StandAlone ro:Primary/Unknown).

A soft or hard reboot of one node gets rid of the split brain and/or
doesn't create one.

I have followed http://www.drbd.org/users-guide-8.3/ and keep my tests
as simple as possible (no activity and no filesystem on the DRBD volume).

I don't see what I am doing wrong. Could anybody help me with this please.


There could be a pacemaker bug.
Master/slave resources are quite complex internally and have received many 
improvements in the years since 1.1.7.
So simply upgrading pacemaker could be the answer.


In addition, set up and test stonith in pacemaker, then hook DRBD's fencing
into pacemaker (set 'fencing resource-and-stonith;' and 'fence-peer
/path/to/crm-fence-peer.sh'). This way, if DRBD is about to split-brain,
it will instead block and call a fence, and stay blocked until the fence
succeeds. It will only resume once the peer is in a known state (off),
thus avoiding split brains entirely.
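Since the nodes in this thread (testvm1/testvm2) appear to be virtual machines, one option in place of IPMI is the external/libvirt stonith plugin from cluster-glue, which fences guests through the hypervisor. A hypothetical crm configuration sketch; the hostlist and hypervisor_uri values are placeholders that must match the real hypervisor:

```
# crm configure
# Hypothetical fencing for libvirt guests; hostlist names must match
# the cluster node names, hypervisor_uri must point at the real host.
primitive st-libvirt stonith:external/libvirt \
        params hostlist="testvm1,testvm2" \
               hypervisor_uri="qemu+ssh://hypervisor.example/system" \
        op monitor interval="60"
clone cl-st-libvirt st-libvirt
property stonith-enabled="true"
```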


And, as Andrew said, upgrade pacemaker. :)

--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without 
access to education?




Re: [Pacemaker] DRBD primary/primary + Pacemaker goes into split brain after crm node standby/online

2014-06-09 Thread Andrew Beekhof

On 10 Jun 2014, at 4:07 am, Alexis de BRUYN wrote:

> Hi Everybody,
> 
> I have an issue with a 2-node Debian Wheezy primary/primary DRBD
> Pacemaker/Corosync configuration.
> 
> After a 'crm node standby' then a 'crm node online', the DRBD volume
> stays in a 'split brain state' (cs:StandAlone ro:Primary/Unknown).
> 
> A soft or hard reboot of one node gets rid of the split brain and/or
> doesn't create one.
> 
> I have followed http://www.drbd.org/users-guide-8.3/ and keep my tests
> as simple as possible (no activity and no filesystem on the DRBD volume).
> 
> I don't see what I am doing wrong. Could anybody help me with this please.

There could be a pacemaker bug.  
Master/slave resources are quite complex internally and have received many 
improvements in the years since 1.1.7.
So simply upgrading pacemaker could be the answer.

> 
> Regards,
> 
> Alexis.
> 
> Here are my config and log files:
> 
> # cat /etc/issue
> Debian GNU/Linux 7 \n \l
> 
> # uname -a
> Linux testvm2 3.2.0-4-amd64 #1 SMP Debian 3.2.57-3+deb7u2 x86_64 GNU/Linux
> 
> # dpkg -l | grep corosync
> ii  corosync1.4.2-3   amd64
>   Standards-based cluster framework (daemon and modules)
> 
> # dpkg -l | grep pacemaker
> ii  pacemaker   1.1.7-1   amd64
>   HA cluster resource manager
> 
> # dpkg -l | grep drbd
> ii  drbd8-utils 2:8.3.13-2amd64
>   RAID 1 over tcp/ip for Linux utilities
> 
> # cat /etc/drbd.d/sda4.res
> resource sda4 {
> device /dev/drbd0;
> disk /dev/sda4;
> meta-disk internal;
> 
>  startup {
>become-primary-on both;
>  }
> 
>  handlers {
>split-brain "/usr/lib/drbd/notify-split-brain.sh root";
>  }
> 
>  net {
>allow-two-primaries;
>after-sb-0pri discard-zero-changes;
>after-sb-1pri discard-secondary;
>after-sb-2pri disconnect;
>  }
> on testvm1 {
>  address 192.168.1.201:7788;
> }
> on testvm2 {
>  address 192.168.1.202:7788;
> }
> 
> syncer {
>  rate 100M;
>  al-extents 3389;
> }
> }
> 
> # crm configure show
> node testvm1
> node testvm2 \
>   attributes standby="off"
> primitive prim-DRBD-data ocf:linbit:drbd \
>   params drbd_resource="sda4" \
>   operations $id="operations-DRBD-sda4" \
>   op monitor interval="10" role="Master" timeout="20" \
>   op monitor interval="20" role="Slave" timeout="20" \
>   op start interval="0" timeout="240s" \
>   op stop interval="0" timeout="100s"
> ms ms-DRBD-data prim-DRBD-data \
>   meta master-max="2" clone-max="2" notify="true" target-role="Master"
> property $id="cib-bootstrap-options" \
>   dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
>   cluster-infrastructure="openais" \
>   expected-quorum-votes="2" \
>   no-quorum-policy="ignore" \
>   stonith-enabled="false" \
>   default-resource-stickiness="1000"
> 
> # crm status
> 
> Last updated: Mon Jun  9 19:23:08 2014
> Last change: Mon Jun  9 19:15:50 2014 via cibadmin on testvm2
> Stack: openais
> Current DC: testvm1 - partition with quorum
> Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
> 2 Nodes configured, 2 expected votes
> 2 Resources configured.
> 
> 
> Online: [ testvm1 testvm2 ]
> 
> Master/Slave Set: ms-DRBD-data [prim-DRBD-data]
> Masters: [ testvm1 testvm2 ]
> 
> 
> # cat /proc/drbd
> version: 8.3.11 (api:88/proto:86-96)
> srcversion: F937DCB2E5D83C6CCE4A6C9
> 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-
>ns:0 nr:0 dw:0 dr:88 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
> 
> # crm node standby testvm2
> 
> # cat /proc/drbd
> version: 8.3.11 (api:88/proto:86-96)
> srcversion: F937DCB2E5D83C6CCE4A6C9
> 0: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-
>ns:0 nr:0 dw:0 dr:88 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
> 
> # crm node online testvm2
> 
> # cat /proc/drbd
> version: 8.3.11 (api:88/proto:86-96)
> srcversion: F937DCB2E5D83C6CCE4A6C9
> 0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r-
>ns:0 nr:0 dw:0 dr:88 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
> 
> # crm status
> 
> Last updated: Mon Jun  9 19:56:07 2014
> Last change: Mon Jun  9 19:43:24 2014 via crm_attribute on testvm2
> Stack: openais
> Current DC: testvm2 - partition with quorum
> Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
> 2 Nodes configured, 2 expected votes
> 2 Resources configured.
> 
> 
> Online: [ testvm1 testvm2 ]
> 
> Master/Slave Set: ms-DRBD-data [prim-DRBD-data]
> Masters: [ testvm1 testvm2 ]
> 
> 
> # cat /var/log/daemon.log
> Jun  9 19:37:03 testvm2 cib: [2168]: info: cib_stats: Processed 178
> operations (393.00us average, 0% utilization) in the last 10min
> Jun  9 19:42:06 testvm2 cib: [2168]: info: cib:diff: [XML diff content stripped by the mail archive; quoted message truncated here]

[Pacemaker] DRBD primary/primary + Pacemaker goes into split brain after crm node standby/online

2014-06-09 Thread Alexis de BRUYN
Hi Everybody,

I have an issue with a 2-node Debian Wheezy primary/primary DRBD
Pacemaker/Corosync configuration.

After a 'crm node standby' then a 'crm node online', the DRBD volume
stays in a 'split brain state' (cs:StandAlone ro:Primary/Unknown).

A soft or hard reboot of one node gets rid of the split brain and/or
doesn't create one.

I have followed http://www.drbd.org/users-guide-8.3/ and keep my tests
as simple as possible (no activity and no filesystem on the DRBD volume).

I don't see what I am doing wrong. Could anybody help me with this please.

Regards,

Alexis.

Here are my config and log files:

# cat /etc/issue
Debian GNU/Linux 7 \n \l

# uname -a
Linux testvm2 3.2.0-4-amd64 #1 SMP Debian 3.2.57-3+deb7u2 x86_64 GNU/Linux

# dpkg -l | grep corosync
ii  corosync1.4.2-3   amd64
   Standards-based cluster framework (daemon and modules)

# dpkg -l | grep pacemaker
ii  pacemaker   1.1.7-1   amd64
   HA cluster resource manager

# dpkg -l | grep drbd
ii  drbd8-utils 2:8.3.13-2amd64
   RAID 1 over tcp/ip for Linux utilities

# cat /etc/drbd.d/sda4.res
resource sda4 {
 device /dev/drbd0;
 disk /dev/sda4;
 meta-disk internal;

  startup {
become-primary-on both;
  }

  handlers {
split-brain "/usr/lib/drbd/notify-split-brain.sh root";
  }

  net {
allow-two-primaries;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
  }
 on testvm1 {
  address 192.168.1.201:7788;
 }
 on testvm2 {
  address 192.168.1.202:7788;
 }

 syncer {
  rate 100M;
  al-extents 3389;
 }
}

# crm configure show
node testvm1
node testvm2 \
attributes standby="off"
primitive prim-DRBD-data ocf:linbit:drbd \
params drbd_resource="sda4" \
operations $id="operations-DRBD-sda4" \
op monitor interval="10" role="Master" timeout="20" \
op monitor interval="20" role="Slave" timeout="20" \
op start interval="0" timeout="240s" \
op stop interval="0" timeout="100s"
ms ms-DRBD-data prim-DRBD-data \
meta master-max="2" clone-max="2" notify="true" target-role="Master"
property $id="cib-bootstrap-options" \
dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
no-quorum-policy="ignore" \
stonith-enabled="false" \
default-resource-stickiness="1000"

# crm status

Last updated: Mon Jun  9 19:23:08 2014
Last change: Mon Jun  9 19:15:50 2014 via cibadmin on testvm2
Stack: openais
Current DC: testvm1 - partition with quorum
Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
2 Nodes configured, 2 expected votes
2 Resources configured.


Online: [ testvm1 testvm2 ]

 Master/Slave Set: ms-DRBD-data [prim-DRBD-data]
 Masters: [ testvm1 testvm2 ]


# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
srcversion: F937DCB2E5D83C6CCE4A6C9
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-
ns:0 nr:0 dw:0 dr:88 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

# crm node standby testvm2

# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
srcversion: F937DCB2E5D83C6CCE4A6C9
 0: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-
ns:0 nr:0 dw:0 dr:88 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

# crm node online testvm2

# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
srcversion: F937DCB2E5D83C6CCE4A6C9
 0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r-
ns:0 nr:0 dw:0 dr:88 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

# crm status

Last updated: Mon Jun  9 19:56:07 2014
Last change: Mon Jun  9 19:43:24 2014 via crm_attribute on testvm2
Stack: openais
Current DC: testvm2 - partition with quorum
Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
2 Nodes configured, 2 expected votes
2 Resources configured.


Online: [ testvm1 testvm2 ]

 Master/Slave Set: ms-DRBD-data [prim-DRBD-data]
 Masters: [ testvm1 testvm2 ]


# cat /var/log/daemon.log
Jun  9 19:37:03 testvm2 cib: [2168]: info: cib_stats: Processed 178
operations (393.00us average, 0% utilization) in the last 10min
Jun  9 19:42:06 testvm2 cib: [2168]: info: cib:diff: [XML diff content stripped by the mail archive; log truncated here]
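For readers who land here with an already split-brained volume: with DRBD 8.3, a StandAlone/Primary-Unknown pair like the one shown above can be reconnected manually by discarding one side's changes. This is the standard 8.3 recovery procedure (resource name as in this thread); it does not replace fencing, which prevents the split brain in the first place:

```shell
# On the split-brain "victim" (the node whose changes will be discarded):
drbdadm secondary sda4
drbdadm -- --discard-my-data connect sda4

# On the surviving node (only needed if it is also StandAlone):
drbdadm connect sda4
```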