On 6/13/12 3:04 PM, Arnold Krille wrote:
> On 13.06.2012 17:56, William Seligman wrote:
>> A data point:
>>
>> On my cluster, I have two dedicated direct-link cables between the two
>> nodes, one for DRBD traffic, the other for corosync/pacemaker traffic.
>> Roughly once per week, I get a "link down" message on one of the nodes:
> 
> A) Use several communication rings in corosync. We use one on the
> regular user network and a second on the storage network. If one fails,
> no problem: corosync doesn't see a need to fence anything.

I have a cman+pacemaker configuration. I tried to set up rrp, but it never
worked; unfortunately, I didn't save the error messages or the versions of
cluster.conf to share with the list. I didn't think much of it at the time:
I'm using RHEL 6.2, and cman+rrp support was described as "experimental".
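
For the archives, here is roughly what I was attempting, as best I can
reconstruct it. Node names are placeholders and I may be misremembering the
exact syntax; the idea is an <altname> per node for the second ring, plus an
rrp_mode setting in the totem options (fencing sections omitted):

  <cluster name="example" config_version="1">
    <totem rrp_mode="passive"/>
    <clusternodes>
      <clusternode name="node1" nodeid="1">
        <!-- altname gives the hostname/IP this node uses on the second ring -->
        <altname name="node1-ring1"/>
      </clusternode>
      <clusternode name="node2" nodeid="2">
        <altname name="node2-ring1"/>
      </clusternode>
    </clusternodes>
    <cman two_node="1" expected_votes="1"/>
  </cluster>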

> B) Use bonded/bridged interfaces for the storage connection. We
> currently have our storage network (aka vlan17) tagged on eth0 of all
> the servers and untagged on eth1, using a bond in active-backup mode
> where eth1 is the primary and vlan17 the backup.

This is an intriguing idea. I haven't played with bonded links before, much less
with one link on a VLAN. I'll do some research and see what it would take to set
it up. Thanks for the idea!
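
For my own notes, here is Arnold's layout as I understand it, translated into
commands I could try by hand before writing any ifcfg files. The VLAN id and
interface names come from his description; the address is a placeholder of
mine, and I haven't verified that the bonding driver on RHEL 6.2 will accept a
VLAN device as a slave, so treat this as a sketch only:

  # load the bonding driver and create bond0 in active-backup mode
  modprobe bonding
  echo +bond0 > /sys/class/net/bonding_masters
  echo active-backup > /sys/class/net/bond0/bonding/mode
  echo 100 > /sys/class/net/bond0/bonding/miimon
  ip link set bond0 up

  # tagged vlan17 rides on the user-network NIC (eth0); eth1 carries it untagged
  vconfig add eth0 17

  # enslave both links; eth1 is the preferred path, the vlan the backup
  ifenslave bond0 eth1 eth0.17
  echo eth1 > /sys/class/net/bond0/bonding/primary

  # the storage address goes on the bond itself
  ip addr add 192.168.17.10/24 dev bond0
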
-- 
Bill Seligman             | Phone: (914) 591-2823
Nevis Labs, Columbia Univ | mailto://selig...@nevis.columbia.edu
PO Box 137                |
Irvington NY 10533 USA    | http://www.nevis.columbia.edu/~seligman/
