Re: [DRBD-user] Adding a second volume to a resource and making it UpToDate

2021-09-02 Thread Bryan K. Walton
On Wed, Sep 01, 2021 at 09:05:19AM +0200, Joel Colledge wrote: > I meant just put Pacemaker in maintenance mode and do the rest manually. So no "pcs cluster standby". Perhaps "pcs resource disable" would be a better solution. In any case, you need to make sure DRBD is not being used on eit…
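
A minimal sketch of the approach Joel describes, assuming a pcs-managed cluster (the commands below are standard pcs usage, not taken from the thread itself):

    # Stop Pacemaker from managing (but not from running) its resources:
    pcs property set maintenance-mode=true

    # ...make the DRBD changes by hand on both nodes...

    # Hand control back to Pacemaker once DRBD is healthy again:
    pcs property set maintenance-mode=false

Unlike "pcs cluster standby", maintenance mode leaves every resource running in place, which is why it is the safer choice for live DRBD surgery.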

Re: [DRBD-user] Adding a second volume to a resource and making it UpToDate

2021-08-31 Thread Bryan K. Walton
On Tue, Aug 31, 2021 at 01:58:24PM +0200, Joel Colledge wrote: > > So, maybe this is a question for the Pacemaker mailing list, but what did I do wrong here? When adding a second volume to a resource, what is the proper way to change its state from Inconsistent to UpToDate? > I would sa…
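
For readers landing here: two common ways to bring a newly added volume out of Inconsistent, sketched under the assumption of a resource r0 whose new volume is number 1 (names are illustrative; check drbdadm(8) for your version):

    # Option A: declare this node's data authoritative and run a full
    # initial sync to the peer:
    drbdadm primary --force r0          # on the node chosen as sync source

    # Option B (DRBD 9): skip the initial sync entirely when both
    # backing devices are known to be empty:
    drbdadm new-current-uuid --clear-bitmap r0/1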

[DRBD-user] Adding a second volume to a resource and making it UpToDate

2021-08-30 Thread Bryan K. Walton
We have DRBD replication between two storage servers, using Protocol C. I needed to expand this and add a new DRBD device to our resource. The primary/secondary server state is controlled by Pacemaker. So, last Wednesday, I added a second volume to our DRBD resource. Then, I created the drbd met…
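
The metadata step mentioned here is usually the following, assuming the new volume got number 1 (the volume number is an assumption, not stated in the post):

    # On both nodes: create DRBD metadata for the new volume only...
    drbdadm create-md r0/1
    # ...and make the running resource pick up the configuration change:
    drbdadm adjust r0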

[DRBD-user] Adding Second Replicated Disk to DRBD 9.0

2021-08-24 Thread Bryan K. Walton
I have a working DRBD 9.0 system where a block device (/dev/sda5) is being replicated between two storage servers. Now, I've added a second block device (/dev/sdb1) to each of these storage servers. In my r0.res, I have, amongst other configurations: on storage1 {…
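
For orientation, a two-volume r0.res might look roughly like this; only /dev/sda5, /dev/sdb1, and storage1 come from the post, while the minors, port, addresses, and the hostname storage2 are placeholders:

    resource r0 {
      volume 0 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        meta-disk internal;
      }
      volume 1 {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        meta-disk internal;
      }
      on storage1 {
        address 192.168.100.1:7789;
      }
      on storage2 {
        address 192.168.100.2:7789;
      }
    }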

[DRBD-user] Trying to Understand crm-fence-peer.sh

2019-01-11 Thread Bryan K. Walton
…drbd r0: helper command: /sbin/drbdadm fence-peer r0 exit code 4 (0x400) Jan 11 08:49:53 storage2 kernel: drbd r0: fence-peer helper returned 4 (peer was fenced) But the switch ports connected to the fenced node are still enabled. What am I missing here? Thanks! Bryan Walton -- Bryan K.…
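
Exit code 4 ("peer was fenced") is the handler's success code, and the handler is wired in through the resource configuration, roughly as below (8.4-style sections shown to match the script name in the subject; DRBD 9 ships crm-fence-peer.9.sh instead). Note that this script only places a Pacemaker location constraint against the peer; it never touches switch ports:

    resource r0 {
      disk {
        fencing resource-and-stonith;
      }
      handlers {
        fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }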

[DRBD-user] pacemaker failover problem without allow-two-primaries

2019-01-02 Thread Bryan K. Walton
Hi, I'm building an HA cluster with two storage nodes. The two nodes are running DRBD 9 on CentOS 7. Storage1 is primary and storage2 is secondary. The two nodes do their DRBD replication over a bonded, directly cabled connection. Upstream, both storage nodes are connected to two Brocade ICX-74…
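
For reference, the usual Pacemaker wiring for such a primary/secondary pair on CentOS 7, sketched with pcs (the resource names are invented for the example):

    pcs resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 \
        op monitor interval=29s role=Master \
        op monitor interval=31s role=Slave
    pcs resource master drbd_r0_ms drbd_r0 \
        master-max=1 master-node-max=1 \
        clone-max=2 clone-node-max=1 notify=true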

Re: [DRBD-user] DRBD 9 Primary-Secondary, Pacemaker, and STONITH

2018-11-19 Thread Bryan K. Walton
On Wed, Nov 14, 2018 at 10:20:23AM +0100, Robert Altnoeder wrote: >> Is the above quote stating that if Pacemaker can't confirm that one node has been STONITHed, that it won't allow the remaining node to work, either? > At least in the default configuration, if fencing fails, the clu…
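
The "default configuration" under discussion is the stonith-enabled cluster property: when it is on, Pacemaker blocks recovery of a failed node's resources until a fence attempt succeeds (shown for completeness; this is the documented default behavior, not a setting from the thread):

    pcs property set stonith-enabled=true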

Re: [DRBD-user] DRBD 9 Primary-Secondary, Pacemaker, and STONITH

2018-11-19 Thread Bryan K. Walton
On Wed, Nov 14, 2018 at 12:58:49PM +1100, Igor Cicimov wrote: >> I don't understand this. If the power fails to a node, then won't the node, by definition, be down (since there is no power going to the node)? So, how then could there be a split brain when one node has no power? And ho…

[DRBD-user] DRBD 9 Primary-Secondary, Pacemaker, and STONITH

2018-11-13 Thread Bryan K. Walton
I have a two-node DRBD 9 resource configured in Primary-Secondary mode with automatic failover managed by Pacemaker. I know that I need to configure STONITH in Pacemaker and then set DRBD's fencing to "resource-and-stonith". The nodes are Supermicro servers with IPMI. I'm planning to use IP…
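
A sketch of IPMI-based STONITH with fence_ipmilan, one device per node, each kept off the node it fences (addresses and credentials are placeholders):

    pcs stonith create fence_storage1 fence_ipmilan \
        pcmk_host_list=storage1 ipaddr=192.168.100.11 \
        login=admin passwd=secret lanplus=1
    pcs constraint location fence_storage1 avoids storage1
    # ...and the mirror image: fence_storage2 targeting storage2.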

Re: [DRBD-user] Configuring a two-node cluster with redundant nics on each node?

2018-10-19 Thread Bryan K. Walton
On Thu, Oct 18, 2018 at 04:47:53PM +1100, Adi Pircalabu wrote: > Why aren't you using Ethernet bonding? Thanks, Adi. We are rethinking our network configuration. We may do our replication through a directly cabled and bonded connection, bypassing our switches. This would simplify our drbd co…
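
A directly cabled replication link of this kind is often set up as an active-backup bond; a CentOS 7 ifcfg sketch, with interface names and addresses invented for the example:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=active-backup miimon=100"
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.100.1
    PREFIX=24

    # /etc/sysconfig/network-scripts/ifcfg-eth2 (and likewise for eth3)
    DEVICE=eth2
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes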

[DRBD-user] Configuring a two-node cluster with redundant nics on each node?

2018-10-17 Thread Bryan K. Walton
Hi, I'm trying to configure a two-node cluster where each node has dedicated redundant NICs. Storage node 1 has two private IPs (10.40.1.3, 10.40.2.2); storage node 2 has two private IPs (10.40.1.2, 10.40.2.3). I'd like to configure the resource so that the nodes have two possible paths to the other…
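
DRBD 9 can express exactly this with multiple path sections inside a connection; a sketch using the addresses from the post (the hostnames and port are assumptions):

    resource r0 {
      connection {
        path {
          host storage1 address 10.40.1.3:7789;
          host storage2 address 10.40.1.2:7789;
        }
        path {
          host storage1 address 10.40.2.2:7789;
          host storage2 address 10.40.2.3:7789;
        }
      }
    }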