What does node-id do?  I have never used it and it's not listed in "Configuring your resource" https://docs.linbit.com/docs/users-guide-8.4/#s-configure-resource

My setup looks more like the example here:

resource r0 {
  on alice {
    device    /dev/drbd1;
    disk      /dev/sda7;
    address   10.1.1.31:7789;
    meta-disk internal;
  }
  on bob {
    device    /dev/drbd1;
    disk      /dev/sda7;
    address   10.1.1.32:7789;
    meta-disk internal;
  }
}
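
For comparison, a DRBD 9 style version of the same resource would apparently carry a node-id per host, roughly like this sketch (the node-id values and the connection-mesh section are my guesses, not copied from the guide):

resource r0 {
  on alice {
    node-id   0;
    device    /dev/drbd1;
    disk      /dev/sda7;
    address   10.1.1.31:7789;
    meta-disk internal;
  }
  on bob {
    node-id   1;
    device    /dev/drbd1;
    disk      /dev/sda7;
    address   10.1.1.32:7789;
    meta-disk internal;
  }
  connection-mesh {
    hosts alice bob;
  }
}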

Is it a drbd 9.0.x / LINSTOR thing?  I'm using 8.9.6.

*Paul O'Rorke*
*Tracker Software Products (Canada) Limited *
www.tracker-software.com
Tel: +1 (250) 324 1621
Fax: +1 (250) 324 1623


Support:
http://www.tracker-software.com/support
Download latest Releases
http://www.tracker-software.com/downloads/




On 2019-01-17 1:19 a.m., Roland Kammerer wrote:
On Wed, Jan 16, 2019 at 09:47:05AM -0500, Shawn Southern wrote:
I'm very new to DRBD, so please bear with me if my terminology is wrong (or
if I've completely done everything wrong, please let me know!).  I've not
had much luck finding this scenario in the documentation.

Initially, I only had a single system I could configure.  The other
identical system was running Hyper-V, so I had to get one system up with
CentOS, KVM & DRBD, migrate the workloads to it, and then reformat the
Hyper-V box with CentOS, KVM & DRBD.

System 1 (vmh1) is 10.13.119.15; System 2 (vmh2), the one I have to add, is
10.13.119.16.  I've got the workload migrated to 'vmh1', and 'vmh2' is now
reinstalled with CentOS, KVM & DRBD.  What I need to know is: how do I
safely add sdb & sdc on vmh2 to the existing single-node DRBD so that the
data replicates and I don't lose my workload?
On both sides you change the res file like this:

In /etc/drbd.d I have:
file global_common.conf:
global {
  usage-count no;
}
common {
  net {
   protocol C;
  }
}

file drbd0.res:
resource drbd0 {
         device /dev/drbd0;
         meta-disk internal;
         net {
                 cram-hmac-alg sha256;
                 shared-secret "ThisIsSecret1";
         }
         on vmh1 {
                 node-id 1;
                 address 10.13.119.15:7789;
                 disk /dev/sdb;
         }
^^ you add a "on vmh2" section with a different node-id;

^^ you add a "connection-mesh" section naming both nodes.

}
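
Putting those two changes together, the whole res file would look roughly like this (node-id 2, the port, and the vmh2 disk path are assumptions mirroring the vmh1 section; the vmh2 address is from your mail):

resource drbd0 {
         device /dev/drbd0;
         meta-disk internal;
         net {
                 cram-hmac-alg sha256;
                 shared-secret "ThisIsSecret1";
         }
         on vmh1 {
                 node-id 1;
                 address 10.13.119.15:7789;
                 disk /dev/sdb;
         }
         on vmh2 {
                 node-id 2;
                 address 10.13.119.16:7789;
                 disk /dev/sdb;
         }
         connection-mesh {
                 hosts vmh1 vmh2;
         }
}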
Then you:
- run "drbdadm create-md" on the second host to create the meta data
- run "drbdadm adjust" on both.

Regards, rck
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user