Hi all,

I'm trying to set up DRBD for use with KVM virtual machines so that I can have physical redundancy for my services. I followed this guide to use LVM volumes as my DRBD backing devices: http://www.drbd.org/users-guide-8.3/s-lvm-lv-as-drbd-backing-dev.html and then followed http://www.drbd.org/users-guide-8.3/s-first-time-up.html to enable the resource r0.
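In case it helps, the setup on each node looked roughly like this (a sketch from memory using the names from my config below, not a verbatim transcript):

   # create the backing LV, then initialise DRBD's metadata and bring the resource up
   lvcreate --name test --size 20G VirtualMachines
   drbdadm create-md r0
   drbdadm up r0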

I did the initial device synchronization as per http://www.drbd.org/users-guide-8.3/s-initial-full-sync.html and got the expected results. I actually did this twice, once for each of two resources; it took a while, but eventually the 20GB volumes were reported as:

   root@kvm-srv-01:~# cat /proc/drbd
   version: 8.3.11 (api:88/proto:86-96)
   srcversion: 41C52C8CD882E47FB5AF767

     1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
        ns:0 nr:0 dw:0 dr:1220 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
     2: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
        ns:20970844 nr:0 dw:0 dr:20971696 al:0 bm:1280 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

and

   root@kvm-srv-02:~# cat /proc/drbd
   version: 8.3.11 (api:88/proto:86-96)
   srcversion: 41C52C8CD882E47FB5AF767

     1: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
        ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
     2: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
        ns:0 nr:20970844 dw:20970844 dr:672 al:0 bm:1280 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
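For completeness, the initial sync on each resource was kicked off with the 8.3-style command from that guide, run on the node that was to become primary (it overwrites the peer, so it is only appropriate for the very first sync):

   drbdadm -- --overwrite-data-of-peer primary r0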

The two resources are each to be used for testing guest virtual machines with KVM/DRBD. For the first, I installed the guest before creating the DRBD block device, and after synchronization all the files were available to use in a VM. For the second, I created the DRBD block device before installing the VM. I tested both orders because I want to be able both to create new VMs on DRBD and to migrate existing ones to it.

What I'm seeing is that changes on the primary do not seem to appear on the secondary. My understanding is that in single-primary mode with Protocol C, a disk write on the primary is only reported as complete after it has also reached stable storage on the secondary.
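As an aside, I gather DRBD's online verify could confirm whether the two copies actually match; if I'm reading the docs right it needs a verify-alg set in the syncer section first, something like:

   # after adding "verify-alg md5;" to the syncer section on both nodes
   drbdadm adjust r0
   drbdadm verify r0    # out-of-sync blocks show up in the kernel log and the oos counter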

If the writes are going to both nodes, shouldn't I see non-zero values for disk reads (dr) and disk writes (dw) in /proc/drbd?
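A simple test might be to write directly to the DRBD device on the primary and watch the counters on both nodes. This clobbers whatever is on the device, so it is only safe on a scratch resource with the VM shut down:

   # on the primary
   dd if=/dev/zero of=/dev/drbd1 bs=1M count=100 oflag=direct
   cat /proc/drbd    # dw should rise here; nr and dw should rise on the peer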

Perhaps the issue is that I'm using the LVM volumes directly for my KVM VMs. I'm not sure whether I should instead be using the nested approach described here: http://www.drbd.org/users-guide-8.3/s-nested-lvm.html
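If I understand that page, nesting means making the DRBD device itself a PV and carving the guest volumes out of a VG on top of it, roughly (the VG/LV names here are just examples):

   pvcreate /dev/drbd1
   vgcreate replicated /dev/drbd1
   lvcreate --name vm-disk --size 10G replicated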

Does anyone have any thoughts on why my data doesn't seem to be replicating, or can you point me in the right direction for digging a little deeper? My limited understanding is that DRBD should sit between my host and the LVM volume, but perhaps the VM is writing directly to the LV and not going through DRBD.
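For what it's worth, my assumption is that the guest's disk definition has to reference the DRBD device (/dev/drbd1 or /dev/drbd2) rather than the backing LV; if it points at /dev/mapper/VirtualMachines-test directly, the writes would bypass DRBD entirely. Checking should be something like:

   virsh dumpxml <guest-name> | grep 'source dev'
   # should show /dev/drbd1 (or /dev/drbd2), not the backing LV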

Thanks in advance.

--

*Paul O'Rorke*
Tracker Software Products
p...@tracker-software.com


For reference, here is my DRBD configuration (resource definitions, followed by the global and common sections):

resource r0 {
  device    /dev/drbd1;
  disk      /dev/mapper/VirtualMachines-test;
  meta-disk internal;
  on kvm-srv-01 {
    address   192.168.2.31:7789;
  }
  on kvm-srv-02 {
    address   192.168.2.32:7789;
  }
}
resource r1 {
  device    /dev/drbd2;
  disk      /dev/VirtualMachines/test2;
  meta-disk internal;
  on kvm-srv-01 {
    address   192.168.2.41:7789;
  }
  on kvm-srv-02 {
    address   192.168.2.42:7789;
  }
}
global {
        usage-count yes;
        # minor-count dialog-refresh disable-ip-verification
}

common {
        protocol C;

        handlers {
                # The following 3 handlers were disabled due to #576511.
                # Please check the DRBD manual and enable them, if they make sense in your setup.
                # pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                # pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                # local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";

                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }

        disk {
                # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
                # no-disk-drain no-md-flushes max-bio-bvecs
        }

        net {
                # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
                # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
                # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
        }

        syncer {
                # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        }
}