Thank you,

Some more details of our situation:
Sometimes one of the two nodes (running most of the VMs) freezes
and is rebooted (by DRBD or STONITH, I'm not sure which).

We use the dual-primary DRBD resource for a cLVM,
and the LVs of the cLVM for Xen images.
The cLVM and the Xen VMs are controlled by corosync/pacemaker;
the DRBD is independent of Pacemaker (the SuSE DRBD agent does
not support dual primary).
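For reference, the ocf:linbit:drbd agent shipped with DRBD itself can run dual-primary under Pacemaker when the master/slave resource is given master-max=2. A sketch in crm shell syntax, following the example in the DRBD User's Guide (the resource names p_drbd_r0 and ms_drbd_r0 are made up, adapt to your cluster):

```
# Sketch only: resource names are hypothetical.
primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource="r0" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
ms ms_drbd_r0 p_drbd_r0 \
    meta master-max="2" master-node-max="1" \
         clone-max="2" clone-node-max="1" notify="true"
```

With master-max=2, Pacemaker promotes DRBD to Primary on both nodes and can order the cLVM and Xen resources after the promotion.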


My question: is this DRBD configuration suitable?

global {
        # usage-count yes;
        usage-count no;
        # minor-count dialog-refresh disable-ip-verification
}

common {
        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
                wfc-timeout 1;
                degr-wfc-timeout 1;
        }

        options {
                # cpu-mask on-no-data-accessible
        }

        disk {
                # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
                # disk-drain md-flushes resync-rate resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout
        }

        net {
                # protocol timeout max-epoch-size max-buffers unplug-watermark
                # connect-int ping-int sndbuf-size rcvbuf-size ko-count
                # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
                # after-sb-1pri after-sb-2pri always-asbp rr-conflict
                # ping-timeout data-integrity-alg tcp-cork on-congestion
                # congestion-fill congestion-extents csums-alg verify-alg
                # use-rle
        }
}
resource r0 {
  startup {
    become-primary-on both;
  }
  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  device /dev/drbd_r0 minor 0;
  meta-disk internal;
  on ha1infra {
    address 172.16.232.11:7788;
    disk /dev/disk/by-id/scsi-3600a0b8000421088000031e251d12e51-part1;
  }
  on ha2infra {
    address 172.16.232.12:7788;
    disk /dev/disk/by-id/scsi-360080e500036de180000035151d0f3e5-part1;
  }
# NOTE: bandwidth: 10 Gbit/s is roughly 1 GByte/s, i.e. about 1000 MByte/s;
# one third of that is 300 MByte/s, we use 240 MByte/s
  syncer {
    rate 240M;
  }
}
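One point worth noting about the configuration above: with allow-two-primaries enabled and the fence-peer handler commented out, losing the replication link while both nodes are Primary leads almost directly to split brain. The DRBD User's Guide pairs dual-primary with resource-and-stonith fencing using the crm-fence-peer.sh script that is already referenced (commented out) in the handlers section. A sketch of the additional sections for r0, assuming Pacemaker provides working STONITH:

```
resource r0 {
  # ... existing sections unchanged ...
  disk {
    # Suspend I/O and call the fence-peer handler when the peer is lost;
    # this relies on STONITH being configured in the cluster manager.
    fencing resource-and-stonith;
  }
  handlers {
    # Set a Pacemaker constraint preventing the outdated peer from promoting.
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    # Remove that constraint again once resync has completed.
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```

This is only a sketch based on the standard handler scripts shipped in /usr/lib/drbd; whether it fits here depends on how STONITH is set up in the cluster.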

For now we are using only one node, until the situation is clear.

Karl


On 31.10.2013 09:31, Karl Rößmann wrote:
Hi,

we have a dual primary DRBD with a 8TB resource.
Is this ok ? Is there a size limit ?

Karl

Hi Karl,

Please look here: http://blogs.linbit.com/p/169/maximum-volume-size/

Albert.

_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user




--
Karl Rößmann                            Tel. +49-711-689-1657
Max-Planck-Institut FKF                 Fax. +49-711-689-1632
Postfach 800 665
70506 Stuttgart                         email [email protected]