[DRBD-user] On-line device verification
Hello,

I have two questions about on-line device verification:

1. After a verification was started with 'drbdadm verify ', is it possible to detect that the verification has finished?
2. When out-of-sync blocks are detected, what do the kernel log entries look like?

Regards
Christoph

___
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user
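One way to watch for completion is to poll the status output. This is only a sketch, assuming that `drbdadm status` shows the VerifyS/VerifyT connection states while a verify pass runs (check your DRBD version's output); "r0" is a placeholder resource name:

```shell
# verify_running STATUS: succeed while the status text still shows an
# online-verify state. DRBD uses VerifyS on the source side and VerifyT on
# the target side of a running verify pass (assumption based on the
# documented connection states; verify against your version).
verify_running() {
    printf '%s\n' "$1" | grep -qE 'Verify[ST]'
}

# Intended use on a real cluster (commented out here):
# while verify_running "$(drbdadm status r0)"; do sleep 10; done
# dmesg | grep -i 'drbd' | grep -iE 'verify|out of sync'
```

The grep over `dmesg` is deliberately loose: the exact wording of the verify-result messages varies between DRBD versions, so matching on "verify" and "out of sync" is a starting point rather than a guaranteed pattern.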
Re: [DRBD-user] drbd raid stays at state peer: Connecting
Hello,

> I doubt there's something blocking the resource "hastig" as I assume the
> resource was working fine in the past on both nodes?

The resource is new and has never been in sync between the two nodes so far.

> Does "drbdadm disconnect hastig" followed by "drbdadm adjust hastig" make a
> difference?

Hm. Executing that on the secondary node shows:

??: Failure: (162) Invalid configuration request
additional info from kernel:
unknown connection
Command 'drbdsetup-84 disconnect ipv4:129.217.5.61:7790 ipv4:129.217.5.62:7790' terminated with exit code 10

On the other host the output is the same, only with the IP addresses the other way round.

Regards
Christoph
[DRBD-user] drbd raid stays at state peer: Connecting
Hello,

on two servers, I created two drbd resources, where one server is the primary for the first resource and the other server is the primary for the second resource. A dump of my setup is attached. Unfortunately, DRBD works for the first resource, but it does not for the second. For almost two days now, "drbdadm status" has shown on the first machine:

grobi role:Primary
  volume:2 disk:UpToDate
  peer role:Secondary
    volume:2 replication:Established peer-disk:UpToDate

hastig role:Secondary
  volume:1 disk:Inconsistent
  volume:3 disk:Inconsistent
  volume:4 disk:Inconsistent
  volume:5 disk:Inconsistent
  peer: Connecting

On the second machine, it shows:

grobi role:Secondary
  volume:2 disk:UpToDate
  peer role:Primary
    volume:2 replication:Established peer-disk:UpToDate

hastig role:Primary
  volume:1 disk:UpToDate
  volume:3 disk:UpToDate
  volume:4 disk:UpToDate
  volume:5 disk:UpToDate

Any idea what is going on, and what I can do now to make the resource hastig work? The two machines are on the same network and I have disabled all local IP filter rules, so I do not know what might be blocking the connection between the two servers ...
Regards
Christoph

# /etc/drbd.conf
global {
    usage-count yes;
    cmd-timeout-medium 600;
    cmd-timeout-long 0;
}

common {
    net {
        max-buffers 8000;
        max-epoch-size 8000;
        sndbuf-size 0;
    }
    disk {
        al-extents 3389;
    }
    handlers {
        before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
}

# resource grobi on ernie: not ignored, not stacked
# defined at /etc/drbd.d/grobi.res:1
resource grobi {
    on ernie {
        volume 2 {
            device /dev/drbd2 minor 2;
            disk /dev/vg0/game;
            meta-disk internal;
        }
        address ipv4 129.217.5.61:7789;
    }
    on bert {
        volume 2 {
            device /dev/drbd2 minor 2;
            disk /dev/vg0/game;
            meta-disk internal;
        }
        address ipv4 129.217.5.62:7789;
    }
}

# resource hastig on ernie: not ignored, not stacked
# defined at /etc/drbd.d/hastig.res:1
resource hastig {
    on ernie {
        volume 1 { device /dev/drbd1 minor 1; disk /dev/vg0/dhcp; meta-disk internal; }
        volume 3 { device /dev/drbd3 minor 3; disk /dev/vg0/ls4_linf; meta-disk internal; }
        volume 4 { device /dev/drbd4 minor 4; disk /dev/vg0/schlick; meta-disk internal; }
        volume 5 { device /dev/drbd5 minor 5; disk /dev/vg0/tux; meta-disk internal; }
        address ipv4 129.217.5.61:7790;
    }
    on bert {
        volume 1 { device /dev/drbd1 minor 1; disk /dev/vg0/dhcp; meta-disk internal; }
        volume 3 { device /dev/drbd3 minor 3; disk /dev/vg0/ls4_linf; meta-disk internal; }
        volume 4 { device /dev/drbd4 minor 4; disk /dev/vg0/schlick; meta-disk internal; }
        volume 5 { device /dev/drbd5 minor 5; disk /dev/vg0/tux; meta-disk internal; }
        address ipv4 129.217.5.62:7790;
    }
}
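Since hastig stays in Connecting, a first check is whether anything answers on the replication port at all. A minimal sketch; the port-extraction helper is the only part that runs anywhere, the probes below it assume the hostnames and addresses from the dump above:

```shell
# drbd_port: extract the TCP port from "address ipv4 A.B.C.D:PORT;" lines
# as they appear in the drbdadm dump output above.
drbd_port() {
    sed -nE 's/.*address ipv4 [0-9.]+:([0-9]+);.*/\1/p'
}

# On a real node one would then probe the peer (commented out here):
# ss -tln | grep ":$(drbdadm dump hastig | drbd_port | head -n1)"   # listening locally?
# nc -z -w3 129.217.5.62 "$(drbdadm dump hastig | drbd_port | head -n1)"  # peer reachable?
```

If the peer's port 7790 is not reachable while 7789 (the working grobi resource) is, that points at something filtering the second port rather than at the DRBD configuration.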
[DRBD-user] Trying to setup drbd with keeping LV data
Hello,

I am trying to set up drbd volumes on top of LVM logical volumes and want to keep the existing data on the logical volumes. I found a HOWTO for that, but when I try to create the metadata with 'drbdadm create-md', I get:

md_offset 0
al_offset 4096
bm_offset 36864

Found some data

==> This might destroy existing data! <==

Do you want to proceed?
[need to type 'yes' to confirm]

And a test on a less important machine showed that it really destroys data. This is my resource file:

resource jovelin {
    volume 1 {
        device /dev/drbd1;
        disk /dev/vg0/home;
        flexible-meta-disk /dev/vg0/home_metadata;
    }
    volume 2 {
        device /dev/drbd2;
        disk /dev/vg0/vservers;
        flexible-meta-disk /dev/vg0/vservers_metadata;
    }
    on tristan {
        address 129.217.5.65:7789;
    }
    on isolde {
        address 129.217.5.66:7789;
    }
}

Why does drbdadm create-md destroy data on the logical volume even though I defined the metadata to be on an external volume? What can I do now to create the drbd device without losing data?

Regards
Christoph
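One thing worth ruling out before retrying is whether the external metadata LVs are large enough for the data LVs they serve. A sketch of the size estimate, assuming the formula for externally stored metadata given in the DRBD 8.4 user's guide (verify against the guide for your version before relying on it):

```shell
# meta_sectors DATA_SECTORS: estimated size of external DRBD metadata in
# 512-byte sectors, following the formula from the DRBD 8.4 user's guide:
#   ceil(data_sectors / 2^18) * 8 + 72
# (Assumption: this formula matches the DRBD version in use.)
meta_sectors() {
    echo $(( (($1 + 262143) / 262144) * 8 + 72 ))
}

# Example: a 100 GiB LV has 100*1024*1024*2 = 209715200 sectors,
# so: meta_sectors 209715200  ->  6472 sectors (a few MiB)
```

The data sector count of an LV can be read with `blockdev --getsz /dev/vg0/home`; comparing `meta_sectors` of that value against the size of `/dev/vg0/home_metadata` shows whether the metadata LV is big enough.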
[DRBD-user] Fail-over when primary server is down
Hello,

in section 4.6 of the user's guide, the basic manual fail-over process is described as containing a call of "drbdadm secondary " on the previous primary server. But what about when that server is so broken that it is completely down and the command cannot be entered? Will the previous secondary server still accept the command "drbdadm primary "?

Regards
Christoph
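For the case where the old primary is unreachable, `drbdadm primary` supports a `--force` flag that promotes a node without the peer, at the price of accepting a possible split brain if the dead node later returns with diverged data. A hedged sketch (the resource name "hastig" is borrowed from earlier in this digest; whether plain `drbdadm primary` succeeds without `--force` depends on local disk state and quorum settings). Only the small state check below runs anywhere:

```shell
# needs_force STATUS: succeed when the drbdadm status text shows no
# connected peer ("peer: Connecting" / "peer: StandAlone", as seen earlier
# in this digest), i.e. the situation where promotion may need --force.
needs_force() {
    printf '%s\n' "$1" | grep -qE 'peer:? *(Connecting|StandAlone)'
}

# On the surviving node (commented out; needs a real cluster):
# if needs_force "$(drbdadm status hastig)"; then
#     drbdadm primary --force hastig
# fi
```

After a forced promotion, reconciling the two nodes once the broken server returns is a separate step; the user's guide's split-brain recovery section covers that.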