Hi David,
have a look at the documentation of drbdadm; the commands up, down and
adjust might be what you are looking for.
Best regards,
// Veit
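
For reference, the drbdadm commands mentioned above are used roughly like this (a sketch; the resource name r0 is an assumption):

```shell
drbdadm down r0     # deconfigure the resource: detach backing disk, disconnect peer
drbdadm up r0       # configure it again from the resource file
drbdadm adjust r0   # apply resource file changes to an already-running resource
```

adjust is handy because it only applies the delta between the running state and the configuration, without a full down/up cycle.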
On Monday, 22.07.2019 at 16:34 +, Banks, David (db2d) wrote:
> Hello,
>
> Is there a way to delay the loading of DRBD resources until after
Hi,
just to revive an older discussion: might your setup be using files opened
with O_DIRECT? This allows the modification of buffers in-flight (read: changes
while they are being sent to the peer) and is commonly used by some
applications to bypass Linux' caching mechanisms.
If you are
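
Whether a file descriptor of a running process (e.g. a VM's disk image) is actually open with O_DIRECT can be checked via /proc; a sketch, using this shell's own fd 1 as a stand-in, and assuming the x86 Linux O_DIRECT flag value 040000:

```shell
# Read the open-flags field of /proc/<pid>/fdinfo/<fd>; octal bit 040000
# is O_DIRECT on x86 Linux. pid/fd below stand in for the real process.
pid=$$; fd=1
flags=$(awk '/^flags:/ {print $2}' "/proc/$pid/fdinfo/$fd")
if [ $(( 0$flags & 040000 )) -ne 0 ]; then
    echo "fd $fd of pid $pid is open with O_DIRECT"
else
    echo "fd $fd of pid $pid is not using O_DIRECT"
fi
```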
Hi Michael,
my hint is to have a look at the wipe-md command of drbdadm or
drbdmeta, respectively.
Alternatively, it might suffice to simply resize the FS on the former
backing device to the backing device's full size. As internal meta
data is stored at the end of the block device, resizing the FS would
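
A minimal sketch of that alternative, assuming ext4 on /dev/sdb1 as the former backing device:

```shell
# Internal DRBD meta data lives at the end of the backing device, so once
# DRBD no longer uses it, the filesystem can simply be grown over it:
e2fsck -f /dev/sdb1    # resize2fs insists on a freshly checked filesystem
resize2fs /dev/sdb1    # without a size argument it grows to the device size
```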
On Thursday, 26.07.2018 at 17:31 +0200, Lars Ellenberg wrote:
> > global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]
> >
> > or even more strict:
> >
> > global_filter = [ "a|^/dev/md4$|", "r/.*/" ]
>
> Uhm, no.
> Not if he wants DRBD to be his PV...
> then he needs to exclude (reject) the
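
Following Lars's point, a filter for DRBD-as-PV would be the other way around, e.g. (a sketch for /etc/lvm/lvm.conf):

```
# accept only DRBD devices as PVs, reject everything else (incl. /dev/md4)
global_filter = [ "a|^/dev/drbd.*$|", "r|.*|" ]
```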
Hi Eric,
On Thursday, 26.07.2018 at 13:56 +, Eric Robinson wrote:
> Would there really be a PV signature on the backing device? I didn't turn md4
> into a PV (did not run pvcreate /dev/md4), but I did turn the drbd disk into
> one (pvcreate /dev/drbd1).
both DRBD and mdraid put their
Hi Roman,
what you experienced is the expected behaviour of a primary-primary
setup when the nodes are being disconnected from each other. It is
called a split-brain situation and ensures that data stays
available/accessible on both sides without further corruption.
Usually you want to set up a
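
If such a split brain later has to be resolved manually, the usual sequence is the following (a sketch, assuming resource r0 and that one node's changes may be discarded):

```shell
# On the node whose changes are to be thrown away:
drbdadm disconnect r0
drbdadm secondary r0
drbdadm connect --discard-my-data r0

# On the surviving node (only needed if it is StandAlone):
drbdadm connect r0
```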
Well, I assume you are on el7 here. Adapt to other distros if required.
1. Install dkms, for el7 it is available in EPEL:
# yum install dkms
2. Untar the drbd tarball in /usr/src/; for drbd 8.4.11-1, you should
now have a directory /usr/src/drbd-8.4.11-1/.
3. Create a file
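
The file in step 3 is presumably a dkms.conf; a minimal sketch for /usr/src/drbd-8.4.11-1/dkms.conf, where all values are assumptions to adapt:

```
PACKAGE_NAME="drbd"
PACKAGE_VERSION="8.4.11-1"
MAKE[0]="make -C drbd KDIR=/lib/modules/${kernelver}/build"
BUILT_MODULE_NAME[0]="drbd"
BUILT_MODULE_LOCATION[0]="drbd/"
DEST_MODULE_LOCATION[0]="/kernel/drivers/block/"
AUTOINSTALL="yes"
```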
Hi Eric,
if your distro is el (e.g. RHEL/CentOS/Scientific), the kernel ABI
*should* not change during kernel updates, and copying modules from
older kernel versions as "weak updates" is not uncommon, following the
slogan "old module is better than no module". This is for example the
case for
Hi GC,
keeping track of changed blocks is implemented using bitmaps, which are
part of the meta data on both sides. These bitmaps are always of full
size, so they will not grow by just changing blocks.
So unless you added something consuming resources, e.g. event handlers
performing LVM
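
The fixed-size point is easy to sanity-check with shell arithmetic, assuming the usual granularity of one dirty bit per 4 KiB block:

```shell
# Bitmap size is fixed by device size, not by how many blocks change:
dev_bytes=$(( 1024 * 1024 * 1024 * 1024 ))   # example: 1 TiB backing device
bits=$(( dev_bytes / 4096 ))                 # one dirty bit per 4 KiB block
bitmap_bytes=$(( bits / 8 ))
echo "bitmap size: $bitmap_bytes bytes"      # 33554432 bytes = 32 MiB per TiB
```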
Hi Nico,
if you want DRBD to promote one node automatically at boot-up, your
resource file(s) will need a startup{} section featuring a
become-primary-on statement.
If your setup is not yet in production, you might consider using single
DRBD resources per VM instead of a replicated filesystem,
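
The startup section mentioned above might look like this (a sketch; the node name is an assumption):

```
resource r0 {
    startup {
        become-primary-on alice;   # or "both" for dual-primary setups
    }
    # disk/net/on-host sections omitted
}
```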
Hi Ondrej,
yes, this is perfectly normal in single-primary environments. DRBD simply does
not permit access to the resource block devices until it is promoted to
primary. What you describe would only work in dual-primary environments, but
running such an environment also requires a lot more
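
The resulting single-primary access pattern, as a sketch (resource and device names are assumptions):

```shell
drbdadm primary r0       # promote; refused while the peer is primary
mount /dev/drbd1 /mnt    # only now can the device be opened for writing
# ... use the filesystem ...
umount /mnt
drbdadm secondary r0     # demote so the peer may take over
```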
Hi Jan,
On Tuesday, 10.10.2017 at 13:11 +1300, Jan Bakuwel wrote:
> Thanks for that. Must say that possibility has escaped my attention so
> far. I'm using DRBD in combination with Xen and LVM for VMs so I assume
> O_DIRECT is in play here. Any suggestions where to go from here? A
> search
Hi Jan,
On Tuesday, 10.10.2017 at 06:56 +1300, Jan Bakuwel wrote:
> I've seen OOS blocks in the past where storage stack appeared to be fine
> (hardware wise). What possible causes could there be? Hardware issues, bugs
> in storage stack including DRBD itself, network issues. In most (all?)
Hi Jan,
On Sunday, 08.10.2017 at 13:07 +1300, Jan Bakuwel wrote:
> I'd like to include an automatic disconnect/connect on the secondary if
> out-of-sync blocks were found but so far I haven't found out how I can
> query drbd to find out (apart from parsing the log somehow). I hope
>
On Friday, 18.08.2017 at 15:46 +0200, Gionatan Danti wrote:
> On 18-08-2017 at 14:40, Veit Wahlich wrote:
> > VM live migration requires primary/primary configuration of the DRBD
> > resource accessed by the VM, but only during migration. The resource
> > can be recon
To clarify:
On Friday, 18.08.2017 at 14:34 +0200, Veit Wahlich wrote:
> hosts simultaneously, enables VM live migration and your hosts may even
VM live migration requires primary/primary configuration of the DRBD
resource accessed by the VM, but only during migration. The resource
Hi,
On Friday, 18.08.2017 at 14:16 +0200, Gionatan Danti wrote:
> Hi, I plan to use a primary/secondary setup, with manual failover.
> In other words, split brain should not be possible at all.
>
> Thanks.
having one DRBD resource per VM also allows you to run VMs on both
hosts
Hi Dirk,
this is most likely caused by a NIC model not supported by the guest OS'
drivers, but this is also totally off-topic, as this is the DRBD
ML. I suggest you consult the KVM/qemu/libvirt MLs on this topic
instead.
Best regards,
// Veit
On Friday, 04.08.2017 at 13:59 +
Hi Luke,
I assume you are experiencing the results of data inconsistency by
in-flight writes. This means that a process (here your VM's qemu) can
change a block that already waits to be written to disk.
Whether this happens (undetected) or not depends on how the data is
accessed for writing and
Hi Dan,
On Thursday, 18.08.2016 at 10:33 -0600, dan wrote:
> Simplest solution here is to overbuild. If you are going to do a
> 3-node 'full-mesh' then you should consider 10G ethernet (a Mellanox w/
> cables on ebay is about US$20!). Then you just enable STP
> on all the bridges
On Thursday, 18.08.2016 at 12:33 +0200, Roberto Resoli wrote:
> On 18/08/2016 10:09, Adam Goryachev wrote:
> > I can't comment on the DRBD related portions, but can't you add both
> > interfaces on each machine to a single bridge, and then configure the IP
> > address on the bridge.
>
>any suggestions to find out what happened?
>
> 2016-05-19 4:50 GMT+08:00 Veit Wahlich <cru.li...@zodia.de>:
> > Are you utilising SSDs?
> >
> > Is the kernel log (dmesg) clean from errors on the backing devices (also
> > mdraid members/backing de
com>
Sent: 18 May 2016 18:27:00 CEST
To: Veit Wahlich <cru.li...@zodia.de>
CC: drbd-user@lists.linbit.com
Subject: Re: [DRBD-user] drbdadm verify always report oos
Hi:
I shut down the VM when I found the strange behavior, so DRBD
is resyncing in an idle situation. I tried to play with co
Hi,
how did you configure the VMs' disk caches? In case of qemu/KVM/Xen it
is essential for consistency to configure cache as "write through", any
other setting is prone to problems due to double-writes, unless the OS
inside of the VM uses write barriers.
Although write barriers are default for
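
In a libvirt guest definition, that cache setting corresponds to something like this (a sketch; device paths are assumptions):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writethrough'/>
  <source dev='/dev/drbd1'/>
  <target dev='vda' bus='virtio'/>
</disk>
```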
Hi Roland,
thank you for your reply.
May I suggest to update the documentation accordingly?
Best regards,
// Veit
> Hi,
>
> it is expected behavior, don't worry, everything is fine. The lowest bit
> is used to encode the role (1 == primary, 0 == secondary IIRC).
>
> Regards, rck
On Monday, 02.11.2015 at 17:14 +0100, Lars Ellenberg wrote:
> res=XYZ
> if drbdsetup show $res | grep -q allow-two-primaries; then
>     echo "Two primaries are allowed."
> else
>     echo "Two primaries are not allowed."
> fi
>
> complicated to parse?
> where?
Well, for safety reasons I
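
Lars's check can be tried offline against a captured dump; here the `drbdsetup show` output is replaced by an inlined sample (an assumption standing in for real output, which contains much more):

```shell
# Canned excerpt standing in for "drbdsetup show $res" on a live node:
sample='net {
    protocol           C;
    allow-two-primaries;
}'
if printf '%s\n' "$sample" | grep -q allow-two-primaries; then
    echo "Two primaries are allowed."
else
    echo "Two primaries are not allowed."
fi
```

With the sample above this prints "Two primaries are allowed."; removing the allow-two-primaries line flips it to the other branch.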
Hi,
I am observing a quite strange behaviour:
The generation identifier UUIDs on my nodes differ although the volumes
are connected and reported in-sync (in fact they are really in-sync, I
ran a verify).
According to https://drbd.linbit.com/users-guide/s-gi.html this should
not happen.
I find
On Monday, 02.11.2015 at 15:53 +0100, Lars Ellenberg wrote:
> What's wrong with
> # drbdsetup XYZ show --show-defaults
That is exactly what I was looking for, thank you!
Just a little more complicated to parse than I hoped for, but feasible
without problems.
Regards,
// Veit
Hi,
is there a (preferred) method to determine whether a resource is
currently in dual-primary mode, e.g. show the active net-options?
I use "drbdadm net-options --protocol=C --allow-two-primaries " to
activate dual-primary mode temporarily for a resource and promote it on
the former secondary, so
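
Spelled out, the temporary dual-primary sequence looks roughly like this (the resource name r0 is an assumption):

```shell
drbdadm net-options --protocol=C --allow-two-primaries r0   # on both nodes
drbdadm primary r0        # on the former secondary

# ... live migration or other two-primary work happens here ...

drbdadm secondary r0      # demote the node no longer needed as primary
drbdadm adjust r0         # revert net options to what the config file says
```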
Hi Ivan,
thank you for your suggestions.
On Thursday, 29.10.2015 at 18:28 +0200, Ivan wrote:
> you may be interested by this:
>
> https://github.com/taradiddles/cluster/blob/master/libvirt_hooks/qemu
>
> I wrote it some time ago as a qemu hook before ending up setting up a full
> fledged