Which version of drbdmanage/utils/kmod are you using?
Many issues have been sorted out in latest versions.
Yannis
On Wed, 30 Aug 2017 at 07:58, 杨成伟 wrote:
> Hi All,
>
> I'm following the DRBD 9.x user guide and have a 3-node (2 control, 1
> satellite) setup.
>
> However, I'm not sure what I've done
If there are only 2 nodes, it's better to stick to drbd8.
Yannis
On Tue, 29 Aug 2017 at 05:28, Digimer wrote:
> On 2017-08-28 09:28 PM, 大川敬臣 wrote:
> > I'm planning to build two MySQL DB servers synchronized by DRBD 9.0
> > on RHEL 7.3.
> > I want to enable checksum-based synchronization
On 26-08-2017 13:56 Yannis Milios wrote:
> Have you considered a HA NFS over a 2-node DRBD8 cluster ? Should work
> well on most hypervisors (qcow2,raw,vmdk based).
>
> Yannis
>
Hi Yannis, yes, I considered that.
However, as this would be a converged setup (ie: virtual
Have you considered a HA NFS over a 2-node DRBD8 cluster ? Should work well
on most hypervisors (qcow2,raw,vmdk based).
Yannis
On 26 Aug 2017 12:16, "A.Rubio" wrote:
> Hi all.
>
> I wrote a book about this some years ago for a course.
>
> The book is about pacemaker, drbd, gfs2 and kvm, for
Hello,
Personally I'm using option (a) on a 3 node proxmox cluster and drbd9.
Replica count per VM is 2 and all 3 nodes act as both drbd control volume
and satellite nodes. I can live migrate VMs across all 3 nodes without issues.
Snapshots are also possible via drbdmanage + zfs snapshot + clones
cap
Hello,
Personally I'm using option (a) on a 3 node proxmox cluster and drbd9.
Replica count per VM is 2 and all 3 nodes act as both drbd control volume
and satellite nodes. I can live migrate VMs between all nodes and snapshot
them by using the drbdmanage utility (which uses zfs snapshots + clones).
>> so drbd does not work after reboot (the old drbd_transport_tcp loading
>> fails, of course)
I had the same problem but I managed to overcome it by issuing the
following commands (after the upgrade to 9.0.8):
>
> dkms remove -m drbd -v 9.0.8+linbit-1 --all
>
> dkms install -m drbd -v 9.0.8+linbit-1
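For completeness, a quick way to verify that the rebuilt module is actually
the one being loaded (rough sketch, adjust the version string to whatever
dkms reports):

dkms status | grep drbd               # should list 9.0.8+linbit-1 as installed
modprobe -r drbd_transport_tcp drbd   # unload stale modules (only with DRBD stopped)
modprobe drbd_transport_tcp           # pulls in drbd as a dependency
cat /proc/drbd                        # should now report the 9.0.8 module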
Thanks for bringing the latest version in PVE repo!
On Wed, 28 Jun 2017 at 07:53, Roland Kammerer
wrote:
> On Tue, Jun 27, 2017 at 10:40:40PM +0200, Roberto Resoli wrote:
> > On 19/06/2017 18:32, Philipp Reisner wrote:
> > > Hi,
> > >
> > > finally it is there! With two rc's and surprisin
Hello,
There seems to be something preventing creating control volumes on ha11b.
Have you tried to run 'drbdmanage uninit' on that node first? That will
wipe any existing drbd control volume information on that node; then you
can run 'drbdmanage add-node ...' as you did on the first node.
Also, wh
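Roughly, the sequence would be (node name taken from your mail, the IP is
an assumption; double-check before wiping anything):

# on ha11b, wipe any stale control volume data:
drbdmanage uninit
# then, from a node that is already part of the cluster:
drbdmanage add-node ha11b 10.0.0.11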
You can get an idea here:
http://drbd.linbit.org/en/doc/users-guide-84/re-drbdconf
On Tue, May 16, 2017 at 8:24 AM, DrmFsLxD_17@[JF&LING] <2814935...@qq.com>
wrote:
> hi,
> I want to create three resources for DRBD; how should I do it?
> drbd version :8.4.2
> and I use protocol C(primary/pr
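For what it's worth, on 8.4 each resource typically gets its own file under
/etc/drbd.d/, so three resources means three small .res files. A minimal
sketch (hostnames, devices and addresses are assumptions):

# /etc/drbd.d/r0.res  -- repeat as r1.res and r2.res with their own device and port
resource r0 {
    protocol C;
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on nodeA { address 192.168.10.1:7788; }
    on nodeB { address 192.168.10.2:7788; }
}

After that, 'drbdadm create-md r0' and 'drbdadm up r0' on both nodes, plus
the initial sync as described in the guide linked above.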
Have you tried 'drbdmanage resume-all'?
'drbdmanage restart' can sometimes help as well in such a situation.
On Mon, 10 Apr 2017 at 11:31, Michele Rossetti wrote:
> Any help or suggestion about my previous post?
> Michele
Not 100% sure but the first 'partition' could be where drbd stores metadata?
Your setup should be fine as long as you can easily deal with
future resizing of the backing device (sde).
Yannis
[root@iscsi2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
Thanks a lot for the detailed reply!
Regards,
Yannis
On Fri, Mar 24, 2017 at 11:43 AM, Roland Kammerer <
roland.kamme...@linbit.com> wrote:
> On Fri, Mar 24, 2017 at 11:27:56AM +0000, Yannis Milios wrote:
> > Building initial module for 4.10.1-2-pve
> > Error! Bad return st
Hello,
Proxmox has just released the PVE5 beta. In this version they have removed the
DRBD9 kernel module from their kernel and have replaced it with DRBD 8.4.7
instead.
They said that the DRBD9 kernel module is supported via DKMS; however, I tried
to install it yesterday by using Linbit's PVE4 repo but it failed
You have to investigate what's wrong on pve3 at the lower layers. What type
of storage plugin are you using as the backing device for DRBD? LVM? I'm using
ZFS and so far I'm very happy with it (thin LVM didn't work well for me).
On pve3:
--
drbdadm status
You can also grep for anything
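For example (a rough sketch; the pool name 'drbdpool' is an assumption, use
whatever your storage plugin is configured with):

drbdadm status                 # per-resource state as seen on pve3
journalctl -b | grep -i drbd   # kernel/drbdmanage messages since boot
zfs list | grep drbdpool       # or 'lvs drbdpool' if you use the (thin) LVM plugin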
>
>
> ==
> mke2fs 1.42.12 (29-Aug-2014)
> The file /dev/drbd/by-res/vm-130-disk-1/0 does not exist and no size was
> specified.
In my case I get 'Cannot notify leader' when creating a container on a
target which is not in Primary (drbdctrl control node) mode.
So, whenever I need to create a c
Ok, they are all there, so you just need to remove them by using the drbdmanage
remove-resource command...
> How can we delete the disk from vm-101-disk-1 to vm-101-disk-6
| vm-101-disk-6 | 0 | 32 GiB | 110 | ok |
| vm-101-disk-5 | 0 | 32 GiB | 109 |
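Something along these lines, one resource at a time (names taken from your
listing; check 'drbdmanage remove-resource --help' before adding any force
flag):

drbdmanage remove-resource vm-101-disk-1
...
drbdmanage remove-resource vm-101-disk-6
drbdmanage list-resources     # verify they are gone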
What does:
drbdmanage v and drbdmanage a
...show? Are those resources/volumes still there?
If yes, then giving:
drbdmanage remove-resource <name>
...may fix things. You could also try to force-remove them (check the
drbdmanage remove-resource syntax).
Yannis
On Fri, 17 Mar 2017 at 11:24, Michele Rosse
>> vm-107-disk-1 role:Secondary
>> px2.cluster.local role:Secondary
Since the resource is in Secondary mode on both nodes, it's not possible to
start the VM. You must fix that first.
A two-node setup in drbd9 is not a good idea; you need 3 nodes for quorum (the
third one can be just a control node).
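If you want to check or fix it by hand (resource name from your output;
normally Proxmox/drbdmanage takes care of the promotion itself):

drbdadm status vm-107-disk-1    # confirm the role on each node
drbdadm primary vm-107-disk-1   # on the node where the VM should run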
>> pending actions: reconfigure
First, sorry for suggesting that link; you shouldn't use drbdadm
directly since drbdmanage will override its settings (thanks to Den for noting
that).
Secondly, those pending actions mean that drbdmanage has not applied the
resize actions for some reason(s).
You wil
> the drbd storage becomes unavailable and the drbd quorum is lost..
From my experience, using only 2 nodes on drbd9 does not work well, meaning
that the cluster loses quorum and you have to manually troubleshoot the
split brain.
If you really need a stable system, then use 3 drbd nodes. You could
Does this help?
https://www.drbd.org/en/doc/users-guide-90/s-resizing
Can you also post the output of the following commands?
drbdmanage n
drbdmanage a
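(Those are the short forms of the list commands; the full output of all of
them would help:)

drbdmanage list-nodes         # 'drbdmanage n'
drbdmanage list-assignments   # 'drbdmanage a'
drbdmanage list-volumes       # 'drbdmanage v'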
Yannis
On Mon, 13 Mar 2017 at 08:31, Frank Rust wrote:
> I tried to grow a resource on a drbd9 managed cluster.
> I have a system with 2
Hello,
First, excuse me if this question is not related to DRBD devs, but I wasn't
sure where else to ask.
I'm testing the DRBD9 ZFS storage plugin on a 3 node PVE cluster.
So far everything works flawlessly and I believe that DRBD9 and ZFS make a
great pair!
However I was wondering if there
>> 104:vm-104-disk-1/0 Conn(mpve3)/C'ng(mpve1,mpve2)
>>Seco(mpve3)/Unkn(mpve1,mpve2) UpTo(mpve3)/DUnk(mpve1,mpve2)
Resource vm-104-disk-1 is in Secondary mode on mpve3. It should be in
Primary mode for VM 104 to start.
>>105:vm-105-disk-1/0 Conn(mpve3)/C'ng(mpve1,mpve2)
Seco(mpve3)/Unkn(mpv
Great! many thanks for this..
Regards,
Yannis
On Mon, Feb 13, 2017 at 5:15 PM, Robert Altnoeder <
robert.altnoe...@linbit.com> wrote:
> On 02/13/2017 05:57 PM, Yannis Milios wrote:
> > pve3 still appears to have the resources available and refuses to leave
> > the cluster.
Hello,
I'm testing drbd9 on a test 3-node PVE cluster. I have created a test VM
(vm115) via the PVE web interface.
As a result, a new drbd9 resource and volume was created on drbd9 storage
pool (vm-115-disk-1).
After testing the live migration between all 3 nodes I realised that one of
the nodes (pve3)
Well, I would wipe the secondary, then recreate the metadata and start the
sync from scratch.
Make sure that you definitely don't need anything from the secondary before
proceeding though...
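As a rough sketch (resource name is an assumption; this throws away the
secondary's copy of the data):

# on the secondary that is being rebuilt:
drbdadm down r0
drbdadm create-md r0    # wipe and recreate the DRBD metadata
drbdadm up r0           # reconnect; a full sync from the peer should follow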
>
>
> >>I think this document might help:
> >>https://www.drbd.org/en/doc/users-guide-83/s-initial-full-sync
>
> >>or specifically, this command:
>
> >> drbdadm -- --overwrite-data-of-peer primary resource
>
>
I think that will work only for the initial sync and not in this case, where the
data/metadata
>> Dec 5 10:20:07 localhost kernel: block drbd0: Writing the whole bitmap,
full sync required after drbd_sync_handshake.
>>Dec 5 10:20:07 localhost kernel: block drbd0: meta connection shut down
by peer.
It seems that the secondary node needs a full resync of the data. Is this a
drbd setup insid
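If a full resync of the secondary really is what's needed, the usual way
(resource name is a placeholder) would be:

drbdadm invalidate <resource>   # run on the node holding the bad/outdated copy
watch cat /proc/drbd            # follow the resync progress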
Forgot to say that I'm using the source package from here:
http://oss.linbit.com/drbd/
On 17 November 2015 21:08:52 GMT+00:00, Yannis Milios
wrote:
>Hello,
>
>I'm trying to compile DRBD 8.4.6 against 4.2.3-2-pve kernel (Proxmox
>4.0) but I get the following error:
>
>[QU
Hello,
I'm trying to compile DRBD 8.4.6 against 4.2.3-2-pve kernel (Proxmox 4.0) but I
get the following error:
[QUOTE]Need a git checkout to regenerate drbd/.drbd_git_revision
make[1]: Entering directory '/root/drbd-8.4.6/drbd'
Calling toplevel makefile of kernel source tree, which I belie
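One workaround that might help when the unpacked source is missing that
revision file is to recreate it by hand before building (just a guess,
untested against the pve kernel):

cd /root/drbd-8.4.6
echo "GIT-hash: unknown" > drbd/.drbd_git_revision   # satisfy the check manually
make KDIR=/lib/modules/$(uname -r)/build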
Hello Richard,
Not sure what caused the Inconsistent state on your drbd resource on Node1, but
my guess is that it experienced some kind of low-level corruption on its
backing storage (hard disks) and an auto sync was initiated from Node2.
Are you using Linux RAID? Then probably you don't have a batter
hello,
I've been using zfs on linux for some time and it seems really stable.
I was wondering if zfs could be a better option than lvm as a backing
device for drbd (zvols).
Is ZFS on Linux a supported option on the DRBD side, or not yet?
thank you
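Concretely, what I have in mind would look something like this (pool/zvol
names are just an example):

zfs create -V 10G tank/drbd-r0        # a zvol as the backing device
# and in the resource definition:
#   disk /dev/zvol/tank/drbd-r0;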
hello all!
I have a two node primary/primary config.
My config is as follows:
resource r0 {
  protocol C;
  startup {
    wfc-timeout 15;  # non-zero wfc-timeout can be dangerous
    # (http://forum.proxmox.com/threads/3465-Is-it-safe-to-use-wfc-timeout-in-DRBD-configuration)
    degr-wfc-timeout