Re: [DRBD-user] drbd diskless
Many thanks for the reply. I get an error with "adjust" as well. The
problem was solved with the following:

1) drbdadm disconnect customer_vms
2) drbdadm detach customer_vms
3) drbdadm -c /etc/drbd.conf create-md customer_vms
4) reboot node1

After the reboot everything is OK and the data synchronizes without
problems.

Is it possible to update the DRBD driver by replacing drbd.ko with the
new one?

Sincerely,

On 07/30/2018 10:53 AM, Roland Kammerer wrote:
> On Fri, Jul 27, 2018 at 02:39:23PM +0300, Vaggelis Papastavros wrote:
> > Dear friends,
> >
> > i have installed drbd 9.0.8-1
> >
> > cat /proc/drbd
> > version: 9.0.8-1 (api:2/proto:86-112)
> > GIT-hash: c8bc36701c7c7ffc2c208f620c6d89e4ec265704 build by phil@Build64R7,
> > 2017-06-30 15:43:01
>
> brrr. That is *ways* too old. This version contains many known bugs.
> Don't use it. Really, I mean it.
>
> > [root@sgw-01 ~]# drbdadm attach customer_vms
> > 0: Failure: (127) Device minor not allocated
> > additional info from kernel:
> > unknown minor
> > Command 'drbdsetup attach 0 /dev/sda2 /dev/sda2 internal
> > --on-io-error=detach' terminated with exit code 10
>
> Looks like you started with the "attach"? What do you get on "up", or
> "adjust"?
>
> Regards, rck

_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
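For reference, the recovery sequence described above can be written as a
short script. This is a dry-run sketch: it only prints each command so
the sequence can be reviewed; the resource name customer_vms is the one
from this thread. Note that create-md discards the old metadata, so a
full resync follows.

```shell
#!/bin/sh
# Dry-run sketch of the recovery sequence from the thread above.
# "run" only prints the commands; change it to run() { "$@"; } to execute.
run() { echo "+ $*"; }

run drbdadm disconnect customer_vms                   # drop the replication link
run drbdadm detach customer_vms                       # detach the backing disk
run drbdadm -c /etc/drbd.conf create-md customer_vms  # recreate metadata (full resync follows!)
run reboot                                            # reboot node1; resync starts after boot
```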
Re: [DRBD-user] drbd+lvm no bueno
> > > > Lars,
> > > >
> > > > I put MySQL databases on the drbd volume. To back them up, I pause
> > > > them and do LVM snapshots (then rsync the snapshots to an archive
> > > > server). How could I do that with LVM below drbd, since what I
> > > > want is a snapshot of the filesystem where MySQL lives?
> > >
> > > You just snapshot below DRBD, after "quiescing" the mysql db.
> > >
> > > DRBD is transparent, the "garbage" (to the filesystem) of the
> > > "trailing drbd meta data" is of no concern.
> > > You may have to "mount -t ext4" (or xfs or whatever), if your mount
> > > and libblkid decide that this was a "drbd" type and could not be
> > > mounted. They are just trying to help, really.
> > > Which is good. But in that case they get it wrong.
> >
> > Okay, just so I understand...
> >
> > Suppose I turn md4 into a PV and create one volume group
> > 'vg_under_drbd0', and logical volume 'lv_under_drbd0' that takes 95%
> > of the space, leaving 5% for snapshots.
> >
> > Then I create my ext4 filesystem directly on drbd0.
> >
> > At backup time, I quiesce the MySQL instances and create a snapshot of
> > the drbd disk.
> >
> > I can then mount the drbd snapshot as a filesystem?
>
> Yes.
> Though obviously, those snapshots won't "failover", in case you have a
> node failure and failover during the backup.
> Snapshots in a VG "on top of" DRBD do failover.

Your advice (and Veit's) was spot on. I rebuilt everything with LVM
under drbd instead of over it, added the appropriate filter in
lvm.conf, and rebuilt my initramfs, and everything is working great.
Failover works as expected without volume activation problems, and I
could snapshot the drbd volume and mount it as a filesystem. You all
hit it out of the park. Thanks!
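A minimal sketch of that backup flow, with hypothetical names (VG
vg_under_drbd0, LV lv_under_drbd0, mount point /mnt/backup). It is a
dry-run that only prints the commands. One caveat: FLUSH TABLES WITH
READ LOCK only holds while the issuing MySQL session stays open, so a
real script must keep that session alive across the snapshot.

```shell
#!/bin/sh
# Dry-run sketch: snapshot the LV *below* DRBD while MySQL is quiesced.
# All names are illustrative. "run" only prints; swap for run() { "$@"; }.
run() { echo "+ $*"; }

run mysql -e 'FLUSH TABLES WITH READ LOCK'   # quiesce (session must stay open!)
run lvcreate -s -n lv_backup -L 5G vg_under_drbd0/lv_under_drbd0
run mysql -e 'UNLOCK TABLES'
# libblkid may type the snapshot as "drbd", so force the fs type:
run mount -t ext4 -o ro /dev/vg_under_drbd0/lv_backup /mnt/backup
run rsync -a /mnt/backup/ archive:/backups/db/
run umount /mnt/backup
run lvremove -f vg_under_drbd0/lv_backup
```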
Re: [DRBD-user] linstor-proxmox-2.8
> Yes, "start" is pretty obvious and in the article. Sure, "enable" is
> also a good idea, but the interesting thing is: Did you really have to
> "unmask" it?

AFAIR yes, I had to unmask the service in order to enable and then
eventually start it. But perhaps this was true because I had installed
the linstor-server package first? Not sure. Perhaps when installing the
linstor-controller package you don't have to do this step... will have
to re-check.
Re: [DRBD-user] linstor-proxmox-2.8
On Mon, Jul 30, 2018 at 11:58:22AM +0100, Yannis Milios wrote:
> > > However, in your blog post you mention
> > > linstor-controller, linstor-satellite and linstor-client.
> >
> > That is what you should do.
>
> You are right, that's what I ended up doing and now everything works
> perfectly. In addition, I had to enable/start the linstor-satellite
> service on all satellite nodes.

Yes, that is in the blog post.

> and unmask/start the linstor-controller service on the controller node.

Yes, "start" is pretty obvious and in the article. Sure, "enable" is
also a good idea, but the interesting thing is: Did you really have to
"unmask" it?
Re: [DRBD-user] linstor-proxmox-2.8
On Mon, 30 Jul 2018 at 09:09, Roland Kammerer wrote:
> On Fri, Jul 27, 2018 at 01:52:55PM +0100, Yannis Milios wrote:
> > One last thing I forgot to mention in the last post is ...
> >
> > When creating a VM or CT via the PVE webgui it fails with the below:
> >
> > https://privatebin.net/?dd4373728501c9eb#FsTXbEfRh43WIV4q7tO5wnm0HdW0O/gJbwavrYCgkeE=
>
> Okay, that is something for the LINSTOR people to look into. Maybe that
> happened because of the "linstor-server" vs.
> "linstor-controller/linstor-satellite" confusion and the according
> service was not started there.

Yes, that's correct. It happens only with the linstor-server package.
As soon as you remove it and install linstor-satellite and
linstor-controller, it works as expected.
Re: [DRBD-user] linstor-proxmox-2.8
> > However, in your blog post you mention
> > linstor-controller, linstor-satellite and linstor-client.
>
> That is what you should do.

You are right, that's what I ended up doing and now everything works
perfectly. In addition, I had to enable/start the linstor-satellite
service on all satellite nodes and unmask/start the linstor-controller
service on the controller node.

> Forget about the "linstor-server" package. Never ever use it (on Debian
> based systems).

Ok, will do.

BR
Y
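The service handling described above boils down to a few systemctl
calls. This is a dry-run sketch (it only prints the commands); whether
"unmask" is actually needed seems to depend on whether linstor-server
was installed first, as discussed in this thread.

```shell
#!/bin/sh
# Dry-run sketch of the LINSTOR service setup discussed in the thread.
# "run" only prints; change to run() { "$@"; } to execute.
run() { echo "+ $*"; }

# On the controller node (unmask only matters if the unit was masked,
# e.g. by a previous linstor-server install):
run systemctl unmask linstor-controller
run systemctl enable --now linstor-controller

# On every satellite node:
run systemctl enable --now linstor-satellite
```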
Re: [DRBD-user] [Pacemaker] Pacemaker unable to start DRBD
On Tue, Jul 24, 2018 at 07:27:28AM +, Jaco van Niekerk wrote:
> Hi
>
> I am using the following packages:
> pcs-0.9.162-5.el7.centos.1.x86_64
> kmod-drbd84-8.4.11-1.el7_5.elrepo.x86_64
> drbd84-utils-9.3.1-1.el7.elrepo.x86_64
> pacemaker-1.1.18-11.el7_5.3.x86_64
> corosync-2.4.3-2.el7_5.1.x86_64
> targetcli-2.1.fb46-6.el7_5.noarch
> kernel-3.10.0-862.9.1.el7.x86_64
>
> my /etc/drbd.conf:
> global { usage-count no; }
> common { protocol C; }
> resource imagesdata {
>     on node1.san.localhost {
>         device /dev/drbd0;
>         disk /dev/vg_drbd/lv_drbd;
>         address 192.168.0.2:7789;
>         meta-disk internal;
>     }
>     on node2.san.localhost {
>         device /dev/drbd0;
>         disk /dev/vg_drbd/lv_drbd;
>         address 192.168.0.3:7789;
>         meta-disk internal;
>     }
> }
>
> my /etc/corosync/corosync.conf:
> totem {
>     version: 2
>     secauth: off
>     cluster_name: san_cluster
>     transport: udpu
>     interface {
>         ringnumber: 0
>         bindnetaddr: 192.168.0.0
>         broadcast: yes
>         mcastport: 5405
>     }
> }
>
> nodelist {
>     node {
>         ring0_addr: node1.san.localhost
>         name: node1
>         nodeid: 1
>     }
>     node {
>         ring0_addr: node2.san.localhost
>         name: node2
>         nodeid: 2
>     }
> }
>
> quorum {
>     provider: corosync_votequorum
>     two_node: 1
>     wait_for_all: 1
>     last_man_standing: 1
>     auto_tie_breaker: 0
> }
>
> logging {
>     to_logfile: yes
>     logfile: /var/log/cluster/corosync.log
>     to_syslog: yes
> }
>
> DRBD starts perfectly fine and syncs as well.
> Pacemaker setup:
> pcs cluster auth node1.san.localdomain node2.san.localdomain -u hacluster -p PASSWORD
> pcs cluster setup --name san_cluster node1.san.localdomain node2.san.localdomain
> pcs cluster start --all
> pcs cluster enable --all
> pcs property set stonith-enabled=false
> pcs property set no-quorum-policy=ignore
>
> The following command doesn't work:
> pcs resource create my_iscsidata ocf:linbit:drbd drbd_resource=iscsidata op monitor interval=10s
> pcs resource master MyISCSIClone my_iscsidata master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
>
> I receive the following on pcs status:
> * my_iscsidata_monitor_0 on node2.san.localhost 'not configured' (6):
>   call=9, status=complete, exitreason='meta parameter misconfigured,
>   expected clone-max -le 2, but found unset.',

Well, yes, that is expected.

"Prepare" the pacemaker configuration in a "shadow cib", then "commit"
(or "cib push", in pcs speak) the finished config as one into the live
cib. Don't take shortcuts omitting the "shadow cib".

-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT

__ please don't Cc me, but send to list -- I'm subscribed
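A sketch of the shadow-CIB workflow Lars describes, in pcs terms
(dry-run, printing only; resource names taken from the post above, the
working-file name drbd_cfg is invented). The point is that the
primitive and its master wrapper reach the live CIB in one push, so the
monitor never sees a half-configured resource.

```shell
#!/bin/sh
# Dry-run sketch: prepare the config in a file, push it as one transaction.
# "run" only prints; change to run() { "$@"; } to execute.
run() { echo "+ $*"; }

run pcs cluster cib drbd_cfg      # dump the live CIB into a working file
run pcs -f drbd_cfg resource create my_iscsidata ocf:linbit:drbd \
    drbd_resource=iscsidata op monitor interval=10s
run pcs -f drbd_cfg resource master MyISCSIClone my_iscsidata \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
run pcs cluster cib-push drbd_cfg # commit everything at once
```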
Re: [DRBD-user] drbd+lvm no bueno
On Fri, Jul 27, 2018 at 10:45:46AM +, Eric Robinson wrote:
> > > Lars,
> > >
> > > I put MySQL databases on the drbd volume. To back them up, I pause
> > > them and do LVM snapshots (then rsync the snapshots to an archive
> > > server). How could I do that with LVM below drbd, since what I want
> > > is a snapshot of the filesystem where MySQL lives?
> >
> > You just snapshot below DRBD, after "quiescing" the mysql db.
> >
> > DRBD is transparent, the "garbage" (to the filesystem) of the
> > "trailing drbd meta data" is of no concern.
> > You may have to "mount -t ext4" (or xfs or whatever), if your mount
> > and libblkid decide that this was a "drbd" type and could not be
> > mounted. They are just trying to help, really.
> > Which is good. But in that case they get it wrong.
>
> Okay, just so I understand...
>
> Suppose I turn md4 into a PV and create one volume group
> 'vg_under_drbd0', and logical volume 'lv_under_drbd0' that takes 95%
> of the space, leaving 5% for snapshots.
>
> Then I create my ext4 filesystem directly on drbd0.
>
> At backup time, I quiesce the MySQL instances and create a snapshot of
> the drbd disk.
>
> I can then mount the drbd snapshot as a filesystem?

Yes.
Though obviously, those snapshots won't "failover", in case you have a
node failure and failover during the backup.
Snapshots in a VG "on top of" DRBD do failover.

> > > How severely does putting LVM on top of drbd affect performance?
> >
> > It's not the "putting LVM on top of drbd" part.
> > It's what most people think when doing that:
> > use a huge single DRBD as PV, and put loads of unrelated LVs inside
> > of that.
> >
> > Which then all share the single DRBD "activity log" of the single
> > DRBD volume, which then becomes a bottleneck for IOPS.
>
> I currently have one big drbd disk with one volume group over it and
> one logical volume that takes up 95% of the space, leaving 5% of the
> volume group for snapshots.
>
> I run multiple instances of MySQL out of different directories. I
> don't see a way to avoid the activity log bottleneck problem.

One LV -> DRBD volume -> filesystem per DB instance.

If the DBs are "logically related", have all volumes in one DRBD
resource. If not, separate DRBD resources, one volume each.

But whether or not that would help in your setup depends very much on
the typical size of the changing "working set" of the DBs.

-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT

__ please don't Cc me, but send to list -- I'm subscribed
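As an illustration of "one volume each": a hypothetical drbd.conf
fragment with two independent resources, each backed by its own LV, so
each gets its own activity log. All resource names, device paths and
addresses below are invented, not from the thread.

```
# Sketch: one DRBD resource per DB instance (names/addresses illustrative)
resource db1 {
    device /dev/drbd1;
    disk /dev/vg0/lv_db1;
    meta-disk internal;
    on node1 { address 192.168.0.2:7790; }
    on node2 { address 192.168.0.3:7790; }
}
resource db2 {
    device /dev/drbd2;
    disk /dev/vg0/lv_db2;
    meta-disk internal;
    on node1 { address 192.168.0.2:7791; }
    on node2 { address 192.168.0.3:7791; }
}
```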
Re: [DRBD-user] linstor-proxmox-2.8
On Fri, Jul 27, 2018 at 01:52:55PM +0100, Yannis Milios wrote:
> One last thing I forgot to mention in the last post is ...
>
> When creating a VM or CT via the PVE webgui it fails with the below:
>
> https://privatebin.net/?dd4373728501c9eb#FsTXbEfRh43WIV4q7tO5wnm0HdW0O/gJbwavrYCgkeE=

Okay, that is something for the LINSTOR people to look into. Maybe that
happened because of the "linstor-server" vs.
"linstor-controller/linstor-satellite" confusion and the according
service was not started there.

> Did some investigation on the linstor side and realised that a
> possible problem could be the following:
>
> root@pve3:~# linstor r c pve2 vm-101-disk-1
> ERROR:
> Description:
>     The default storage pool 'DfltStorPool' for resource
>     'vm-101-disk-1' for volume number '0' is not deployed on node
>     'pve2'.
> Details:
>     The resource which should be deployed had at least one volume
>     definition (volume number '0') which LinStor tried to
>     automatically create. The default storage pool's name for this new
>     volume was looked for in its volume definition's properties, its
>     resource's properties, its node's properties and finally in a
>     system wide default storage pool name defined by the LinStor
>     controller.
> Node: pve2, Resource: vm-101-disk-1
>
> If I specify the '--storage-pool drbdpool' option on 'linstor r c pve2
> vm-101-disk-1', then the resource is assigned properly to the cluster
> node.
>
> Could this be the problem that PVE fails as well?

No. The plugin creates the storage exactly once, after that it only
creates/removes diskless assignments. When it creates the volume, it
uses "--auto-place". This is clever enough to look through your pools
and take the one (per node) that fits best. We assume you only have one
per node in a Proxmox setup. If you have multiple, one will be chosen.

If you do the resource creation manually (which you did in the second
step), then LINSTOR wants to know the storage pool. But "manual
assignment" and using "--auto-place" are different things.

Regards, rck
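The two creation paths Roland contrasts, as a dry-run sketch (printing
only). The node, resource and pool names come from the thread; the
auto-place count of 2 is illustrative.

```shell
#!/bin/sh
# Dry-run sketch: manual assignment vs. auto-place in LINSTOR.
# "run" only prints; change to run() { "$@"; } to execute.
run() { echo "+ $*"; }

# Manual assignment: the storage pool must be named explicitly.
run linstor resource create pve2 vm-101-disk-1 --storage-pool drbdpool

# Auto-place: LINSTOR itself picks a fitting pool on each chosen node.
run linstor resource create vm-101-disk-1 --auto-place 2
```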
Re: [DRBD-user] drbd diskless
On Fri, Jul 27, 2018 at 02:39:23PM +0300, Vaggelis Papastavros wrote:
> Dear friends,
>
> i have installed drbd 9.0.8-1
>
> cat /proc/drbd
> version: 9.0.8-1 (api:2/proto:86-112)
> GIT-hash: c8bc36701c7c7ffc2c208f620c6d89e4ec265704 build by phil@Build64R7,
> 2017-06-30 15:43:01

brrr. That is *ways* too old. This version contains many known bugs.
Don't use it. Really, I mean it.

> [root@sgw-01 ~]# drbdadm attach customer_vms
> 0: Failure: (127) Device minor not allocated
> additional info from kernel:
> unknown minor
> Command 'drbdsetup attach 0 /dev/sda2 /dev/sda2 internal
> --on-io-error=detach' terminated with exit code 10

Looks like you started with the "attach"? What do you get on "up", or
"adjust"?

Regards, rck
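For completeness, the commands Roland is asking about, as a dry-run
sketch (printing only): "up" brings the whole resource (minor, disk,
connection) into existence and "adjust" reconciles the kernel state
with the config, which is why a bare "attach" can fail with "unknown
minor" when the minor was never created.

```shell
#!/bin/sh
# Dry-run sketch; "run" only prints. Change to run() { "$@"; } to execute.
run() { echo "+ $*"; }

run drbdadm up customer_vms       # create minor, attach disk, connect
run drbdadm adjust customer_vms   # sync kernel state with drbd.conf
run drbdadm status customer_vms   # inspect the result (DRBD 9)
```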
[DRBD-user] drbd diskless
Dear friends,

i have installed drbd 9.0.8-1

cat /proc/drbd
version: 9.0.8-1 (api:2/proto:86-112)
GIT-hash: c8bc36701c7c7ffc2c208f620c6d89e4ec265704 build by phil@Build64R7, 2017-06-30 15:43:01
Transports (api:15): tcp (1.0.0)

The drbd volume is a cluster resource under pacemaker, and suddenly
after a node reboot its state is diskless. I checked the log files and
no split-brain situation exists. I have tried everything (disconnect,
detach etc. on the resource) from the documentation without a solution.

When I try to manually attach the resource, the driver responds with
the following:

[root@sgw-01 ~]# drbdadm attach customer_vms
0: Failure: (127) Device minor not allocated
additional info from kernel:
unknown minor
Command 'drbdsetup attach 0 /dev/sda2 /dev/sda2 internal --on-io-error=detach' terminated with exit code 10

I can't understand the reason for the above situation or how to solve
it. Can you help me?

Sincerely
Re: [DRBD-user] linstor-proxmox-2.8
On Fri, Jul 27, 2018 at 01:41:54PM +0100, Yannis Milios wrote:
> - What's the difference between installing the linstor-server package
> only (which includes linstor-controller and linstor-satellite) and
> installing linstor-controller and linstor-satellite separately?

For Debian (we did not do the package split for rpm based
distributions), you should install linstor-controller and
linstor-satellite. The linstor-server package is outdated and should
not be used. Never. Probably we should remove them from the repos. If
you use them, you get behavior you don't want. For example I saw it
created the DB in /opt/..., which should not happen.

> In the Linstor documentation, it is mentioned that the linstor-server
> package should be installed on all nodes.

We have to update that.

> However, in your blog post you mention
> linstor-controller, linstor-satellite and linstor-client.

That is what you should do.

> Then later, you mention 'systemctl start linstor-server' which does
> not exist if you don't install the linstor-server package.

Mistake in the blog post. Will fix that.

> If you try to install controller, satellite and server at the same
> time, the installation fails with an error creating the controller and
> satellite systemd units. Which of the above is the correct approach?

Forget about the "linstor-server" package. Never ever use it (on Debian
based systems).

Regards, rck
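On a Debian-based node, the install Roland recommends would look
roughly like this (dry-run sketch, printing only; repository setup not
shown):

```shell
#!/bin/sh
# Dry-run sketch; "run" only prints. Change to run() { "$@"; } to execute.
run() { echo "+ $*"; }

# Controller node:
run apt install linstor-controller linstor-client
# Satellite nodes:
run apt install linstor-satellite
# Never: apt install linstor-server  (outdated, do not use on Debian)
```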