Re: [DRBD-user] drbd+lvm no bueno

2018-07-30 Thread Eric Robinson
> > > > Lars,
> > > >
> > > > I put MySQL databases on the drbd volume. To back them up, I pause
> > > > them and do LVM snapshots (then rsync the snapshots to an archive
> > > > server). How could I do that with LVM below drbd, since what I
> > > > want is a snapshot of the filesystem where MySQL lives?
> > >
> > > You just snapshot below DRBD, after "quiescing" the mysql db.
> > >
> > > DRBD is transparent, the "garbage" (to the filesystem) of the
> > > "trailing drbd meta data" is of no concern.
> > > You may have to "mount -t ext4" (or xfs or whatever), if your mount
> > > and libblkid decide that this was a "drbd" type and could not be
> > > mounted. They are just trying to help, really, which is good, but in
> > > that case they get it wrong.
> >
> > Okay, just so I understand...
> >
> > Suppose I turn md4 into a PV and create one volume group
> > 'vg_under_drbd0', and logical volume 'lv_under_drbd0' that takes 95%
> > of the space, leaving 5% for snapshots.
> >
> > Then I create my ext4 filesystem directly on drbd0.
> >
> > At backup time, I quiesce the MySQL instances and create a snapshot of
> > the drbd disk.
> >
> > I can then mount the drbd snapshot as a filesystem?
> 
> Yes.
> Though obviously, those snapshots won't "failover", in case you have a node
> failure and failover during the backup.
> Snapshots in a VG "on top of" DRBD do failover.

Your advice (and Veit's) was spot on. I rebuilt everything with LVM under drbd 
instead of over it, added the appropriate filter in lvm.conf, and rebuilt my 
initramfs. Everything is working great: failover works as expected without 
volume activation problems, and I was able to snapshot the drbd volume and 
mount it as a filesystem. You all hit it out of the park. Thanks!




Re: [DRBD-user] drbd+lvm no bueno

2018-07-30 Thread Lars Ellenberg
On Fri, Jul 27, 2018 at 10:45:46AM +, Eric Robinson wrote:
> > > Lars,
> > >
> > > I put MySQL databases on the drbd volume. To back them up, I pause
> > > them and do LVM snapshots (then rsync the snapshots to an archive
> > > server). How could I do that with LVM below drbd, since what I want is
> > > a snapshot of the filesystem where MySQL lives?
> > 
> > You just snapshot below DRBD, after "quiescing" the mysql db.
> > 
> > DRBD is transparent, the "garbage" (to the filesystem) of the "trailing drbd
> > meta data" is of no concern.
> > You may have to "mount -t ext4" (or xfs or whatever), if your mount and
> > libblkid decide that this was a "drbd" type and could not be mounted. They
> > are just trying to help, really, which is good, but in that case they get
> > it wrong.
> 
> Okay, just so I understand...
> 
> Suppose I turn md4 into a PV and create one volume group
> 'vg_under_drbd0', and logical volume 'lv_under_drbd0' that takes 95%
> of the space, leaving 5% for snapshots.
> 
> Then I create my ext4 filesystem directly on drbd0.
> 
> At backup time, I quiesce the MySQL instances and create a snapshot of
> the drbd disk.
> 
> I can then mount the drbd snapshot as a filesystem?   

Yes.
Though obviously, those snapshots won't "failover",
in case you have a node failure and failover during the backup.
Snapshots in a VG "on top of" DRBD do failover.
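
For illustration, a backup pass with LVM below DRBD might then look roughly
like this (a sketch only: the VG/LV names come from this thread, while the
snapshot size, mountpoint, archive target, and quiesce method are assumed):

  # 1. quiesce MySQL by whatever means you already use ("pause" it)
  # 2. snapshot the LV *below* DRBD; 30G of CoW space is an arbitrary choice
  lvcreate -s -L 30G -n lv_backup_snap /dev/vg_under_drbd0/lv_under_drbd0
  # 3. resume MySQL
  # 4. mount the snapshot; libblkid may call it "drbd", so force the fs type
  mount -t ext4 -o ro /dev/vg_under_drbd0/lv_backup_snap /mnt/backup
  # 5. ship it to the archive server, then clean up
  rsync -a /mnt/backup/ archive:/backups/ha01_mysql/
  umount /mnt/backup
  lvremove -f /dev/vg_under_drbd0/lv_backup_snap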

> > > How severely does putting LVM on top of drbd affect performance?
> > 
> > It's not the "putting LVM on top of drbd" part,
> > it's what most people think when doing that:
> > use a huge single DRBD as PV, and put loads of unrelated LVs inside of that.
> > 
> > Which then all share the single DRBD "activity log" of the single DRBD
> > volume, which then becomes a bottleneck for IOPS.
> > 
> 
> I currently have one big drbd disk with one volume group over it and
> one logical volume that takes up 95% of the space, leaving 5% of the
> volume group for snapshots. I run multiple instances of MySQL out of
> different directories. I don't see a way to avoid the activity log
> bottleneck problem.

One LV -> DRBD Volume -> Filesystem per DB instance.
If the DBs are "logically related", have all volumes in one DRBD
resource. If not, separate DRBD resources, one volume each.

But whether or not that would help in your setup depends very much on
the typical size of the changing "working set" of the DBs.
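
A sketch of that layout in drbd.conf syntax, with all names, minors, and
addresses assumed (each volume carries its own internal metadata and thus
its own activity log; DRBD 9 also wants node-ids):

  resource mysql_shared {
      volume 0 {
          device    /dev/drbd10;
          disk      /dev/vg_under_drbd0/lv_db1;
          meta-disk internal;
      }
      volume 1 {
          device    /dev/drbd11;
          disk      /dev/vg_under_drbd0/lv_db2;
          meta-disk internal;
      }
      on 001db01a { node-id 0; address 10.0.0.1:7790; }
      on 001db01b { node-id 1; address 10.0.0.2:7790; }
  }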

-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT
__
please don't Cc me, but send to list -- I'm subscribed


Re: [DRBD-user] drbd+lvm no bueno

2018-07-28 Thread Igor Cicimov
On Sun, 29 Jul 2018 1:13 am Eric Robinson wrote:

>
>
>
>
> > -Original Message-
> > From: Eric Robinson
> > Sent: Saturday, July 28, 2018 7:39 AM
> > To: Lars Ellenberg ;
> drbd-user@lists.linbit.com
> > Subject: RE: [DRBD-user] drbd+lvm no bueno
> >
> > > > > Lars,
> > > > >
> > > > > I put MySQL databases on the drbd volume. To back them up, I pause
> > > > > them and do LVM snapshots (then rsync the snapshots to an archive
> > > > > server). How could I do that with LVM below drbd, since what I
> > > > > want is a snapshot of the filesystem where MySQL lives?
> > > >
> > > > You just snapshot below DRBD, after "quiescing" the mysql db.
> > > >
> > > > DRBD is transparent, the "garbage" (to the filesystem) of the
> > > > "trailing drbd meta data" is of no concern.
> > > > You may have to "mount -t ext4" (or xfs or whatever), if your mount
> > > > and libblkid decide that this was a "drbd" type and could not be
> > > > mounted. They are just trying to help, really, which is good, but in
> > > > that case they get it wrong.
> > >
> > > Okay, just so I understand...
> > >
> > > Suppose I turn md4 into a PV and create one volume group
> > > 'vg_under_drbd0', and logical volume 'lv_under_drbd0' that takes 95%
> > > of the space, leaving 5% for snapshots.
> > >
> > > Then I create my ext4 filesystem directly on drbd0.
> > >
> > > At backup time, I quiesce the MySQL instances and create a snapshot of
> > > the drbd disk.
> > >
> > > I can then mount the drbd snapshot as a filesystem?
> > >
> >
> > Disregard question. I tested it. Works fine. Mind blown.
> >
> > -Eric
> >
>
> Although I discovered quite by accident that you can mount a snapshot over
> the top of the filesystem that exists on the device that it's a snapshot
> of. Wouldn't this create some sort of recursive write death spiral?
>

I think this can help you properly understand LVM snapshots and answer your
question: https://www.clevernetsystems.com/lvm-snapshots-explained/
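
(For what it's worth, the usual pattern is to mount the snapshot read-only
at its own mountpoint instead of over the live filesystem; a tiny sketch,
with the mountpoint assumed:

  mkdir -p /mnt/drbd0_snap
  mount -o ro /dev/vg_under_drbd0/drbd0_snapshot /mnt/drbd0_snap

Mounting over /ha01_mysql merely shadows the lower mount until the umount;
writes to the snapshot land in its CoW area, not in the original LV.)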


> Check it out...
>
> [root@001db01a /]# lvdisplay
>   --- Logical volume ---
>   LV Path                /dev/vg_under_drbd1/lv_under_drbd1
>   LV Name                lv_under_drbd1
>   VG Name                vg_under_drbd1
>   LV UUID                LWWPiL-Y6nR-cNnW-j2E9-LAK9-UsXm-3inTyJ
>   LV Write Access        read/write
>   LV Creation host, time 001db01a, 2018-07-28 04:53:14 +
>   LV Status              available
>   # open                 2
>   LV Size                1.40 TiB
>   Current LE             367002
>   Segments               1
>   Allocation             inherit
>   Read ahead sectors     auto
>   - currently set to     8192
>   Block device           253:1
>
>   --- Logical volume ---
>   LV Path                /dev/vg_under_drbd0/lv_under_drbd0
>   LV Name                lv_under_drbd0
>   VG Name                vg_under_drbd0
>   LV UUID                M2oMNd-hots-d9Pf-KQG8-YPqh-6x3a-r6wBqo
>   LV Write Access        read/write
>   LV Creation host, time 001db01a, 2018-07-28 04:52:59 +
>   LV Status              available
>   # open                 2
>   LV Size                1.40 TiB
>   Current LE             367002
>   Segments               1
>   Allocation             inherit
>   Read ahead sectors     auto
>   - currently set to     8192
>   Block device           253:0
>
> [root@001db01a /]# df -h
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sda2        30G  3.3G   27G  12% /
> devtmpfs         63G     0   63G   0% /dev
> tmpfs            63G     0   63G   0% /dev/shm
> tmpfs            63G  9.0M   63G   1% /run
> tmpfs            63G     0   63G   0% /sys/fs/cgroup
> /dev/sda1       497M   78M  420M  16% /boot
> /dev/sdb1       252G   61M  239G   1% /mnt/resource
> tmpfs            13G     0   13G   0% /run/user/0
> /dev/drbd0      1.4T  2.1G  1.4T   1% /ha01_mysql
> [root@001db01a /]#
> [root@001db01a /]# ls /ha01_mysql
> lost+found  testfile
> [root@001db01a /]#
> [root@001db01a /]# lvcreate -s -L30G -n drbd0_snapshot
> /dev/vg_under_drbd0/lv_under_drbd0
>   Logical volume "drbd0_snapshot" created.
> [root@001db01a /]#
> [root@001db01a /]# mount /dev/vg_under_drbd0/drbd0_snapshot /ha01_mysql
> [root@001db01a /]#
> [root@001db01a /]# cd /ha01_mysql
> [root@001db01a ha01_mysql]# ls
> lost+found  testfile
> [root@001db01a ha01_mysql]# echo blah > blah.txt
> [ro

Re: [DRBD-user] drbd+lvm no bueno

2018-07-28 Thread Eric Robinson





> -Original Message-
> From: Eric Robinson
> Sent: Saturday, July 28, 2018 7:39 AM
> To: Lars Ellenberg ; drbd-user@lists.linbit.com
> Subject: RE: [DRBD-user] drbd+lvm no bueno
> 
> > > > Lars,
> > > >
> > > > I put MySQL databases on the drbd volume. To back them up, I pause
> > > > them and do LVM snapshots (then rsync the snapshots to an archive
> > > > server). How could I do that with LVM below drbd, since what I
> > > > want is a snapshot of the filesystem where MySQL lives?
> > >
> > > You just snapshot below DRBD, after "quiescing" the mysql db.
> > >
> > > DRBD is transparent, the "garbage" (to the filesystem) of the
> > > "trailing drbd meta data" is of no concern.
> > > You may have to "mount -t ext4" (or xfs or whatever), if your mount
> > > and libblkid decide that this was a "drbd" type and could not be
> > > mounted. They are just trying to help, really, which is good, but in
> > > that case they get it wrong.
> >
> > Okay, just so I understand...
> >
> > Suppose I turn md4 into a PV and create one volume group
> > 'vg_under_drbd0', and logical volume 'lv_under_drbd0' that takes 95%
> > of the space, leaving 5% for snapshots.
> >
> > Then I create my ext4 filesystem directly on drbd0.
> >
> > At backup time, I quiesce the MySQL instances and create a snapshot of
> > the drbd disk.
> >
> > I can then mount the drbd snapshot as a filesystem?
> >
> 
> Disregard question. I tested it. Works fine. Mind blown.
> 
> -Eric
> 

Although I discovered quite by accident that you can mount a snapshot over the 
top of the filesystem that exists on the device that it's a snapshot of. 
Wouldn't this create some sort of recursive write death spiral?

Check it out...

[root@001db01a /]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_under_drbd1/lv_under_drbd1
  LV Name                lv_under_drbd1
  VG Name                vg_under_drbd1
  LV UUID                LWWPiL-Y6nR-cNnW-j2E9-LAK9-UsXm-3inTyJ
  LV Write Access        read/write
  LV Creation host, time 001db01a, 2018-07-28 04:53:14 +
  LV Status              available
  # open                 2
  LV Size                1.40 TiB
  Current LE             367002
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/vg_under_drbd0/lv_under_drbd0
  LV Name                lv_under_drbd0
  VG Name                vg_under_drbd0
  LV UUID                M2oMNd-hots-d9Pf-KQG8-YPqh-6x3a-r6wBqo
  LV Write Access        read/write
  LV Creation host, time 001db01a, 2018-07-28 04:52:59 +
  LV Status              available
  # open                 2
  LV Size                1.40 TiB
  Current LE             367002
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

[root@001db01a /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        30G  3.3G   27G  12% /
devtmpfs         63G     0   63G   0% /dev
tmpfs            63G     0   63G   0% /dev/shm
tmpfs            63G  9.0M   63G   1% /run
tmpfs            63G     0   63G   0% /sys/fs/cgroup
/dev/sda1       497M   78M  420M  16% /boot
/dev/sdb1       252G   61M  239G   1% /mnt/resource
tmpfs            13G     0   13G   0% /run/user/0
/dev/drbd0      1.4T  2.1G  1.4T   1% /ha01_mysql
[root@001db01a /]#
[root@001db01a /]# ls /ha01_mysql
lost+found  testfile
[root@001db01a /]#
[root@001db01a /]# lvcreate -s -L30G -n drbd0_snapshot 
/dev/vg_under_drbd0/lv_under_drbd0
  Logical volume "drbd0_snapshot" created.
[root@001db01a /]#
[root@001db01a /]# mount /dev/vg_under_drbd0/drbd0_snapshot /ha01_mysql
[root@001db01a /]#
[root@001db01a /]# cd /ha01_mysql
[root@001db01a ha01_mysql]# ls
lost+found  testfile
[root@001db01a ha01_mysql]# echo blah > blah.txt
[root@001db01a ha01_mysql]# ll
total 2097172
-rw-r--r--. 1 root root  5 Jul 28 14:50 blah.txt
drwx--. 2 root root  16384 Jul 28 14:10 lost+found
-rw-r--r--. 1 root root 2147479552 Jul 28 14:20 testfile
[root@001db01a ha01_mysql]# cd /
[root@001db01a /]# umount /ha01_mysql
[root@001db01a /]# ls /ha01_mysql
lost+found  testfile
[root@001db01a /]#

What? I know nothing.

--Eric


Re: [DRBD-user] drbd+lvm no bueno

2018-07-28 Thread Eric Robinson
> > > Lars,
> > >
> > > I put MySQL databases on the drbd volume. To back them up, I pause
> > > them and do LVM snapshots (then rsync the snapshots to an archive
> > > server). How could I do that with LVM below drbd, since what I want
> > > is a snapshot of the filesystem where MySQL lives?
> >
> > You just snapshot below DRBD, after "quiescing" the mysql db.
> >
> > DRBD is transparent, the "garbage" (to the filesystem) of the
> > "trailing drbd meta data" is of no concern.
> > You may have to "mount -t ext4" (or xfs or whatever), if your mount
> > and libblkid decide that this was a "drbd" type and could not be
> > mounted. They are just trying to help, really, which is good, but in
> > that case they get it wrong.
> 
> Okay, just so I understand...
> 
> Suppose I turn md4 into a PV and create one volume group 'vg_under_drbd0',
> and logical volume 'lv_under_drbd0' that takes 95% of the space, leaving 5%
> for snapshots.
> 
> Then I create my ext4 filesystem directly on drbd0.
> 
> At backup time, I quiesce the MySQL instances and create a snapshot of the
> drbd disk.
> 
> I can then mount the drbd snapshot as a filesystem?
> 

Disregard question. I tested it. Works fine. Mind blown.

-Eric




Re: [DRBD-user] drbd+lvm no bueno

2018-07-27 Thread Eric Robinson
> > Lars,
> >
> > I put MySQL databases on the drbd volume. To back them up, I pause
> > them and do LVM snapshots (then rsync the snapshots to an archive
> > server). How could I do that with LVM below drbd, since what I want is
> > a snapshot of the filesystem where MySQL lives?
> 
> You just snapshot below DRBD, after "quiescing" the mysql db.
> 
> DRBD is transparent, the "garbage" (to the filesystem) of the "trailing drbd
> meta data" is of no concern.
> You may have to "mount -t ext4" (or xfs or whatever), if your mount and
> libblkid decide that this was a "drbd" type and could not be mounted. They are
> just trying to help, really, which is good, but in that case they get it
> wrong.

Okay, just so I understand...

Suppose I turn md4 into a PV and create one volume group 'vg_under_drbd0', and 
logical volume 'lv_under_drbd0' that takes 95% of the space, leaving 5% for 
snapshots.

Then I create my ext4 filesystem directly on drbd0.

At backup time, I quiesce the MySQL instances and create a snapshot of the drbd 
disk.

I can then mount the drbd snapshot as a filesystem?   
 
> 
> > How severely does putting LVM on top of drbd affect performance?
> 
> It's not the "putting LVM on top of drbd" part,
> it's what most people think when doing that:
> use a huge single DRBD as PV, and put loads of unrelated LVs inside of that.
> 
> Which then all share the single DRBD "activity log" of the single DRBD volume,
> which then becomes a bottleneck for IOPS.
> 

I currently have one big drbd disk with one volume group over it and one 
logical volume that takes up 95% of the space, leaving 5% of the volume group 
for snapshots. I run multiple instances of MySQL out of different directories. 
I don't see a way to avoid the activity log bottleneck problem.




Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Lars Ellenberg
On Thu, Jul 26, 2018 at 05:51:40PM +, Eric Robinson wrote:
> > But really, most of the time, you really want LVM *below* DRBD, and NOT
> > above it. Even though it may "appear" to be convenient, it is usually not
> > what you want, for various reasons, one of them being performance.
> 
> Lars,
> 
> I put MySQL databases on the drbd volume. To back them up, I pause
> them and do LVM snapshots (then rsync the snapshots to an archive
> server). How could I do that with LVM below drbd, since what I want is
> a snapshot of the filesystem where MySQL lives?

You just snapshot below DRBD, after "quiescing" the mysql db.

DRBD is transparent, the "garbage" (to the filesystem) of the "trailing
drbd meta data" is of no concern.
You may have to "mount -t ext4" (or xfs or whatever),
if your mount and libblkid decide that this was a "drbd" type
and could not be mounted. They are just trying to help, really,
which is good, but in that case they get it wrong.

> How severely does putting LVM on top of drbd affect performance?  

It's not the "putting LVM on top of drbd" part,
it's what most people think when doing that:
use a huge single DRBD as PV, and put loads of unrelated LVs
inside of that.

Which then all share the single DRBD "activity log" of the single DRBD
volume, which then becomes a bottleneck for IOPS.
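
If a rebuild is not an option right away, one knob that may soften (not
remove) that bottleneck is enlarging the activity log; a hedged sketch for
the resource's disk section, the value being illustrative only:

  disk {
      al-extents 6433;   # default is 1237; a larger AL means fewer
                         # metadata updates for scattered writes
  }
  # apply without downtime: drbdadm adjust <resource>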

-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Igor Cicimov
On Fri, Jul 27, 2018 at 3:51 AM, Eric Robinson wrote:

> > On Thu, Jul 26, 2018 at 04:32:17PM +0200, Veit Wahlich wrote:
> > > Hi Eric,
> > >
> > > Am Donnerstag, den 26.07.2018, 13:56 + schrieb Eric Robinson:
> > > > Would there really be a PV signature on the backing device? I didn't
> > > > turn md4 into a PV (did not run pvcreate /dev/md4), but I did turn
> > > > the drbd disk into one (pvcreate /dev/drbd1).
> >
> > Yes (please view in fixed-width font):
> >
> > | PV signature | VG extent pool                               |
> > | drbd1                                       | drbd metadata |
> > | md4                                           | md metadata |
> > | component | drives | . | . | ... of ... | md4 | . | . |
> >
> > > both DRBD and mdraid put their metadata at the end of the block
> > > device, thus depending on LVM configuration, both mdraid backing
> > > devices as well as DRBD minors backing VM disks with direct-on-disk PVs
> > > might be detected as PVs.
> > >
> > > It is very advisable to set lvm.conf's global_filter to allow only the
> > > desired devices as PVs by matching a strict regexp, and to ignore all
> > > other devices, e.g.:
> > >
> > >  global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]
> > >
> > > or even more strict:
> > >
> > >  global_filter = [ "a|^/dev/md4$|", "r/.*/" ]
> >
> > Uhm, no.
> > Not if he wants DRBD to be his PV...
> > then he needs to exclude (reject) the backend, and only include (accept)
> > the DRBD.
> >
> > But yes, I very much recommend putting an explicit whitelist of the
> > to-be-used PVs into the global filter, and rejecting anything else.
> >
> > Note that these are (by default unanchored) regexes, NOT glob patterns.
> > (Above examples get that one right, though r/./ would be enough...
> > but I have seen people get it wrong too many times, so I thought I'd
> > mention it here again)
> >
> > > After editing the configuration, you might want to regenerate your
> > > distro's initrd/initramfs to reflect the changes directly at startup.
> >
> > Yes, don't forget that step ^^^ that one is important as well.
> >
> > But really, most of the time, you really want LVM *below* DRBD, and NOT
> > above it. Even though it may "appear" to be convenient, it is usually
> > not what you want, for various reasons, one of them being performance.
>
> Lars,
>
> I put MySQL databases on the drbd volume. To back them up, I pause them
> and do LVM snapshots (then rsync the snapshots to an archive server). How
> could I do that with LVM below drbd, since what I want is a snapshot of the
> filesystem where MySQL lives?
>
> How severely does putting LVM on top of drbd affect performance?
>
> >
> > Cheers,
> >
> > --
> > : Lars Ellenberg


It depends. I would say it is not unusual to end up with a setup where DRBD
is sandwiched between top and bottom LVM due to requirements or
convenience. For example, in the case of master-master with GFS2:

iscsi,raid -> lvm -> drbd -> clvm -> gfs2

Apart from the clustered LVM on top of DRBD (which is what Red Hat
recommends), you also get the benefit of easily extending the DRBD
device(s) due to the underlying LVM.
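
A sketch of that grow path, with the names assumed (extend the backing LV
on both nodes first, then let DRBD and the filesystem pick it up):

  lvextend -L +100G vg0/lv_under_drbd0   # run on both nodes
  drbdadm resize r0                      # DRBD adopts the larger backing LV
  resize2fs /dev/drbd0                   # grow ext4 online, on the Primary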


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Veit Wahlich
Am Donnerstag, den 26.07.2018, 17:31 +0200 schrieb Lars Ellenberg:
> >  global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]
> > 
> > or even more strict: 
> > 
> >  global_filter = [ "a|^/dev/md4$|", "r/.*/" ]
> 
> Uhm, no.
> Not if he wants DRBD to be his PV...
> then he needs to exclude (reject) the backend,
> and only include (accept) the DRBD.

Ah yes, sorry. In my mind Eric used LVM below DRBD, just like you
recommended:

> But really, most of the time, you really want LVM *below* DRBD,

Regards,
// Veit



Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Lars Ellenberg
On Thu, Jul 26, 2018 at 04:32:17PM +0200, Veit Wahlich wrote:
> Hi Eric,
> 
> Am Donnerstag, den 26.07.2018, 13:56 + schrieb Eric Robinson:
> > Would there really be a PV signature on the backing device? I didn't
> > turn md4 into a PV (did not run pvcreate /dev/md4), but I did turn
> > the drbd disk into one (pvcreate /dev/drbd1).

Yes (please view in fixed-width font):

| PV signature | VG extent pool                               |
| drbd1                                       | drbd metadata |
| md4                                           | md metadata |
| component | drives | . | . | ... of ... | md4 | . | . |

> both DRBD and mdraid put their metadata at the end of the block device,
> thus depending on LVM configuration, both mdraid backing devices as well
> as DRBD minors backing VM disks with direct-on-disk PVs might be detected
> as PVs.
> 
> It is very advisable to set lvm.conf's global_filter to allow only the
> desired devices as PVs by matching a strict regexp, and to ignore all
> other devices, e.g.:
> 
>  global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]
> 
> or even more strict: 
> 
>  global_filter = [ "a|^/dev/md4$|", "r/.*/" ]

Uhm, no.
Not if he wants DRBD to be his PV...
then he needs to exclude (reject) the backend,
and only include (accept) the DRBD.

But yes, I very much recommend putting an explicit whitelist
of the to-be-used PVs into the global filter, and rejecting anything else.

Note that these are (by default unanchored) regexes, NOT glob patterns.
(Above examples get that one right, though r/./ would be enough...
but I have seen people get it wrong too many times, so I thought I'd
mention it here again)
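
For the DRBD-as-PV case discussed here, an anchored whitelist could look
like this (device pattern assumed; adjust it to your drbd minors):

  global_filter = [ "a|^/dev/drbd[0-9]+$|", "r|.*|" ]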

> After editing the configuration, you might want to regenerate your
> distro's initrd/initramfs to reflect the changes directly at startup.

Yes, don't forget that step ^^^ that one is important as well.

But really, most of the time, you really want LVM *below* DRBD,
and NOT above it. Even though it may "appear" to be convenient,
it is usually not what you want, for various reasons,
one of them being performance.

Cheers,

-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT
__
please don't Cc me, but send to list -- I'm subscribed


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Veit Wahlich
Hi Eric,

Am Donnerstag, den 26.07.2018, 13:56 + schrieb Eric Robinson:
> Would there really be a PV signature on the backing device? I didn't turn md4 
> into a PV (did not run pvcreate /dev/md4), but I did turn the drbd disk into 
> one (pvcreate /dev/drbd1).

both DRBD and mdraid put their metadata at the end of the block device,
thus depending on LVM configuration, both mdraid backing devices as well
as DRBD minors backing VM disks with direct-on-disk PVs might be detected
as PVs.

It is very advisable to set lvm.conf's global_filter to allow only the
desired devices as PVs by matching a strict regexp, and to ignore all
other devices, e.g.:

 global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]

or even more strict: 

 global_filter = [ "a|^/dev/md4$|", "r/.*/" ]

After editing the configuration, you might want to regenerate your
distro's initrd/initramfs to reflect the changes directly at startup.
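
The exact command is distribution-specific, for example:

  dracut -f                # RHEL/CentOS/Fedora
  update-initramfs -u      # Debian/Ubuntu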

Best regards,
// Veit



Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Eric Robinson


> -Original Message-
> From: drbd-user-boun...@lists.linbit.com [mailto:drbd-user-
> boun...@lists.linbit.com] On Behalf Of Robert Altnoeder
> Sent: Thursday, July 26, 2018 5:12 AM
> To: drbd-user@lists.linbit.com
> Subject: Re: [DRBD-user] drbd+lvm no bueno
> 
> On 07/26/2018 08:50 AM, Eric Robinson wrote:
> >
> >
> > Failed Actions:
> >
> > * p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68,
> > status=complete, exitreason='LVM: vg_on_drbd1 did not activate
> > correctly',
> >
> >     last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms
> >
> >
> >
> > The storage stack is:
> >
> >
> >
> > md4 -> drbd -> lvm -> filesystem
> >
> 
> This is most probably an LVM configuration error. Any LVM volume group on
> top of DRBD must be deactivated/stopped whenever DRBD is Secondary and
> must be started whenever DRBD is Primary, and LVM must be prevented from
> finding and using the storage device that DRBD uses as its backend, which it
> would normally do, because it can see the LVM physical volume signature not
> only on the DRBD device, but also on the backing device that DRBD uses.
> 

Would there really be a PV signature on the backing device? I didn't turn md4 
into a PV (did not run pvcreate /dev/md4), but I did turn the drbd disk into 
one (pvcreate /dev/drbd1). 

-Eric 


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Eric Robinson
Thank you, I will check that out.

From: Jaco van Niekerk [mailto:j...@desktop.co.za]
Sent: Thursday, July 26, 2018 3:34 AM
To: Eric Robinson ; drbd-user@lists.linbit.com
Subject: Re: [DRBD-user] drbd+lvm no bueno


Hi

Check your LVM configuration:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-exclusiveactive-haaa

Regards

Jaco van Niekerk

Office: 011 608 2663  E-mail: j...@desktop.co.za

On 26/07/2018 11:35, Eric Robinson wrote:
Using drbd 9.0.14, I am having trouble getting resources to move between 
nodes. I get...

Failed Actions:
* p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68, status=complete, 
exitreason='LVM: vg_on_drbd1 did not activate correctly',
last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms

The storage stack is:

md4 -> drbd -> lvm -> filesystem

--Eric



Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Robert Altnoeder
On 07/26/2018 08:50 AM, Eric Robinson wrote:
>
>
> Failed Actions:
>
> * p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68,
> status=complete, exitreason='LVM: vg_on_drbd1 did not activate correctly',
>
>     last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms
>
>  
>
> The storage stack is:
>
>  
>
> md4 -> drbd -> lvm -> filesystem
>

This is most probably an LVM configuration error. Any LVM volume group
on top of DRBD must be deactivated/stopped whenever DRBD is Secondary
and must be started whenever DRBD is Primary, and LVM must be prevented
from finding and using the storage device that DRBD uses as its backend,
which it would normally do, because it can see the LVM physical volume
signature not only on the DRBD device, but also on the backing device
that DRBD uses.
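
With Pacemaker, that activation rule is commonly expressed via the
ocf:heartbeat:LVM agent tied to the DRBD master role; a sketch in pcs
syntax, with the master/slave resource name ms_drbd1 assumed:

  pcs resource create p_vg_on_drbd1 ocf:heartbeat:LVM \
      volgrpname=vg_on_drbd1 exclusive=true
  pcs constraint colocation add p_vg_on_drbd1 with master ms_drbd1 INFINITY
  pcs constraint order promote ms_drbd1 then start p_vg_on_drbd1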

br,
Robert



Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Jaco van Niekerk
Hi

Check your LVM configuration:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-exclusiveactive-haaa

Regards

Jaco van Niekerk

Office: 011 608 2663  E-mail: j...@desktop.co.za

On 26/07/2018 11:35, Eric Robinson wrote:
Using drbd 9.0.14, I am having trouble getting resources to move between 
nodes. I get…

Failed Actions:
* p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68, status=complete, 
exitreason='LVM: vg_on_drbd1 did not activate correctly',
last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms

The storage stack is:

md4 -> drbd -> lvm -> filesystem

--Eric



[DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Eric Robinson
Using drbd 9.0.14, I am having trouble getting resources to move between 
nodes. I get...

Failed Actions:
* p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68, status=complete, 
exitreason='LVM: vg_on_drbd1 did not activate correctly',
last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms

The storage stack is:

md4 -> drbd -> lvm -> filesystem

--Eric