Re: [linux-lvm] Running thin_trim before activating a thin pool

2022-01-29 Thread Demi Marie Obenour
On Sat, Jan 29, 2022 at 10:40:34PM +0100, Zdenek Kabelac wrote:
> On 29. 01. 22 at 21:09, Demi Marie Obenour wrote:
> > On Sat, Jan 29, 2022 at 08:42:21PM +0100, Zdenek Kabelac wrote:
> > > On 29. 01. 22 at 19:52, Demi Marie Obenour wrote:
> > > > Is it possible to configure LVM2 so that it runs thin_trim before it
> > > > activates a thin pool?  Qubes OS currently runs blkdiscard on every thin
> > > > volume before deleting it, which is slow and unreliable.  Would running
> > > > thin_trim during system startup provide a better alternative?
> > > 
> > > Hi
> > > 
> > > 
> > > Nope, there is currently no support on the lvm2 side for this.
> > > Feel free to open an RFE.
> > 
> > Done: https://bugzilla.redhat.com/show_bug.cgi?id=2048160
> > 
> > 
> 
> Thanks
> 
> Although your use-case, thin-pool on top of VDO, is not really a good plan,
> and there is a good reason why lvm2 does not support this device stack
> directly (i.e. a thin-pool data LV placed on a VDO LV).
> I'd say you are stepping on very, very thin ice...

Thin pool on VDO is not my actual use-case.  The actual reason for the
ticket is slow discards of thin devices that are about to be deleted;
you can find more details in the linked GitHub issue.  That said, now I
am curious why you state that dm-thin on top of dm-vdo (that is,
userspace/filesystem/VM/etc ⇒ dm-thin data (*not* metadata) ⇒ dm-vdo ⇒
hardware/dm-crypt/etc) is a bad idea.  It seems to be a decent way to
add support for efficient snapshots of data stored on a VDO volume, and
to have multiple volumes on top of a single VDO volume.  Furthermore,
https://access.redhat.com/articles/2106521#vdo recommends exactly this
use-case.  Or am I misunderstanding you?

> Also I assume you have already checked performance of discard on VDO, but I
> would not want to run this operation frequently on any larger volume...

I have never actually used VDO myself, although the documentation does
warn about this.
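For anyone who wants to quantify it, a quick (and destructive) check could
be as simple as timing a full discard of a scratch LV sitting on VDO; the
names below are hypothetical:

    # measure how long a full discard of a VDO-backed volume takes
    # (WARNING: discards all data on the device)
    time blkdiscard /dev/vg_test/scratch_on_vdo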

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



Re: [linux-lvm] LVM performance vs direct dm-thin

2022-01-29 Thread Demi Marie Obenour
On Sat, Jan 29, 2022 at 10:32:52PM +0100, Zdenek Kabelac wrote:
> On 29. 01. 22 at 21:34, Demi Marie Obenour wrote:
> > How much slower are operations on an LVM2 thin pool compared to manually
> > managing a dm-thin target via ioctls?  I am mostly concerned about
> > volume snapshot, creation, and destruction.  Data integrity is very
> > important, so taking shortcuts that risk data loss is out of the
> > question.  However, the application may have some additional information
> > that LVM2 does not have.  For instance, it may know that the volume that
> > it is snapshotting is not in use, or that a certain volume it is
> > creating will never be used after power-off.
> > 
> 
> Hi
> 
> Short answer: it depends ;)
> 
> Longer story:
> If you want to create a few thins per hour - then it doesn't really matter.
> If you want to create a few thins per second - then the cost of lvm2
> management is very high - as lvm2 does far more work than just sending a
> simple ioctl (it is called logical volume management for a reason).

Qubes OS definitely falls into the second category.  Starting a qube
(virtual machine) generally involves creating three thins (one fresh and
two snapshots).  Furthermore, Qubes OS frequently starts qubes in
response to user actions, so thin volume creation speed directly impacts
system responsiveness.
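For reference, each start does roughly the equivalent of the following
(pool and volume names are made up; the real code drives this through the
Qubes storage API rather than the lvm2 CLI):

    # one fresh thin volume for volatile state...
    lvcreate -V 10G -T qubes_dom0/vm-pool -n vm-work-volatile
    # ...plus snapshots of the private and root volumes
    # (-kn clears the activation-skip flag thin snapshots get by default)
    lvcreate -s -kn -n vm-work-private-snap qubes_dom0/vm-work-private
    lvcreate -s -kn -n vm-work-root-snap qubes_dom0/vm-work-root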

> So brave developers may always write their own management tools for their
> constrained environments that will be significantly faster in terms of
> how many thins you can create per minute (btw, you will also need to
> consider dropping the usage of udev on such a system).

What kind of constraints are you referring to?  Is it possible and safe
to have udev running, but told to ignore the thins in question?
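For the lvm2 half of that, I assume the usual knob is a global_filter in
lvm.conf, so that lvm2 itself never scans the application-managed thins;
this sketch assumes they follow a predictable device-mapper naming pattern:

    # /etc/lvm/lvm.conf -- reject app-managed thins, accept everything else
    devices {
        global_filter = [ "r|^/dev/mapper/appvg-vm--.*|", "a|.*|" ]
    }

Whether the dm udev rules can be told to skip those devices equally safely
is exactly what I am asking about.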

> It's worth mentioning that the more bullet-proof you want to make your
> project, the closer you will get to the extra processing done by lvm2.

Why is this?  How does lvm2 compare to stratis, for example?

> However, before you step into these waters, you should probably evaluate
> whether thin-pool actually meets your needs given your high expectations
> for the number of supported volumes - so you do not end up with hyper-fast
> snapshot creation while the actual usage does not meet your needs...

What needs are you thinking of specifically?  Qubes OS needs block
devices, so filesystem-backed storage would require the use of loop
devices unless I use ZFS zvols.  Do you have any specific
recommendations?
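For reference, the loop-device route would look roughly like this (paths
made up):

    # expose a file-backed image as a block device
    truncate -s 10G /var/lib/qubes/vm-work.img
    losetup --find --show /var/lib/qubes/vm-work.img   # prints e.g. /dev/loop0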

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



Re: [linux-lvm] LVM RAID behavior after losing physical disk

2022-01-29 Thread John Stoffel

Andrei> I apologize for replying to my own message; I was subscribed in
Andrei> digest mode...

No problem

Andrei> I've been hitting my head against this one for a while
Andrei> now. Originally discovered this on Ubuntu 20.04, but I'm
Andrei> seeing the same issue with RHEL 8.5. The loss of a single disk
Andrei> leaves the RAID in "partial" mode, when it should be
Andrei> "degraded".

Ouch, this isn't good.  But why aren't you using MD RAID on top of the
disks (ideally partitioned, in my book), then turning that MD device
into a PV in a VG, and making your LVs in there?

Andrei> I've tried to explicitly specify the number of stripes, but it
Andrei> did not make a difference. After adding the missing disk back,
Andrei> the array is healthy again. Please see below.

I'm wondering if there's something set up in the defaults in
/etc/lvm/lvm.conf which makes a degraded array fail to activate, instead
of coming up degraded.
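If memory serves, the relevant setting is activation_mode (values
"complete", "degraded" and "partial"); something like the snippet below,
though if LVM really considers the array "partial" rather than "degraded",
this alone won't activate it:

    # /etc/lvm/lvm.conf
    activation {
        # activate with missing legs as long as no data is lost
        activation_mode = "degraded"
    }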

But honestly, if you're looking for disk-level redundancy, then I'd
strongly recommend you use disks -> MD RAID6 -> LVM -> filesystem for
your data.

It's reliable, durable and well understood.
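Something like this, reusing the device names from your example:

    # 5-disk RAID6, then LVM on top of the single MD device
    mdadm --create /dev/md0 --level=6 --raid-devices=5 \
        /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    pvcreate /dev/md0
    vgcreate pool_vg /dev/md0
    lvcreate -l +100%FREE -n pool_lv pool_vg
    mkfs.xfs /dev/pool_vg/pool_lv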

I know there's an attraction to running LVs and RAID all together,
since that should be easier to manage, right?  But I think not.

Have you tried to activate the LV using:

   lvchange -ay --activationmode degraded LV
   
as a test?  What does it say?  I'm looking at the lvmraid man page for
this suggestion.


Andrei> # cat /etc/redhat-release
Andrei> Red Hat Enterprise Linux release 8.5 (Ootpa)
Andrei> # lvm version
Andrei>   LVM version: 2.03.12(2)-RHEL8 (2021-05-19)
Andrei>   Library version: 1.02.177-RHEL8 (2021-05-19)
Andrei>   Driver version:  4.43.0

Andrei> # lsblk
Andrei> NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
Andrei> sda             8:0    0   50G  0 disk
Andrei> ├─sda1          8:1    0    1G  0 part /boot
Andrei> └─sda2          8:2    0   49G  0 part
Andrei>   ├─rhel-root 253:0    0   44G  0 lvm  /
Andrei>   └─rhel-swap 253:1    0    5G  0 lvm  [SWAP]
Andrei> sdb             8:16   0   70G  0 disk
Andrei> sdc             8:32   0  100G  0 disk
Andrei> sdd             8:48   0  100G  0 disk
Andrei> sde             8:64   0  100G  0 disk
Andrei> sdf             8:80   0  100G  0 disk
Andrei> sdg             8:96   0  100G  0 disk

Andrei> # pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
Andrei> # vgcreate pool_vg /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

Andrei> # lvcreate -l +100%FREE -n pool_lv --type raid6 --stripes 3
Andrei> --stripesize 1 pool_vg
Andrei>   Invalid stripe size 1.00 KiB.
Andrei>   Run `lvcreate --help' for more information.

Andrei> # lvcreate -l +100%FREE -n pool_lv --type raid6 --stripes 3
Andrei> --stripesize 4 pool_vg
Andrei>   Logical volume "pool_lv" created.

Andrei> # mkfs.xfs /dev/pool_vg/pool_lv
Andrei> # echo "/dev/mapper/pool_vg-pool_lv /mnt xfs
Andrei> defaults,x-systemd.mount-timeout=30 0 0" >> /etc/fstab
Andrei> # mount -a
Andrei> # touch /mnt/test

Andrei> Note the RAID is correctly striped across all 5 disks:

Andrei> # lvs -a -o name,lv_attr,copy_percent,health_status,devices pool_vg
Andrei>   LV                 Attr       Cpy%Sync Health  Devices
Andrei>   pool_lv            rwi-aor--- 100.00
Andrei> pool_lv_rimage_0(0),pool_lv_rimage_1(0),pool_lv_rimage_2(0),pool_lv_rimage_3(0),pool_lv_rimage_4(0)
Andrei>   [pool_lv_rimage_0] iwi-aor---  /dev/sdc(1)
Andrei>   [pool_lv_rimage_1] iwi-aor---  /dev/sdd(1)
Andrei>   [pool_lv_rimage_2] iwi-aor---  /dev/sde(1)
Andrei>   [pool_lv_rimage_3] iwi-aor---  /dev/sdf(1)
Andrei>   [pool_lv_rimage_4] iwi-aor---  /dev/sdg(1)
Andrei>   [pool_lv_rmeta_0]  ewi-aor---  /dev/sdc(0)
Andrei>   [pool_lv_rmeta_1]  ewi-aor---  /dev/sdd(0)
Andrei>   [pool_lv_rmeta_2]  ewi-aor---  /dev/sde(0)
Andrei>   [pool_lv_rmeta_3]  ewi-aor---  /dev/sdf(0)
Andrei>   [pool_lv_rmeta_4]  ewi-aor---  /dev/sdg(0)

Andrei> After shutting down the OS and removing a disk, reboot drops the
Andrei> system into single-user mode because it cannot mount /mnt! The RAID
Andrei> is now in "partial" mode, when it should be just "degraded".

Andrei> # lvs -a -o name,lv_attr,copy_percent,health_status,devices pool_vg
Andrei>   WARNING: Couldn't find device with uuid
Andrei> d5y3gp-taRv-2YMa-3mR0-94ZZ-72Od-IKF8Co.
Andrei>   WARNING: VG pool_vg is missing PV
Andrei> d5y3gp-taRv-2YMa-3mR0-94ZZ-72Od-IKF8Co (last written to /dev/sdc).
Andrei>   LV                 Attr       Cpy%Sync Health  Devices
Andrei>   pool_lv            rwi---r-p-          partial
Andrei> pool_lv_rimage_0(0),pool_lv_rimage_1(0),pool_lv_rimage_2(0),pool_lv_rimage_3(0),pool_lv_rimage_4(0)
Andrei>   [pool_lv_rimage_0] Iwi---r-p-          partial [unknown](1)
Andrei>   [pool_lv_rimage_1] Iwi---r---

Re: [linux-lvm] Running thin_trim before activating a thin pool

2022-01-29 Thread Zdenek Kabelac

On 29. 01. 22 at 21:09, Demi Marie Obenour wrote:

> On Sat, Jan 29, 2022 at 08:42:21PM +0100, Zdenek Kabelac wrote:
> > On 29. 01. 22 at 19:52, Demi Marie Obenour wrote:
> > > Is it possible to configure LVM2 so that it runs thin_trim before it
> > > activates a thin pool?  Qubes OS currently runs blkdiscard on every thin
> > > volume before deleting it, which is slow and unreliable.  Would running
> > > thin_trim during system startup provide a better alternative?
> > 
> > Hi
> > 
> > Nope, there is currently no support on the lvm2 side for this.
> > Feel free to open an RFE.
> 
> Done: https://bugzilla.redhat.com/show_bug.cgi?id=2048160




Thanks

Although your use-case, thin-pool on top of VDO, is not really a good plan,
and there is a good reason why lvm2 does not support this device stack
directly (i.e. a thin-pool data LV placed on a VDO LV).

I'd say you are stepping on very, very thin ice...

Also I assume you have already checked performance of discard on VDO, but I 
would not want to run this operation frequently on any larger volume...


Regards

Zdenek




Re: [linux-lvm] LVM performance vs direct dm-thin

2022-01-29 Thread Zdenek Kabelac

On 29. 01. 22 at 21:34, Demi Marie Obenour wrote:

> How much slower are operations on an LVM2 thin pool compared to manually
> managing a dm-thin target via ioctls?  I am mostly concerned about
> volume snapshot, creation, and destruction.  Data integrity is very
> important, so taking shortcuts that risk data loss is out of the
> question.  However, the application may have some additional information
> that LVM2 does not have.  For instance, it may know that the volume that
> it is snapshotting is not in use, or that a certain volume it is
> creating will never be used after power-off.



Hi

Short answer: it depends ;)

Longer story:
If you want to create a few thins per hour - then it doesn't really matter.
If you want to create a few thins per second - then the cost of lvm2 management
is very high - as lvm2 does far more work than just sending a simple ioctl
(it is called logical volume management for a reason).
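For comparison, that "simple ioctl" path is essentially what dmsetup
wraps; a minimal sketch against an already-active pool (sector counts and
device ids made up):

    # create thin device id 0 inside the pool, then map it
    dmsetup message vg-pool-tpool 0 "create_thin 0"
    dmsetup create thin0 --table "0 20971520 thin /dev/mapper/vg-pool-tpool 0"
    # snapshot: quiesce the origin, create dev id 1 as a snap of dev id 0
    dmsetup suspend thin0
    dmsetup message vg-pool-tpool 0 "create_snap 1 0"
    dmsetup resume thin0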


So brave developers may always write their own management tools for their
constrained environments that will be significantly faster in terms of how
many thins you can create per minute (btw, you will also need to consider
dropping the usage of udev on such a system).


It's worth mentioning that the more bullet-proof you want to make your
project, the closer you will get to the extra processing done by lvm2.


However, before you step into these waters, you should probably evaluate
whether thin-pool actually meets your needs given your high expectations for
the number of supported volumes - so you do not end up with hyper-fast
snapshot creation while the actual usage does not meet your needs...


Regards

Zdenek




[linux-lvm] LVM performance vs direct dm-thin

2022-01-29 Thread Demi Marie Obenour
How much slower are operations on an LVM2 thin pool compared to manually
managing a dm-thin target via ioctls?  I am mostly concerned about
volume snapshot, creation, and destruction.  Data integrity is very
important, so taking shortcuts that risk data loss is out of the
question.  However, the application may have some additional information
that LVM2 does not have.  For instance, it may know that the volume that
it is snapshotting is not in use, or that a certain volume it is
creating will never be used after power-off.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



Re: [linux-lvm] Running thin_trim before activating a thin pool

2022-01-29 Thread Demi Marie Obenour
On Sat, Jan 29, 2022 at 08:42:21PM +0100, Zdenek Kabelac wrote:
> On 29. 01. 22 at 19:52, Demi Marie Obenour wrote:
> > Is it possible to configure LVM2 so that it runs thin_trim before it
> > activates a thin pool?  Qubes OS currently runs blkdiscard on every thin
> > volume before deleting it, which is slow and unreliable.  Would running
> > thin_trim during system startup provide a better alternative?
> 
> Hi
> 
> 
> Nope, there is currently no support on the lvm2 side for this.
> Feel free to open an RFE.

Done: https://bugzilla.redhat.com/show_bug.cgi?id=2048160

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



Re: [linux-lvm] Running thin_trim before activating a thin pool

2022-01-29 Thread Zdenek Kabelac

On 29. 01. 22 at 19:52, Demi Marie Obenour wrote:

> Is it possible to configure LVM2 so that it runs thin_trim before it
> activates a thin pool?  Qubes OS currently runs blkdiscard on every thin
> volume before deleting it, which is slow and unreliable.  Would running
> thin_trim during system startup provide a better alternative?


Hi


Nope, there is currently no support on the lvm2 side for this.
Feel free to open an RFE.

I guess this would possibly justify some form of support for 'writable' 
component activation.
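Conceptually the flow would be something like the sketch below, run while
the pool is inactive (names are hypothetical; today component activation of
_tdata/_tmeta is read-only, which is why new support would be needed):

    # activate the pool's component LVs (would have to be writable)
    lvchange -ay vg/pool_tdata vg/pool_tmeta
    thin_trim --data-dev /dev/vg/pool_tdata --metadata-dev /dev/vg/pool_tmeta
    lvchange -an vg/pool_tdata vg/pool_tmeta
    # then activate the pool itself as usual
    lvchange -ay vg/pool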


Regards

Zdenek




[linux-lvm] Running thin_trim before activating a thin pool

2022-01-29 Thread Demi Marie Obenour
Is it possible to configure LVM2 so that it runs thin_trim before it
activates a thin pool?  Qubes OS currently runs blkdiscard on every thin
volume before deleting it, which is slow and unreliable.  Would running
thin_trim during system startup provide a better alternative?
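For context, the current per-volume teardown is roughly the following
(volume names made up):

    # discard all blocks, then delete -- slow and unreliable on big volumes
    blkdiscard /dev/qubes_dom0/vm-work-private
    lvremove -y qubes_dom0/vm-work-private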
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/