Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd

2022-11-03 Thread Zhiyong Ye

Hi Gathman,

Thank you so much for sharing your usage scenario; I learned a lot
from your experience.


Regards!

Zhiyong

On 11/2/22 2:08 AM, Stuart D Gathman wrote:

> On Tue, Nov 01, 2022 at 01:36:17PM +0800, Zhiyong Ye wrote:
I want to implement live migration of VMs in an lvm + lvmlockd + sanlock
environment. There are multiple hosts in the cluster using the same iscsi
connection, and the VMs run in this environment on thinlv volumes. But if
I want to live migrate a VM, it is difficult, since thinlvs from the same
thin pool can only be exclusively active on one host.


I just expose the LV (thin or not - I prefer not) as an iSCSI target
that the VM boots from.  There is only one host that manages a thin
pool, and that is a single point of failure, but no locking issues.  You
issue the LVM commands on the iSCSI server (which I guess they call NAS
these days).
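For reference, a minimal sketch of that setup on the storage host might
look like the following (assuming an LIO/targetcli-based iSCSI server;
the VG, LV, and IQN names are placeholders, not taken from this thread):

  # Carve a plain (linear) LV for one VM on the storage host
  lvcreate -L 20G -n vm01-disk vg0

  # Export it as an iSCSI LUN with targetcli (LIO)
  targetcli /backstores/block create name=vm01-disk dev=/dev/vg0/vm01-disk
  targetcli /iscsi create iqn.2022-11.com.example:storage.vm01
  targetcli /iscsi/iqn.2022-11.com.example:storage.vm01/tpg1/luns \
      create /backstores/block/vm01-disk
  targetcli /iscsi/iqn.2022-11.com.example:storage.vm01/tpg1/acls \
      create iqn.2022-11.com.example:host1
  targetcli saveconfig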

If you need a way for a VM to request enlarging an LV it accesses, or
similar interaction, I would make a simple API where each VM gets a
token that determines what LVs it has access to and how much total
storage it can consume.  Maybe someone has already done that.
I just issue the commands on the LVM/NAS/iSCSI host.
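No such tool is shown in this thread; purely as an illustration, a
token-gated request path on the LVM host could be as small as a wrapper
script like the one below (the token file, its format, and all names are
hypothetical):

  #!/bin/sh
  # lv-grow: hypothetical wrapper run on the LVM/iSCSI host (e.g. as a
  # forced SSH command) that lets a VM ask for more space on its LV.
  # /etc/lv-tokens maps "token vg/lv" lines; the format is made up here.
  # usage: lv-grow <token> <extra-size>, e.g. lv-grow SECRET123 5G
  TOKEN="$1"; EXTRA="$2"
  LV=$(awk -v t="$TOKEN" '$1 == t {print $2}' /etc/lv-tokens)
  [ -n "$LV" ] || { echo "unknown token" >&2; exit 1; }
  exec lvextend -L "+$EXTRA" "$LV"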

I haven't done this, but there can be more than one thin pool, each on
its own NAS/iSCSI server.  So if one storage server crashes, then
only the VMs attached to it crash.  You can only (simply) migrate a VM
to another VM host on the same storage server.


BUT, you can migrate a VM to another host less instantly using DRBD
or another remote mirroring driver.  I have done this.  You get the
remote LV mirror mostly synced, suspend the VM (to a file if you need
to rsync that to the remote), finish the sync of the LV(s), resume the
VM on the new server - in another city.  Handy when you have a few hours'
notice of a natural disaster (hurricane/flood).
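As a rough sketch of that sequence (resource, domain, and host names are
placeholders; a real setup also needs the DRBD resource configured on
both hosts beforehand):

  # 1. Bring up the DRBD resource backed by the VM's LV and let it sync
  drbdadm up vm01            # on both hosts
  drbdadm primary vm01       # on the current (source) host
  # ... wait until "drbdadm status vm01" shows the peer nearly up to date

  # 2. Suspend the VM to a state file and copy it to the new host
  virsh save vm01 /var/tmp/vm01.state
  rsync /var/tmp/vm01.state newhost:/var/tmp/

  # 3. Let DRBD finish replicating the last writes, then swap roles
  drbdadm secondary vm01                   # source host
  ssh newhost drbdadm primary vm01         # destination host

  # 4. Resume the VM on the destination
  ssh newhost virsh restore /var/tmp/vm01.state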




Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd

2022-11-03 Thread Zhiyong Ye



On 11/2/22 1:57 AM, David Teigland wrote:

On Wed, Nov 02, 2022 at 01:02:27AM +0800, Zhiyong Ye wrote:

Hi Dave,

Thank you for your reply!

Does this mean that there is no way to live migrate VMs when using lvmlockd?


You could by using linear LVs; ovirt does this using sanlock directly,
since lvmlockd arrived later.



Yes, a standard LV is theoretically capable of live migration because it
supports multiple hosts using the same LV concurrently with a shared
lock (lvchange -asy). But I want to support the live migration feature
for both LV types (thin LV and standard LV).
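For example (a sketch; the VG and LV names are placeholders), with
lvmlockd + sanlock a linear LV can be activated with a shared lock on
several hosts at once, while a thin LV only allows exclusive activation
on a single host:

  # linear LV: shared activation is allowed, run on each host involved
  lvchange -asy vg0/vm01-disk

  # thin LV: only exclusive activation on a single host is allowed
  lvchange -aey vg0/vm01-thin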



As you describe, the granularity of thinlv sharing/unsharing would need
to be per read/write IO, but lvmlockd enforces this limitation at the
level of the lvm activation command.

Is it possible to modify the code of lvmlockd to break this limitation
and let libvirt/qemu guarantee the mutual exclusivity of each read/write
IO across hosts during live migration?


lvmlockd locking does not apply to the dm i/o layers.  The kind of
multi-host locking that you seem to be talking about would need to be
implemented inside dm-thin to protect on-disk data structures that it
modifies.  In reality you would need to write a new dm target with locking
and data structures designed for that kind of sharing.


I can try to write a new dm-thin target, or make some modifications to
the existing dm-thin target, to support this feature if it is
technically feasible. But I'm curious why the current dm-thin doesn't
support multi-host shared access the way dm-linear does.


Regards!

Zhiyong



Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd

2022-11-03 Thread Zhiyong Ye


Hi Demi,

Thank you for your reply!

Using thin provisioning on the storage server (SAN) side would make the
problem much simpler, but my scenario needs to support different types of
SAN, which means the SAN may not support this feature.


On 11/2/22 2:15 AM, Demi Marie Obenour wrote:

On Tue, Nov 01, 2022 at 12:57:56PM -0500, David Teigland wrote:

On Wed, Nov 02, 2022 at 01:02:27AM +0800, Zhiyong Ye wrote:

Hi Dave,

Thank you for your reply!

Does this mean that there is no way to live migrate VMs when using lvmlockd?


You could by using linear LVs; ovirt does this using sanlock directly,
since lvmlockd arrived later.


Another approach would be to use thin provisioning on the SAN instead of
at the LVM level.




Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd

2022-11-02 Thread Zhiyong Ye

Hi Dave,

Thank you for your reply!

Does this mean that there is no way to live migrate VMs when using lvmlockd?

As you describe, the granularity of thinlv sharing/unsharing would need
to be per read/write IO, but lvmlockd enforces this limitation at the
level of the lvm activation command.


Is it possible to modify the code of lvmlockd to break this limitation
and let libvirt/qemu guarantee the mutual exclusivity of each read/write
IO across hosts during live migration?


Thanks!

Zhiyong

On 11/1/22 10:42 PM, David Teigland wrote:

On Tue, Nov 01, 2022 at 01:36:17PM +0800, Zhiyong Ye wrote:

Hi all,

I want to implement live migration of VMs in an lvm + lvmlockd + sanlock
environment. There are multiple hosts in the cluster using the same iscsi
connection, and the VMs run in this environment on thinlv volumes. But if
I want to live migrate a VM, it is difficult, since thinlvs from the same
thin pool can only be exclusively active on one host.

I found a previous thread that discussed this issue:

https://lore.kernel.org/all/20180305165926.ga20...@redhat.com/


Hi, in that email I tried to point out that the real problem is not the
locking, but rather the inability of dm-thin to share a thin pool among
multiple hosts.  The locking restrictions just reflect that technical
limitation.

Dave





Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd

2022-11-01 Thread Stuart D Gathman

> On Tue, Nov 01, 2022 at 01:36:17PM +0800, Zhiyong Ye wrote:

I want to implement live migration of VMs in an lvm + lvmlockd + sanlock
environment. There are multiple hosts in the cluster using the same iscsi
connection, and the VMs run in this environment on thinlv volumes. But if
I want to live migrate a VM, it is difficult, since thinlvs from the same
thin pool can only be exclusively active on one host.


I just expose the LV (thin or not - I prefer not) as an iSCSI target
that the VM boots from.  There is only one host that manages a thin
pool, and that is a single point of failure, but no locking issues.  You
issue the LVM commands on the iSCSI server (which I guess they call NAS
these days).

If you need a way for a VM to request enlarging an LV it accesses, or
similar interaction, I would make a simple API where each VM gets a
token that determines what LVs it has access to and how much total
storage it can consume.  Maybe someone has already done that.
I just issue the commands on the LVM/NAS/iSCSI host.

I haven't done this, but there can be more than one thin pool, each on
its own NAS/iSCSI server.  So if one storage server crashes, then
only the VMs attached to it crash.  You can only (simply) migrate a VM
to another VM host on the same storage server.


BUT, you can migrate a VM to another host less instantly using DRBD
or another remote mirroring driver.  I have done this.  You get the
remote LV mirror mostly synced, suspend the VM (to a file if you need
to rsync that to the remote), finish the sync of the LV(s), resume the
VM on the new server - in another city.  Handy when you have a few hours'
notice of a natural disaster (hurricane/flood).




Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd

2022-11-01 Thread Demi Marie Obenour
On Tue, Nov 01, 2022 at 12:57:56PM -0500, David Teigland wrote:
> On Wed, Nov 02, 2022 at 01:02:27AM +0800, Zhiyong Ye wrote:
> > Hi Dave,
> > 
> > Thank you for your reply!
> > 
> > Does this mean that there is no way to live migrate VMs when using lvmlockd?
> 
> You could by using linear LVs; ovirt does this using sanlock directly,
> since lvmlockd arrived later.

Another approach would be to use thin provisioning on the SAN instead of
at the LVM level.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab




Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd

2022-11-01 Thread David Teigland
On Wed, Nov 02, 2022 at 01:02:27AM +0800, Zhiyong Ye wrote:
> Hi Dave,
> 
> Thank you for your reply!
> 
> Does this mean that there is no way to live migrate VMs when using lvmlockd?

You could by using linear LVs; ovirt does this using sanlock directly,
since lvmlockd arrived later.

> As you describe, the granularity of thinlv sharing/unsharing would need
> to be per read/write IO, but lvmlockd enforces this limitation at the
> level of the lvm activation command.
> 
> Is it possible to modify the code of lvmlockd to break this limitation and
> let libvirt/qemu guarantee the mutual exclusivity of each read/write IO
> across hosts during live migration?

lvmlockd locking does not apply to the dm i/o layers.  The kind of
multi-host locking that you seem to be talking about would need to be
implemented inside dm-thin to protect on-disk data structures that it
modifies.  In reality you would need to write a new dm target with locking
and data structures designed for that kind of sharing.

Dave



Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd

2022-11-01 Thread David Teigland
On Tue, Nov 01, 2022 at 01:36:17PM +0800, Zhiyong Ye wrote:
> Hi all,
> 
> I want to implement live migration of VMs in an lvm + lvmlockd + sanlock
> environment. There are multiple hosts in the cluster using the same iscsi
> connection, and the VMs run in this environment on thinlv volumes. But if
> I want to live migrate a VM, it is difficult, since thinlvs from the same
> thin pool can only be exclusively active on one host.
> 
> I found a previous thread that discussed this issue:
> 
> https://lore.kernel.org/all/20180305165926.ga20...@redhat.com/

Hi, in that email I tried to point out that the real problem is not the
locking, but rather the inability of dm-thin to share a thin pool among
multiple hosts.  The locking restrictions just reflect that technical
limitation.

Dave



[linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd

2022-11-01 Thread Zhiyong Ye

Hi all,

I want to implement live migration of VMs in an lvm + lvmlockd + sanlock
environment. There are multiple hosts in the cluster using the same iscsi
connection, and the VMs run in this environment on thinlv volumes. But if
I want to live migrate a VM, it is difficult, since thinlvs from the same
thin pool can only be exclusively active on one host.


I found a previous thread that discussed this issue:

https://lore.kernel.org/all/20180305165926.ga20...@redhat.com/

The VM on the source host becomes suspended after completing the drain
IO operation, and no new IO is issued until the VM resumes on the
destination host during the live migration. Dave recommends deactivating
the volumes at the source and activating them at the destination within
this time window.
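In LVM terms that window would look roughly like this (a sketch; the VG
and LV names are placeholders, and the suspend/resume of the guest itself
is handled by the hypervisor as part of the migration, not by these
commands):

  # on the source host, once the guest is paused and its IO has drained:
  lvchange -an vg0/vm01-thin

  # on the destination host, before the guest is resumed there:
  lvchange -aey vg0/vm01-thin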


However, executing the activate/deactivate command for thinlv volumes
during a VM live migration causes the VM guest to receive an ACPI
message, and the guest will then assume that the disk device has been
removed.


Or maybe my understanding is off. Can I ask for your help?

Regards,

Zhiyong Ye
