Hi David,

I see this comment on res_process():

"""
/*
 * Go through queued actions, and make lock/unlock calls on the resource
 * based on the actions and the existing lock state.
 *
 * All lock operations sent to the lock manager are non-blocking.
 * This is because sanlock does not support lock queueing.
 * Eventually we could enhance this to take advantage of lock
 * queueing when available (i.e. for the dlm).
"""

Is this the reason for the lvmlockd limitation on lvresize with a
"sh" lock, namely that lvmlockd cannot up-convert "sh" to "ex" to
perform the lvresize command?
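
To check my understanding of "lock queueing when available (i.e. for
the dlm)": with libdlm, I imagine an up-convert request would look
roughly like the sketch below. This is not lvmlockd code; the function
name and error handling are mine, and it assumes a lockspace that is
already joined and an lksb holding the existing shared lock:

"""
/*
 * A minimal sketch, not lvmlockd code: ask the dlm to convert a held
 * shared (PR) lock to exclusive (EX) without blocking.
 */
#include <string.h>
#include <libdlm.h>

static int try_upconvert_sh_to_ex(dlm_lshandle_t ls,
                                  struct dlm_lksb *lksb,
                                  const char *res_name)
{
        /*
         * LKF_CONVERT converts the existing lock (lksb->sb_lkid) in
         * place; LKF_NOQUEUE makes the dlm fail the request instead
         * of queueing it -- the "non-blocking" style described in the
         * comment on res_process() above.
         */
        int rv = dlm_ls_lock_wait(ls, LKM_EXMODE, lksb,
                                  LKF_CONVERT | LKF_NOQUEUE,
                                  res_name, strlen(res_name),
                                  0, NULL, NULL, NULL);
        if (rv < 0)
                return rv;      /* the request itself failed */
        return lksb->sb_status; /* 0: now EX; -EAGAIN: would block */
}
"""

If that is roughly right, dropping LKF_NOQUEUE would let the dlm queue
the conversion instead of failing it, which I guess is the enhancement
the comment hints at.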

Regards,
Eric

On 12/28/2017 06:42 PM, Eric Ren wrote:
Hi David,

I see there is a limitation on lvresizing an LV that is active on multiple nodes.
From `man lvmlockd`:

"""
limitations of lockd VGs
...
* resizing an LV that is active in the shared mode on multiple hosts
"""

This seems like a big limitation when using lvmlockd in a cluster:

"""
c1-n1:~ # lvresize -L-1G vg1/lv1
  WARNING: Reducing active logical volume to 1.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vg1/lv1? [y/n]: y
  LV is already locked with incompatible mode: vg1/lv1
"""

Node "c1-n1" is the last node having vg1/lv1 active on it.
Can we change the lock mode from "shared" to "exclusive" to
lvresize without having to deactivate the LV on the last node?
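
As far as I can tell, the only way today is a full round trip through
deactivation, something like the command sequence below (the sequence
and the second node name "c1-n2" are my own guess, not from the man
page):

"""
# deactivate the LV on every node where it is active in shared mode
c1-n1:~ # lvchange -an vg1/lv1
c1-n2:~ # lvchange -an vg1/lv1

# activate exclusively on one node and resize
c1-n1:~ # lvchange -aey vg1/lv1
c1-n1:~ # lvresize -L-1G vg1/lv1

# drop the exclusive lock and reactivate shared on all nodes
c1-n1:~ # lvchange -an vg1/lv1
c1-n1:~ # lvchange -asy vg1/lv1
c1-n2:~ # lvchange -asy vg1/lv1
"""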

Having to deactivate the LV on all nodes just to resize it reduces
availability. Is there a plan to eliminate this limitation in the
near future?

Regards,
Eric

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
