Hi David,
IIRC, you mean we could consider using cluster raid1 as the underlying DM
target to support pvmove in a cluster, since the current pvmove uses the
mirror target?
That's what I imagined could be done, but I've not thought about it in
detail. IMO pvmove under a shared LV is too complicated ...
On Wed, Jan 10, 2018 at 02:55:42PM +0800, Eric Ren wrote:
> If cluster raid1 is used as the PV, data is replicated and data migration
> is nearly equivalent to replacing a disk. However, when the PV is on a raw
> disk, pvmove is very handy for data migration.
>
> IIRC, you mean we could consider using cluster raid1 as the underlying DM
> target ...
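For the disk-replacement case mentioned above, a raid1 LV already lets you
swap a device out directly; a hedged sketch, with vg1/raidlv and the device
names as placeholders:
"""
# replace one leg of a raid1 LV with a new device
lvconvert --replace /dev/sda1 vg1/raidlv /dev/sdb1
"""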
Hi David,
On 01/09/2018 11:42 PM, David Teigland wrote:
On Tue, Jan 09, 2018 at 10:42:27AM +0800, Eric Ren wrote:
I've tested your patch and it works very well. Thanks very much.
Could you please consider pushing this patch upstream?
OK
Thanks very much! So, can we update the `man 8 lvmlockd` ...
On Tue, Jan 09, 2018 at 10:42:27AM +0800, Eric Ren wrote:
> > I've tested your patch and it works very well. Thanks very much.
>
> Could you please consider pushing this patch upstream?
OK
> Also, is this the same case for pvmove as for lvresize? If so, can we also
> work out a similar patch for pvmove ...
Hi David,
On 01/04/2018 05:06 PM, Eric Ren wrote:
David,
On 01/03/2018 11:07 PM, David Teigland wrote:
On Wed, Jan 03, 2018 at 11:52:34AM +0800, Eric Ren wrote:
1. on one node: lvextend --lockopt skip -L+1G VG/LV
That option doesn't exist, but illustrates the point that some new
option could be used to skip the incompatible LV locking in lvmlockd.
David,
On 01/03/2018 11:07 PM, David Teigland wrote:
On Wed, Jan 03, 2018 at 11:52:34AM +0800, Eric Ren wrote:
1. on one node: lvextend --lockopt skip -L+1G VG/LV
That option doesn't exist, but illustrates the point that some new
option could be used to skip the incompatible LV locking in lvmlockd.
On Wed, Jan 03, 2018 at 11:52:34AM +0800, Eric Ren wrote:
> > 1. on one node: lvextend --lockopt skip -L+1G VG/LV
> >
> > That option doesn't exist, but illustrates the point that some new
> > option could be used to skip the incompatible LV locking in lvmlockd.
>
> Hmm, is it safe to just ...
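A sketch of the two-step flow being discussed; --lockopt skip is the
hypothetical option named above (it does not exist), while lvchange --refresh
is an existing command:
"""
# on one node: extend the LV, skipping the incompatible LV lock
# (--lockopt skip is hypothetical, per the discussion above)
lvextend --lockopt skip -L+1G VG/LV
# on the other nodes: refresh the LV so they pick up the new size
lvchange --refresh VG/LV
"""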
Hello David,
Happy new year!
On 01/03/2018 01:10 AM, David Teigland wrote:
* resizing an LV that is active in the shared mode on multiple hosts
It seems a big limitation of using lvmlockd in a cluster:
Only in the case where the LV is active on multiple hosts at once,
i.e. a cluster fs, which is less common than a local fs.
> * resizing an LV that is active in the shared mode on multiple hosts
>
> It seems a big limitation of using lvmlockd in a cluster:
Only in the case where the LV is active on multiple hosts at once,
i.e. a cluster fs, which is less common than a local fs.
In the general case, it's not safe to assume ...
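For reference, the multiple-host case arises when each node activates the LV
with a shared lock for a cluster fs; a minimal sketch, with vg1/lv1 and the
mount point as placeholders:
"""
# on each host: activate the LV with a shared lock (lockd VG)
lvchange -asy vg1/lv1
# then mount the cluster filesystem (e.g. gfs2 or ocfs2)
mount /dev/vg1/lv1 /mnt
"""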
Hi David,
I see the comments on res_process():
"""
/*
* Go through queued actions, and make lock/unlock calls on the resource
* based on the actions and the existing lock state.
*
* All lock operations sent to the lock manager are non-blocking.
* This is because sanlock does not support lock queueing ...
"""
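To illustrate the pattern that comment describes, here is a hypothetical C
sketch (not the actual lvmlockd code): each queued action makes one
non-blocking call into the lock manager, and a busy lock fails the action
immediately instead of blocking the daemon.
"""
#include <errno.h>
#include <stdio.h>

enum op { OP_LOCK, OP_UNLOCK };

struct action {
	enum op op;
	const char *client;
};

/* Stub for a non-blocking lock-manager call; a real backend would
 * return -EAGAIN when the lock is held elsewhere, since (per the
 * comment above) requests are not queued. */
static int lm_trylock(const char *res)
{
	(void)res;
	return -EAGAIN; /* pretend the resource is busy */
}

static int lm_unlock(const char *res)
{
	(void)res;
	return 0;
}

/* Walk the queued actions for one resource; every lock call is
 * non-blocking, so a busy lock fails the action right away rather
 * than stalling the daemon. */
static void res_process_sketch(const char *res, struct action *acts, int n)
{
	for (int i = 0; i < n; i++) {
		int rv = (acts[i].op == OP_LOCK) ? lm_trylock(res)
						 : lm_unlock(res);
		if (rv == -EAGAIN)
			printf("%s: busy, failing action for %s\n",
			       res, acts[i].client);
		else
			printf("%s: rv=%d for %s\n", res, rv, acts[i].client);
	}
}

int main(void)
{
	struct action acts[] = {
		{ OP_LOCK, "clientA" },
		{ OP_UNLOCK, "clientB" },
	};
	res_process_sketch("VG_lv1", acts, 2);
	return 0;
}
"""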
Hi David,
I see there is a limitation on resizing an LV that is active on multiple nodes.
From `man lvmlockd`:
"""
limitations of lockd VGs
...
* resizing an LV that is active in the shared mode on multiple hosts
"""
It seems a big limitation of using lvmlockd in a cluster:
"""
c1-n1:~ # lvresize -L-1 ...
"""
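The usual way around this, assuming the workaround is to drop to single-host
activation first (vg1/lv1 is a placeholder):
"""
# deactivate the LV on every host except the one doing the resize
lvchange -an vg1/lv1
# resize on the remaining active node
lvresize -L+1G vg1/lv1
# reactivate with a shared lock on the other hosts afterwards
lvchange -asy vg1/lv1
"""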