Re: [linux-lvm] lvcreate hangs forever and udev work timeout

2019-04-14 Thread Eric Ren
Hi, The reason has been found, as you predicted. > is it because of slow read operation (i.e. some raid arrays are known to > wake-up slowly) The udev worker is blocked until the timeout and then killed, because the "blkid" in /usr/lib/udev/rules.d/13-dm-disk.rules is too slow under high IO load o
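A hedged sketch of how this can be narrowed down: replay the rules for the device to see which step stalls, and only then consider raising the worker timeout (the exact option name varies between systemd releases; device name dm-25 is just an example from this thread):
```
# Replay udev rules for the device and watch where processing stalls
udevadm test /sys/block/dm-25 2>&1 | less

# Watch live events and their properties while reproducing the IO load
udevadm monitor --udev --property

# If the rules are fine and only IO load is the problem, the worker timeout
# can be raised, e.g. via the kernel command line (see `man systemd-udevd`):
#   udev.event_timeout=300
```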

Re: [linux-lvm] [lvm-devel] lvcreate hangs forever and udev work timeout

2019-04-12 Thread Eric Ren
Hi, > Hmm would it be possible that associated thin pool would be in some erroneous > condition - i.e. out-of-space - or processing some resize ? The testing model is: create/mount/"do IO" on a thin LV, then umount/delete the thin LV, running this workflow in parallel. > This likely could resul
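A minimal sketch of that workflow, assuming a VG named vg0 and a thin pool named thinpool0 (both names hypothetical here):
```
#!/bin/bash
# One worker of the parallel create/mount/IO/umount/delete loop
set -e
i=$1                                    # unique index per parallel worker
lvcreate -V 1G --thin -n thin$i vg0/thinpool0
mkfs.ext4 /dev/vg0/thin$i
mkdir -p /mnt/thin$i
mount /dev/vg0/thin$i /mnt/thin$i
dd if=/dev/zero of=/mnt/thin$i/data bs=1M count=512 oflag=direct
umount /mnt/thin$i
lvremove -y vg0/thin$i
```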

Re: [linux-lvm] lvcreate hangs forever and udev work timeout

2019-04-12 Thread Eric Ren
Hi, > Although /dev/dm-26 is visible, the device seems not ready in the kernel. Sorry, it's not: [root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup info /dev/dm-26 Device dm-26 not found Command failed. [root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup info /dev/dm-25 Name: vg0-21 State:

Re: [linux-lvm] lvcreate hangs forever and udev work timeout

2019-04-12 Thread Eric Ren
Hi, > Since the /dev/dm-x has been created, I don't understand what it waits for > udev to do? > Does it just wait for the udev rules to create the device symlinks? Although /dev/dm-26 is visible, the device seems not ready in the kernel. [root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup udevcookies Cookie Semid

Re: [linux-lvm] lvcreate hangs forever and udev work timeout

2019-04-12 Thread Eric Ren
Hi! > When udev kills its worker due to a timeout - so the udev rules were not finished > within the predefined timeout (which unfortunately changes whenever the udev developers > change their mind - so it ranges from 90 seconds to 300 seconds depending > on the date of release) - you need to look out for the reason why t

Re: [linux-lvm] [lvm-devel] Aborting. LV mythinpool_tmeta is now incomplete

2019-04-12 Thread Eric Ren
hem here so we can check > which rule could be suspected. Thanks! From this, I've learned how important device filter setup is! Regards, Eric -- - Eric Ren

[linux-lvm] lvcreate hangs forever and udev work timeout

2019-04-12 Thread Eric Ren
Hi, As the subject says, it seems to be an interaction problem between lvm and systemd-udev: ``` #lvm version LVM version: 2.02.130(2)-RHEL7 (2015-10-14) Library version: 1.02.107-RHEL7 (2015-10-14) Driver version: 4.35.0 ``` lvm call trace when it hangs: ``` (gdb) bt #0 0x7f7030b876a7 in semop ()
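The semop() frame is lvm waiting on the System V semaphore behind a udev "cookie". Assuming the hang really is a cookie that udev never completed, a sketch of how to inspect and, as a last resort, release it:
```
# Cookies device-mapper is still waiting on, with their semaphore ids
dmsetup udevcookies

# The System V semaphores backing those cookies
ipcs -s

# Last resort: complete one stuck cookie (or all of them) so the hung
# lvcreate can return; /dev symlinks for the LV may be left unfinished
dmsetup udevcomplete <cookie_value>
dmsetup udevcomplete_all
```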

Re: [linux-lvm] Aborting. LV mythinpool_tmeta is now incomplete

2019-04-11 Thread Eric Ren
Hi, > So do you get 'partial' error on thin-pool activation on your physical > server ? Yes, the VG of the thin pool has only one simple physical disk. At the beginning, I also suspected that the disk might disconnect at that moment. But I am starting to think it may be caused by some reason hidden in the in

Re: [linux-lvm] Aborting. LV mythinpool_tmeta is now incomplete

2019-04-11 Thread Eric Ren
So, we're evaluating such a solution now~ Thanks, Eric On Thu, 11 Apr 2019 at 19:04, Zdenek Kabelac wrote: > On 11. 04. 19 at 2:27, Eric Ren wrote: > > Hello list, > > > > Recently, we're exercising our container environment which uses lvm to > manage >

Re: [linux-lvm] Aborting. LV mythinpool_tmeta is now incomplete

2019-04-11 Thread Eric Ren
Hi, > Hi, I could recommend orienting towards a solution where the 'host' system > provides some service for your containers - the container asks for an action, > the service orchestrates the action on the system - and returns the requested resource > to > the container. > Right, it's all k8s, containerd, OC

Re: [linux-lvm] Aborting. LV mythinpool_tmeta is now incomplete

2019-04-11 Thread Eric Ren
, what reasons might cause these errors? Regards, Eric On Thu, 11 Apr 2019 at 08:27, Eric Ren wrote: > Hello list, > > Recently, we're exercising our container environment which uses lvm to > manage thin LVs, meanwhile we found a very strange error when activating the > thin

[linux-lvm] Aborting. LV mythinpool_tmeta is now incomplete

2019-04-11 Thread Eric Ren
Hello list, Recently, we're exercising our container environment which uses lvm to manage thin LVs, and we found a very strange error when activating the thin LV: “Aborting. LV mythinpool_tmeta is now incomplete and '--activationmode partial' was not specified.\n: exit status 5: unknown" cent
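That message usually means a PV backing the pool's _tmeta LV was not visible at activation time. A sketch of the usual checks, assuming the VG is called vg0 and the pool mythinpool (names hypothetical), to be used only if the missing-PV situation is understood:
```
# Is any PV of the VG currently missing?
pvs
vgs -o +vg_attr vg0      # a 'p' in the attr column marks a partial VG

# Only if the risk is understood and accepted:
lvchange -ay --activationmode partial vg0/mythinpool
```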

Re: [linux-lvm] buffer overflow detected: lvcreate terminated

2019-04-04 Thread Eric Ren
Hi, My LVM version: > ``` > #lvm version > LVM version: 2.02.180(2)-RHEL7 (2018-07-20) > Library version: 1.02.149-RHEL7 (2018-07-20) > Driver version: 4.35.0 > ``` > > I see you recently pushed this patch, which looks like a fix for this problem: > > ``` > commit fdb6ef8a85e9adc4805202b320

[linux-lvm] buffer overflow detected: lvcreate terminated

2019-04-03 Thread Eric Ren
Hi Marian, I have an lvm failure below when creating a lot of thin snapshot LVs for containers as rootfs. ``` *** buffer overflow detected ***: lvcreate terminated\n=== Backtrace: =\n/lib64/libc.so.6(__fortify_fail+0x37)[0x7f192c2389e7]\n/lib64/libc.so.6(+0x115b62)[0x7f192c236b62]\n

Re: [linux-lvm] Question about thin-pool/thin LV with stripes

2019-01-28 Thread Eric Ren
Hi Zdenek, When 'stripe_size' is the 'right size' - the striped device should appear > faster, > but telling you what's the best size is some sort of 'black magic' :) > > Basically - stripe size should match the boundaries of thin-pool chunk sizes. > > i.e. for a thin-pool with 128K chunksize - and 2 disks
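Following that rule of thumb (128K pool chunk across 2 disks → 64K stripe), a hedged one-command example; the VG, pool name, size and devices are illustrative only:
```
# 2 PVs, 64K stripes so that one 128K thin-pool chunk spans both disks
lvcreate --type thin-pool -L 100G -n thinpool0 \
         --stripes 2 --stripesize 64k --chunksize 128k \
         vg0 /dev/sdb /dev/sdc
```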

Re: [linux-lvm] Question about thin-pool/thin LV with stripes

2019-01-25 Thread Eric Ren
Hi, With a single command to create the thin-pool, the metadata LV is not created > with a striped > target. Is this designed on purpose, or does the command just not handle > this case very > well for now? > > My main concern here is: if the metadata LV uses a striped target, can > thin_check/thin_repair to

Re: [linux-lvm] Question about thin-pool/thin LV with stripes

2019-01-25 Thread Eric Ren
Hi, As you can see, only the "mythinpool_tdata" LV has 2 stripes. Is that OK? > If I want the performance benefit of stripes, will it work for me? Or > should I create the data LV, metadata LV, thin pool and thin LV step by > step > and specify "--stripes 2" in every step? > With single comma
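A sketch of that step-by-step variant — create striped data and metadata LVs first, then combine them with lvconvert; names and sizes are assumptions:
```
# Striped data and metadata LVs
lvcreate -L 100G -n pool_data -i 2 -I 64k vg0
lvcreate -L 1G   -n pool_meta -i 2 -I 64k vg0

# Combine them into a thin pool; pool_data becomes the pool's data device
lvconvert --type thin-pool --poolmetadata vg0/pool_meta vg0/pool_data
```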

[linux-lvm] Question about thin-pool/thin LV with stripes

2019-01-23 Thread Eric Ren
thin-LV when out of space? Any suggestion would be much appreciated, thanks in advance! Regards, Eric Ren

Re: [linux-lvm] Unsync-ed LVM Mirror

2018-02-05 Thread Eric Ren
╰─$ git describe cd15fb64ee56192760ad5c1e2ad97a65e735b18b v4.12-rc5-2-gcd15fb64ee56 """ Eric Warm regards, Liwei On 5 Feb 2018 15:27, "Eric Ren" <z...@suse.com> wrote: Hi, Your LVM version and kernel version please? like: ""

Re: [linux-lvm] Unsync-ed LVM Mirror

2018-02-05 Thread Eric Ren
able-udev_sync # uname -a Linux dataserv 4.14.0-3-amd64 #1 SMP Debian 4.14.13-1 (2018-01-14) x86_64 GNU/Linux Warm regards, Liwei On 5 Feb 2018 15:27, "Eric Ren" <z...@suse.com> wrote: Hi, Your LVM version and kernel version please? like: "

Re: [linux-lvm] Unsync-ed LVM Mirror

2018-02-04 Thread Eric Ren
Hi, Your LVM version and kernel version please? like: # lvm version LVM version: 2.02.177(2) (2017-12-18) Library version: 1.03.01 (2017-12-18) Driver version: 4.35.0 # uname -a Linux sle15-c1-n1 4.12.14-9.1-default #1 SMP Fri Jan 19 09:13:51 UTC 2018 (849a2fe) x86_64 x86_64 x8

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-11 Thread Eric Ren
Hi David, IIRC, you mean we can consider using cluster raid1 as the underlying DM target to support pvmove in a cluster, since the current pvmove is using the mirror target? That's what I imagined could be done, but I've not thought about it in detail. IMO pvmove under a shared LV is too comp

Re: [linux-lvm] The benefits of lvmlockd over clvmd?

2018-01-10 Thread Eric Ren
Zdenek, Thanks for helping make this clearer to me :) There are a couple of fuzzy sentences - so let's try to make them clearer. The default mode for 'clvmd' is to 'share' a resource everywhere - which clearly comes from the original 'gfs' requirement and 'linear/striped' volumes that can be easily ac

Re: [linux-lvm] The benefits of lvmlockd over clvmd?

2018-01-09 Thread Eric Ren
Hi David, Thanks for your explanations! On 01/10/2018 12:06 AM, David Teigland wrote: On Tue, Jan 09, 2018 at 11:15:24AM +0800, Eric Ren wrote: Hi David, Regarding the question of the subject, I can think of three main benefits of lvmlockd over clvmd: - lvmlockd supports two cluster locking

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-09 Thread Eric Ren
Hi David, On 01/09/2018 11:42 PM, David Teigland wrote: On Tue, Jan 09, 2018 at 10:42:27AM +0800, Eric Ren wrote: I've tested your patch and it works very well.  Thanks very much. Could you please consider to push this patch upstream? OK Thanks very  much! So, can we update the `

[linux-lvm] The benefits of lvmlockd over clvmd?

2018-01-08 Thread Eric Ren
Hi David, Regarding the question of the subject, I can think of three main benefits of lvmlockd over clvmd: - lvmlockd supports two cluster locking plugins: dlm and sanlock. The sanlock plugin can support up to ~2000 nodes, which benefits LVM usage in big virtualization/storage clusters, while dlm

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-08 Thread Eric Ren
Hi David, On 01/04/2018 05:06 PM, Eric Ren wrote: David, On 01/03/2018 11:07 PM, David Teigland wrote: On Wed, Jan 03, 2018 at 11:52:34AM +0800, Eric Ren wrote: 1. on one node: lvextend --lockopt skip -L+1G VG/LV That option doesn't exist, but illustrates the point that some

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-04 Thread Eric Ren
David, On 01/03/2018 11:07 PM, David Teigland wrote: On Wed, Jan 03, 2018 at 11:52:34AM +0800, Eric Ren wrote: 1. on one node: lvextend --lockopt skip -L+1G VG/LV That option doesn't exist, but illustrates the point that some new option could be used to skip the incompatib

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-02 Thread Eric Ren
Hello David, Happy new year! On 01/03/2018 01:10 AM, David Teigland wrote: * resizing an LV that is active in the shared mode on multiple hosts It seems a big limitation to using lvmlockd in a cluster: Only in the case where the LV is active on multiple hosts at once, i.e. a cluster fs, which is

Re: [linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2018-01-02 Thread Eric Ren
" to perform lvresize command? Regards, Eric On 12/28/2017 06:42 PM, Eric Ren wrote: Hi David, I see there is a limitation on lvesizing the LV active on multiple node. From `man lvmlockd`: """ limitations of lockd VGs ... * resizing an LV that is active in the shared mode

Re: [linux-lvm] Migrate volumes to new laptop

2017-12-30 Thread Eric Ren
Hi, Not sure if you are looking for vgexport/vgimport (man 8 vgimport)? Eric On 12/21/2017 07:36 PM, Boyd Kelly wrote: Hi, I've searched hi/low and not found a howto or general suggestions for migrating volume groups from an old to new laptop.  I've found mostly server scenarios for replacin
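If vgexport/vgimport is indeed what's wanted, the flow is roughly this; the VG and mount-point names are assumptions:
```
# On the old laptop
umount /mnt/data          # unmount everything that lives on the VG
vgchange -an vg_home      # deactivate all its LVs
vgexport vg_home          # mark the VG as exported

# Move the disk(s) physically, then on the new laptop
pvscan                    # let lvm discover the moved PVs
vgimport vg_home
vgchange -ay vg_home
```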

[linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

2017-12-28 Thread Eric Ren
Hi David, I see there is a limitation on lvresizing an LV active on multiple nodes. From `man lvmlockd`: """ limitations of lockd VGs ... * resizing an LV that is active in the shared mode on multiple hosts """ It seems a big limitation to using lvmlockd in a cluster: """ c1-n1:~ # lvresize -L-1
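Within that limitation, the usual working pattern is to drop the shared activation on the other hosts, resize where the LV is active exclusively, and then re-activate; a sketch with assumed names:
```
# On every other node: release the shared lock by deactivating the LV
lvchange -an vg0/lv0

# On the resizing node: take exclusive activation, grow, give it back
lvchange -aey vg0/lv0
lvextend -L +1G vg0/lv0
lvchange -an vg0/lv0

# Re-activate in shared mode everywhere, then grow the cluster fs on top
lvchange -asy vg0/lv0
```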

[linux-lvm] lvmlockd manpage: prevent concurrent activation of logical volumes?

2017-12-28 Thread Eric Ren
Hi David, I'm afraid the statement below in the description section of the lvmlockd manpage: " · prevent concurrent activation of logical volumes " is easy for a normal user to misread as: wow, lvmlockd doesn't support active-active LVs on multiple nodes? What I interpret from it is: with clvmd, 'v

Re: [linux-lvm] Shared VG, Separate LVs

2017-11-23 Thread Eric Ren
Hi, /"I noticed you didn't configure LVM resource agent to manage your VG's (de)activation task, not sure if it can always work as expect, so have more exceptional checking :)" /              Strangely the Pacemaker active-passive configuration example shows VG controlled by Pacemaker, whi

Re: [linux-lvm] lvmlockd: how to convert lock_type from sanlock to dlm?

2017-11-20 Thread Eric Ren
David, First you'll need this recent fix: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=f611b68f3c02b9af2521d7ea61061af3709fe87c --force was broken at some point, and the option is now --lockopt force. Thanks! To change between lock types, you are supposed to be able to change to a
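With that fix, the conversion attempted in this thread roughly follows the "changing lock type" procedure in man lvmlockd; a hedged sketch, VG name assumed:
```
# On all hosts: deactivate LVs and stop the VG's lockspace
vgchange -an vg0
vgchange --lock-stop vg0

# Drop the old sanlock lock type (forced, using the option mentioned above)
vgchange --lock-type none --lockopt force vg0

# With dlm_controld and lvmlockd running, switch to dlm and start the lockspace
vgchange --lock-type dlm vg0
vgchange --lock-start vg0
```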

[linux-lvm] lvmlockd: how to convert lock_type from sanlock to dlm?

2017-11-20 Thread Eric Ren
Hello David, On my testing cluster, lvmlockd was first used with sanlock and everything was OK. After some play, I wanted to change the "sanlock" lock_type of a VG into the "dlm" lock_type. With dlm_controld running, I tried the following, but it still failed. 1. Performed the "Changing dlm

Re: [linux-lvm] Shared VG, Separate LVs

2017-11-14 Thread Eric Ren
ip_01 -- # pcs resource show -- --- Regards, Indivar Nair On Mon, Oct 16, 2017 at 8:36 AM, Eric Ren <z...@suse.com> wrote: Hi, On 10/13/2017 06:40 PM, I

Re: [linux-lvm] When and why vgs command can change metadata and incur old metadata to be backed up?

2017-11-04 Thread Eric Ren
Hi Alasdair, Very simply if the metadata the command has just read in does not match the last backup stored in the local filesystem and the process is able and configured to write a new backup. The command that made the metadata change might not have written a backup if it crashed, was configur

[linux-lvm] Does cmirror can tolerate one faulty PV?

2017-10-31 Thread Eric Ren
Hi all, I performed a fault tolerance test on a cmirrored LV in a cluster with lvm2-2.02.98. The result really surprises me: a cmirrored LV cannot continue to work after disabling one of its leg PVs. Is this a known issue? Or am I doing something wrong? The steps follow: # clvmd and cmirrord s

Re: [linux-lvm] When and why vgs command can change metadata and incur old metadata to be backed up?

2017-10-31 Thread Eric Ren
Hi all, > > Interesting. Eric, can you show the *before* and *after* vgs textual > metadata (you should find them in /etc/lvm/archive)? > Ah, I think there is no need to show the archives now. Alasdair and David have given us a very good explanation, thanks to them! Regards, Eric

[linux-lvm] When and why vgs command can change metadata and incur old metadata to be backed up?

2017-10-29 Thread Eric Ren
Hi all, Sometimes, I see the following message in the VG metadata backups under /etc/lvm/archive: """ contents = "Text Format Volume Group" version = 1 description = "Created *before* executing 'vgs'" """ I'm wondering when and why new backups will be created by a reporting command like v

Re: [linux-lvm] Shared VG, Separate LVs

2017-10-16 Thread Eric Ren
Hi, On 10/13/2017 06:40 PM, Indivar Nair wrote: Thanks Eric, I want to keep a single VG so that I can get the bandwidth (LVM striping) of all the disks (PVs) PLUS the flexibility to adjust the space allocation between both LVs. Each LV will be used by different departments. With 1 LV on d

Re: [linux-lvm] Shared VG, Separate LVs

2017-10-13 Thread Eric Ren
Hi, With CLVM / HA-LVM on a 2-node cluster - is it possible to have a shared VG but separate LVs, with each LV exclusively activated on a different node in the 2-node cluster? In case of a failure, the LV of the failed node will be activated on the other node. I think clvm can do what you want
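With clvmd, the piece that makes this work is per-LV exclusive activation, so each node owns its own LV while the VG itself stays shared; a sketch (the Pacemaker resource-agent setup is not shown; VG and LV names are assumptions):
```
# Node A
lvchange -aey vg_shared/lv_dept_a   # exclusive: activation is refused elsewhere

# Node B
lvchange -aey vg_shared/lv_dept_b

# On failover, the surviving node takes over the failed node's LV
lvchange -aey vg_shared/lv_dept_a
```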

Re: [linux-lvm] Reserve space for specific thin logical volumes

2017-09-11 Thread Eric Ren
Hi Zdenek, lvm2 is using the upstream community BZ located here: https://bugzilla.redhat.com/enter_bug.cgi?product=LVM%20and%20device-mapper You can check RHBZ easily for all lvm2 BZs (it mixes RHEL/Fedora/Upstream). We usually want to have an upstream BZ linked with the Community BZ, but sometime

Re: [linux-lvm] Reserve space for specific thin logical volumes

2017-09-11 Thread Eric Ren
Hi David and Zdenek, On 09/12/2017 01:41 AM, David Teigland wrote: [...snip...] Hi Eric, this is a good question. The lvm project has done a poor job at this sort of thing. A new homepage has been in the works for a long time, but I think stalled in the review/feedback stage. It should be un

Re: [linux-lvm] Reserve space for specific thin logical volumes

2017-09-11 Thread Eric Ren
Hi Zdenek, On 09/11/2017 09:11 PM, Zdenek Kabelac wrote: [..snip...] So don't expect the lvm2 team will be solving this - there is more prio work Sorry for interrupting your discussion, but I just cannot help asking: it's not the first time I have seen "there is more prio work", so I'm wonder

Re: [linux-lvm] clvm: failed to activate logical volumes sometimes

2017-04-20 Thread Eric Ren
h case, we need "clvmd -R" on one of the nodes. BTW, my versions: lvm2-clvm-2.02.120-72.8.x86_64 lvm2-2.02.120-72.8.x86_64 Regards, Eric 2017-04-20 10:06 GMT+02:00 Eric Ren : Hi! This issue can be replicated by the following steps: 1. setup two-node HA cluster with dlm and clvmd RAs conf

Re: [linux-lvm] clvm: failed to activate logical volumes sometimes

2017-04-20 Thread Eric Ren
in some auto scripts, it's tedious to put "clvmd -R" before some lvm commands everywhere. So, is there an option to enable a full scan every time lvm is invoked in a cluster scenario? Thanks in advance :) Regards, Eric On 04/14/2017 06:27 PM, Eric Ren wrote: Hi! In cluster en

Re: [linux-lvm] is it right to specify '-l' with all the free PE in VG when creating a thin pool?

2017-03-09 Thread Eric Ren
On 03/09/2017 07:46 PM, Zdenek Kabelac wrote: [snip] while it works when specifying '-l' this way: # lvcreate -l 100%FREE --thinpool thinpool0 vgtest Logical volume "thinpool0" created. Is this by design, or may something be wrong? I can replicate this on both: Hi Yes this is b

[linux-lvm] is it right to specify '-l' with all the free PE in VG when creating a thin pool?

2017-03-09 Thread Eric Ren
Hello, I find that creating a thin pool with all the free PEs in a VG fails, as follows: # pvs PV VG Fmt Attr PSize PFree /dev/sdb lvm2 --- 200.00g 200.00g # vgcreate vgtest /dev/sdb Volume group "vgtest" successfully created # pvdisplay --- Physical volume ---
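The reason (confirmed in the reply above) is that the pool's metadata LV also needs extents, so asking for exactly every free PE overshoots, while the percentage form lets lvcreate shrink the data size to fit. A sketch with the thread's VG name; the extent count is illustrative only:
```
# Fails: every free extent is requested for the pool data device, leaving
# no room for the pool metadata LV (extent count is illustrative)
lvcreate -l 51200 --thinpool thinpool0 vgtest

# Works: lvcreate reduces the data size so the metadata still fits
lvcreate -l 100%FREE --thinpool thinpool0 vgtest
```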

Re: [linux-lvm] Sorry to ask here ...

2017-03-01 Thread Eric Ren
Hi, On 02/24/2017 03:25 AM, Georges Giralt wrote: Hello ! I'm sorry to ask for help here, but I'm lost in the middle of nowhere ... The context: a PC with a UEFI "BIOS" set to legacy boot only, and 3 disks with 2 partitions each. The first 3 partitions are set up as a software mirror (md0)

Re: [linux-lvm] New features for using lvm on shared storage

2017-01-10 Thread Eric Ren
Hi David! On 01/10/2017 11:30 PM, David Teigland wrote: On Tue, Jan 10, 2017 at 09:02:36PM +0800, Eric Ren wrote: Hi David, Sorry for faking this reply, because I was not on the mailing list before I noticed this email (quoted below) you posted a while ago. I have a question about "

Re: [linux-lvm] New features for using lvm on shared storage

2017-01-10 Thread Eric Ren
Hi David, Sorry for faking this reply, because I was not on the mailing list before I noticed this email (quoted below) you posted a while ago. I have a question about "lvmlockd": besides the fact that clvmd cannot be used together with lvmetad, are there any other main differences between "lvmetad" and "clvmd"? Do