Hi,
The reason has been found, as you predicted.
> is it because of slow read operation (i.e. some raid arrays are known to
> wake-up slowly)
The udev worker is blocked until the timeout and is then killed; the
"blkid" in /usr/lib/udev/rules.d/13-dm-disk.rules is too slow because
of high IO load o
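One way to confirm which rule is slow (a sketch with an assumed device name; the `udevadm` option spelling varies slightly across systemd releases) is to raise udevd's log level and time the probe that 13-dm-disk.rules performs:

```shell
# Turn on verbose udev logging at runtime (spelled --log-level=debug
# on newer systemd releases).
udevadm control --log-priority=debug

# Watch kernel uevents and the corresponding udev processing; a long gap
# between the KERNEL and UDEV lines for a device points at slow rules.
udevadm monitor --kernel --udev --property

# Time the same kind of probe the rule runs (assumed device /dev/dm-25).
time blkid -o udev -p /dev/dm-25
```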
Hi,
> Hmm would it be possible that associated thin pool would be in some erroneous
> condition - i.e. out-of-space - or processing some resize ?
The testing model is:
create/mount/"do IO" on a thin LV, then umount/delete the thin LV,
doing this workflow in parallel.
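The workflow above could be sketched like this (assumed names: VG `vg0`, thin pool `pool`; not the exact script we used):

```shell
#!/bin/bash
# Run the create/mount/IO/umount/delete cycle for several thin LVs in parallel.
VG=vg0 POOL=pool WORKERS=8

worker() {
    local lv="thin_$1"
    lvcreate -V 1G --thin -n "$lv" "$VG/$POOL"
    mkfs.ext4 "/dev/$VG/$lv"
    mkdir -p "/mnt/$lv"
    mount "/dev/$VG/$lv" "/mnt/$lv"
    dd if=/dev/zero of="/mnt/$lv/data" bs=1M count=256 conv=fsync   # "do IO"
    umount "/mnt/$lv"
    lvremove -y "$VG/$lv"
}

for i in $(seq "$WORKERS"); do worker "$i" & done
wait
```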
> This likely could resul
Hi,
> Although /dev/dm-26 is visible, the device seems not to be ready in the kernel.
Sorry, it's not:
[root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup info /dev/dm-26
Device dm-26 not found
Command failed.
[root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup info /dev/dm-25
Name: vg0-21
State:
Hi,
> Since the /dev/dm-x has been created, I don't understand what it is waiting
> for udev to do?
> Does it just wait for udev rules to create the device symlinks?
Although /dev/dm-26 is visible, the device seems not to be ready in the kernel.
[root@iZuf6dbyd7ede51sykedamZ ~]# dmsetup udevcookies
Cookie Semid
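For reference, this is roughly how a stuck cookie can be inspected and, if necessary, released (a sketch; `udevcomplete_all` force-completes every outstanding cookie, so use it with care):

```shell
# List udev cookies (semaphores) that lvm/dm commands are still waiting on.
dmsetup udevcookies

# If a cookie never completes because udev killed its worker, releasing
# all outstanding cookies un-wedges the waiting command:
dmsetup udevcomplete_all
```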
Hi!
> When udev kills its worker due to timeout - so udev rules was not finished
> within predefined timeout (which unfortunately changes according to mind
> change of udev developer - so it ranges from 90seconds to 300seconds depending
> on date of release) - you need to look out for reason why t
hem here so we can check
> which rule could be suspected.
Thanks! From this, I've learned how important device filter setup is!
Regards,
Eric
--
- Eric Ren
Hi,
As the subject says, it seems to be an interaction problem between lvm and systemd-udev:
```
#lvm version
LVM version: 2.02.130(2)-RHEL7 (2015-10-14)
Library version: 1.02.107-RHEL7 (2015-10-14)
Driver version: 4.35.0
```
lvm call trace when hangs:
```
(gdb) bt
#0 0x7f7030b876a7 in semop ()
Hi,
> So do you get 'partial' error on thin-pool activation on your physical
> server ?
Yes, the VG of the thin pool has only one simple physical disk. At the
beginning, I also suspected the disk might have disconnected at that moment.
But I've started to think maybe it is caused by some reason hidden in the
in
So, we're evaluating such a solution now.
Thanks,
Eric
On Thu, 11 Apr 2019 at 19:04, Zdenek Kabelac wrote:
> Dne 11. 04. 19 v 2:27 Eric Ren napsal(a):
> > Hello list,
> >
> > Recently, we're exercising our container environment which uses lvm to
> manage
>
Hi,
> Hi,
I could recommend orienting towards a solution where the 'host' system
> provides some service for your containers - so the container asks for an action,
> the service orchestrates the action on the system - and returns the requested
> resource to
> the container.
>
Right, it's all k8s, containerd, OC
, what reasons might cause these errors?
Regards,
Eric
On Thu, 11 Apr 2019 at 08:27, Eric Ren wrote:
> Hello list,
>
> Recently, we've been exercising our container environment, which uses lvm to
> manage thin LVs; meanwhile we found a very strange error when activating the
> thin
Hello list,
Recently, we've been exercising our container environment, which uses lvm to
manage thin LVs; meanwhile we found a very strange error when activating the
thin LV:
“Aborting. LV mythinpool_tmeta is now incomplete and '--activationmode
partial' was not specified.\n: exit status 5: unknown"
cent
Hi,
My LVM version:
> ```
> #lvm version
> LVM version: 2.02.180(2)-RHEL7 (2018-07-20)
> Library version: 1.02.149-RHEL7 (2018-07-20)
> Driver version: 4.35.0
> ```
>
> I see you recently pushed this patch, looks like a fix for such problem:
>
> ```
> commit fdb6ef8a85e9adc4805202b320
Hi Marian,
I have an lvm failure below when creating a lot of thin snapshot LVs for
containers as rootfs.
```
*** buffer overflow detected ***: lvcreate terminated
=== Backtrace: ===
/lib64/libc.so.6(__fortify_fail+0x37)[0x7f192c2389e7]
/lib64/libc.so.6(+0x115b62)[0x7f192c236b62]
Hi Zdenek,
When 'stripe_size' is the 'right size' - striped device should appear
> faster,
> but telling you what's the best size is some sort of 'black magic' :)
>
> Basically - strip size should match boundaries of thin-pool chunk sizes.
>
> i.e. for thin-pool with 128K chunksize - and 2 disks
Hi,
With a single command to create a thin-pool, the metadata LV is not created
> with a striped
> target. Is this by design, or does the command just not handle
> this case very
> well for now?
>
> My main concern here is: if the metadata LV uses a striped target, can
> thin_check/thin_repair to
Hi,
As you can see, only the "mythinpool_tdata" LV has 2 stripes. Is that OK?
> If I want to benefit from stripe performance, will it work for me? Or
> should I create the data LV, metadata LV, thin pool and thin LV in a
> step-by-step way
> and specify "--stripes 2" at every step?
>
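For what it's worth, the step-by-step variant asked about might look roughly like this (hypothetical names and sizes; the key point is that each LV is created striped before being combined into the pool):

```shell
VG=vg0
# Data LV: 2 stripes, 64K stripe size (so one 128K pool chunk spans both disks).
lvcreate --type striped -i 2 -I 64k -L 100G -n pooldata "$VG"
# Metadata LV, also striped.
lvcreate --type striped -i 2 -I 64k -L 1G -n poolmeta "$VG"
# Combine them into a thin pool.
lvconvert --type thin-pool --poolmetadata "$VG/poolmeta" --chunksize 128k "$VG/pooldata"
# Thin LVs are then created from the pool as usual.
lvcreate -V 10G --thin -n thinlv "$VG/pooldata"
```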
With single comma
thin-LV*
when out of space?
Any suggestion would be very appreciated, thanks in advance!
Regards,
Eric Ren
___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
╰─$ git describe cd15fb64ee56192760ad5c1e2ad97a65e735b18b
v4.12-rc5-2-gcd15fb64ee56
"""
Eric
Warm regards,
Liwei
On 5 Feb 2018 15:27, "Eric Ren" <z...@suse.com>
wrote:
Hi,
Your LVM version and kernel version please?
like:
""
able-udev_sync
# uname -a
Linux dataserv 4.14.0-3-amd64 #1 SMP Debian 4.14.13-1 (2018-01-14)
x86_64 GNU/Linux
Warm regards,
Liwei
On 5 Feb 2018 15:27, "Eric Ren" <z...@suse.com>
wrote:
Hi,
Your LVM version and kernel version please?
like:
""
Hi,
Your LVM version and kernel version please?
like:
# lvm version
LVM version: 2.02.177(2) (2017-12-18)
Library version: 1.03.01 (2017-12-18)
Driver version: 4.35.0
# uname -a
Linux sle15-c1-n1 4.12.14-9.1-default #1 SMP Fri Jan 19 09:13:51 UTC
2018 (849a2fe) x86_64 x86_64 x8
Hi David,
IIRC, you mean we can consider using cluster raid1 as the underlying DM
target to support pvmove
in a cluster, since the current pvmove uses the mirror target?
That's what I imagined could be done, but I've not thought about it in
detail. IMO pvmove under a shared LV is too comp
Zdenek,
Thanks for helping make this clearer to me :)
There are a couple of fuzzy sentences - so let's try to make them clearer.
Default mode for 'clvmd' is to 'share' resource everywhere - which
clearly comes from original 'gfs' requirement and 'linear/striped'
volume that can be easily ac
Hi David,
Thanks for your explanations!
On 01/10/2018 12:06 AM, David Teigland wrote:
On Tue, Jan 09, 2018 at 11:15:24AM +0800, Eric Ren wrote:
Hi David,
Regarding the question in the subject, I can think of three main benefits of
lvmlockd over clvmd:
- lvmlockd supports two cluster locking
Hi David,
On 01/09/2018 11:42 PM, David Teigland wrote:
On Tue, Jan 09, 2018 at 10:42:27AM +0800, Eric Ren wrote:
I've tested your patch and it works very well. Thanks very much.
Could you please consider pushing this patch upstream?
OK
Thanks very much! So, can we update the `
Hi David,
Regarding the question in the subject, I can think of three main
benefits of lvmlockd over clvmd:
- lvmlockd supports two cluster locking plugins: dlm and sanlock. The
sanlock plugin can support up to ~2000 nodes,
which benefits LVM usage in big virtualization/storage clusters, while dlm
Hi David,
On 01/04/2018 05:06 PM, Eric Ren wrote:
David,
On 01/03/2018 11:07 PM, David Teigland wrote:
On Wed, Jan 03, 2018 at 11:52:34AM +0800, Eric Ren wrote:
1. on one node: lvextend --lockopt skip -L+1G VG/LV
That option doesn't exist, but illustrates the point that some
David,
On 01/03/2018 11:07 PM, David Teigland wrote:
On Wed, Jan 03, 2018 at 11:52:34AM +0800, Eric Ren wrote:
1. on one node: lvextend --lockopt skip -L+1G VG/LV
That option doesn't exist, but illustrates the point that some new
option could be used to skip the incompatib
Hello David,
Happy new year!
On 01/03/2018 01:10 AM, David Teigland wrote:
* resizing an LV that is active in the shared mode on multiple hosts
It seems a big limitation when using lvmlockd in a cluster:
Only in the case where the LV is active on multiple hosts at once,
i.e. a cluster fs, which is
" to perform lvresize command?
Regards,
Eric
On 12/28/2017 06:42 PM, Eric Ren wrote:
Hi David,
I see there is a limitation on resizing an LV that is active on multiple nodes.
From `man lvmlockd`:
"""
limitations of lockd VGs
...
* resizing an LV that is active in the shared mode
Hi,
Not sure if you are looking for vgexport/vgimport (man 8 vgimport)?
Eric
On 12/21/2017 07:36 PM, Boyd Kelly wrote:
Hi,
I've searched high and low and not found a howto or general suggestions for
migrating volume groups from an old laptop to a new one. I've found mostly
server scenarios for replacin
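The basic vgexport/vgimport flow for moving a VG between machines looks like this (a sketch; `myvg` is a placeholder):

```shell
# On the old laptop: deactivate the VG and mark it exported.
vgchange -an myvg
vgexport myvg
# ...physically move the disk to the new machine...

# On the new laptop: rescan, import, and activate.
pvscan
vgimport myvg
vgchange -ay myvg
```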
Hi David,
I see there is a limitation on resizing an LV that is active on multiple nodes.
From `man lvmlockd`:
"""
limitations of lockd VGs
...
* resizing an LV that is active in the shared mode on multiple hosts
"""
It seems a big limitation when using lvmlockd in a cluster:
"""
c1-n1:~ # lvresize -L-1
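The usual workaround for this limitation (a sketch, assuming the LV can be taken offline briefly; names are placeholders) is to drop to exclusive activation for the resize:

```shell
# Deactivate the LV on every host that has it active in shared mode.
lvchange -an myvg/mylv          # run on each node

# Activate exclusively on one node, resize, then deactivate again.
lvchange -aey myvg/mylv
lvresize -L +10G myvg/mylv
lvchange -an myvg/mylv

# Reactivate in shared mode wherever it is needed.
lvchange -asy myvg/mylv         # run on each node
```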
Hi David,
I'm afraid the statement below in the description section of the lvmlockd manpage:
"
· prevent concurrent activation of logical volumes
"
is easy for a normal user to misread as: wow, lvmlockd doesn't support
active-active
LVs on multiple nodes?
What I interpret from it is:
with clvmd, 'v
Hi,
"I noticed you didn't configure the LVM resource agent to manage your
VG's (de)activation task;
not sure if it can always work as expected, so do more exception
checking :)"
Strangely the Pacemaker active-passive configuration
example shows VG controlled by Pacemaker, whi
David,
First you'll need this recent fix:
https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=f611b68f3c02b9af2521d7ea61061af3709fe87c
--force was broken at some point, and the option is now --lockopt force.
Thanks!
To change between lock types, you are supposed to be able to change to a
Hello David,
On my testing cluster, lvmlockd was first used with sanlock and
everything was OK.
After some experimenting, I wanted to change the lock_type of a VG from
"sanlock" to "dlm".
With dlm_controld running, I tried the following, but it still failed.
1. Performed as the "Changing dlm
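For reference, the lock-type change procedure is roughly the following (a sketch; `myvg` is a placeholder, and the force-option spelling differs between lvm releases, as discussed elsewhere in the thread):

```shell
# Stop the VG's lockspace on every host first.
vgchange --lock-stop myvg

# Clear the old sanlock lock type (newer builds: --lockopt force;
# some older builds used --force).
vgchange --lock-type none --lockopt force myvg

# Set the new lock type and start the lockspace.
vgchange --lock-type dlm myvg
vgchange --lock-start myvg
```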
ip_01
--
# pcs resource show
--
---
Regards,
Indivar Nair
On Mon, Oct 16, 2017 at 8:36 AM, Eric Ren <z...@suse.com> wrote:
Hi,
On 10/13/2017 06:40 PM, I
Hi Alasdair,
Very simply if the metadata the command has just read in does not match
the last backup stored in the local filesystem and the process is able
and configured to write a new backup.
The command that made the metadata change might not have written a
backup if it crashed, was configur
Hi all,
I performed fault tolerance testing on a cmirrored LV in a cluster with
lvm2-2.02.98.
The result really surprised me: a cmirrored LV cannot continue to work
after one of
its leg PVs is disabled. Is this a known issue? Or am I doing something wrong?
The steps follow:
# clvmd and cmirrord s
Hi all,
>
> Interesting. Eric, can you show the *before* and *after* vgs textual
> metadata (you should find them in /etc/lvm/archive)?
>
Ah, I think there's no need to show the archives now. Alasdair and David have
given us a very good explanation - thanks to them!
Regards,
Eric
Hi all,
Sometimes, I see the following message in the VG metadata backups under
/etc/lvm/archive:
"""
contents = "Text Format Volume Group"
version = 1
description = "Created *before* executing 'vgs'"
"""
I'm wondering when and why new backups are created by a reporting
command like v
Hi,
On 10/13/2017 06:40 PM, Indivar Nair wrote:
Thanks Eric,
I want to keep a single VG so that I can get the bandwidth (LVM
Striping) of all the disks (PVs)
PLUS
the flexibility to adjust the space allocation between both LVs. Each
LV will be used by different departments. With 1 LV on d
Hi,
With CLVM / HA-LVM on a 2 node cluster -
Is it possible to have a shared VG but separate LVs, with each LV
exclusively activated on different nodes in a 2 node cluster.
In case of a failure, the LV of the failed node will be activated on
the other node.
I think clvm can do what you want
Hi Zdenek,
lvm2 is using upstream community BZ located here:
https://bugzilla.redhat.com/enter_bug.cgi?product=LVM%20and%20device-mapper
You can check RHBZ easily for all lvm2 BZs
(a mix of RHEL/Fedora/Upstream)
We usually want to have upstream BZ being linked with Community BZ,
but sometime
Hi David and Zdenek,
On 09/12/2017 01:41 AM, David Teigland wrote:
[...snip...]
Hi Eric, this is a good question. The lvm project has done a poor job at
this sort of thing. A new homepage has been in the works for a long time,
but I think stalled in the review/feedback stage. It should be un
Hi Zdenek,
On 09/11/2017 09:11 PM, Zdenek Kabelac wrote:
[..snip...]
So don't expect lvm2 team will be solving this - there are more prio
work
Sorry for interrupting your discussion, but I just cannot help asking:
it's not the first time I've seen "there are more prio work", so I'm
wonder
h case, we need "clvmd -R" on one of the nodes.
BTW, my versions:
lvm2-clvm-2.02.120-72.8.x86_64
lvm2-2.02.120-72.8.x86_64
Regards,
Eric
2017-04-20 10:06 GMT+02:00 Eric Ren :
Hi!
This issue can be replicated by the following steps:
1. setup two-node HA cluster with dlm and clvmd RAs conf
in some automation scripts, it's tedious to put "clvmd -R" before lvm commands
everywhere.
So, is there an option to enable a full scan every time lvm is invoked in a
cluster scenario?
Thanks in advance:)
Regards,
Eric
On 04/14/2017 06:27 PM, Eric Ren wrote:
Hi!
In cluster en
On 03/09/2017 07:46 PM, Zdenek Kabelac wrote:
[snip]
while it works when specifying '-l' this way:
# lvcreate -l 100%FREE --thinpool thinpool0 vgtest
Logical volume "thinpool0" created.
Is this by design? Or might something be wrong?
I can replicate this on both:
Hi
Yes this is b
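Side by side, the failing and working forms are (a sketch; the point, as I understand it, is that the pool's metadata LV needs free extents of its own, so asking for every last extent with `-L` leaves no room, while `-l 100%FREE` lets lvm reserve metadata space first):

```shell
# Fails: the requested data size consumes all free extents, leaving
# nothing for the pool's metadata LV.
lvcreate -L 200g --thinpool thinpool0 vgtest

# Works: lvm sizes the data LV to whatever remains after reserving
# metadata space.
lvcreate -l 100%FREE --thinpool thinpool0 vgtest
```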
Hello,
I find that it fails to create a thin pool with all the free PEs in a VG, as
follows:
# pvs
PV         VG   Fmt  Attr PSize   PFree
/dev/sdb        lvm2 ---  200.00g 200.00g
# vgcreate vgtest /dev/sdb
Volume group "vgtest" successfully created
# pvdisplay
--- Physical volume ---
Hi,
On 02/24/2017 03:25 AM, Georges Giralt wrote:
Hello !
I'm sorry to ask for help here, but I'm lost in the middle of nowhere ...
The context :
A PC hardware with an UEFI "BIOS" set to legacy boot only.
3 disks with 2 partitions each. The 3 first partitions are set in software mirror (md0)
Hi David!
On 01/10/2017 11:30 PM, David Teigland wrote:
On Tue, Jan 10, 2017 at 09:02:36PM +0800, Eric Ren wrote:
Hi David,
Sorry for faking this reply - I wasn't on the mailing list before I noticed
this email (quoted below), which you posted a while ago.
I have a question about "
Hi David,
Sorry for faking this reply - I wasn't on the mailing list before I noticed
this email (quoted below), which you posted a while ago.
I have a question about "lvmlockd":
Besides the fact that clvmd cannot be used together with lvmetad, are there any other
main differences between "lvmetad" and "clvmd"? Do