Bug#797800: lvm2 configure.in, librt.so vs librt.pc

2015-09-03 Thread Peter Rajnoha
On 09/02/2015 07:35 PM, Andreas Henriksson wrote:
> Hello Alasdair G Kergon.
> 
> I'm mailing you because of an issue I've run into which I think comes
> from your commit:
> https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=3cd644aeb5cac432a92ad50584973a3430168ed6
> 
> On Debian there's librt.so but no librt.pc.
> 

I think the librt.pc file is not present at all (at least
not in the glibc upstream repo).

I've patched this so that -lrt is now stated under
Libs.private:

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=fc35b6988d65fe3f11a3f668cd8f01fe3294b562

There's also -lm, which is used as well but was not
mentioned in devmapper.pc.
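
For illustration, the relevant part of a .pc file with such private link
dependencies would look roughly like this (a sketch only - the actual
devmapper.pc in the commit above may list further libraries and use
different version numbers and paths):

  Name: devmapper
  Description: device-mapper library
  Version: 1.02.x
  Cflags: -I${includedir}
  Libs: -L${libdir} -ldevmapper
  Libs.private: -lm -lrt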

-- 
Peter



Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Peter Rajnoha
Just noticed this option is not yet documented!

I've filed a report for the udev maintainers to mention this in the
man page and describe it a bit, since it's quite important and yet it's
hidden functionality if not documented:

https://bugzilla.redhat.com/show_bug.cgi?id=1247210





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Peter Rajnoha
On 07/27/2015 04:12 PM, Peter Rajnoha wrote:
> It's the OPTIONS+="db_persist" that needs to be used in the initramfs
> for MD devices. This marks the udev db records related to the device with
> a sticky bit, which is then recognized by the udev code, and the udev
> db state is not cleaned up in that case:

For example, dracut (the initramfs environment also used on RH systems)
has these rules to handle MD devices (they include the OPTIONS+="db_persist"):

https://github.com/haraldh/dracut/blob/master/modules.d/90mdraid/59-persistent-storage-md.rules
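
A minimal sketch of the kind of rule involved (the match keys in dracut's
actual file are more elaborate; only the OPTIONS assignment matters here):

  SUBSYSTEM=="block", KERNEL=="md*", ACTION=="add|change", OPTIONS+="db_persist"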

If you already use this in Debian and it doesn't work, it must be
a regression in some version of udev, as I've already gone through
this with Harald Hoyer and Kay Sievers, who maintain udev.

Simply put, this is the correct sequence that should be used:

initramfs:
 - udev running in initramfs
 - mark records with OPTIONS+="db_persist" for devices that require it
   (currently MD and DM)
 - udev in initramfs stopped
 - udev database copied from initramfs to root fs

--- switch to root fs ---

 - udev running in root fs
 - udevadm info --cleanup-db (this keeps the records marked in the
   initramfs with the db_persist flag)
 - udevadm trigger --action=add for the coldplug
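
A rough shell sketch of the root fs side of that sequence (init-system
specifics and udev database paths vary; this only illustrates the udevadm
calls listed above):

  # after switching to the root fs, with udevd running again:
  udevadm info --cleanup-db      # drops everything except db_persist-marked records
  udevadm trigger --action=add   # coldplug: replay "add" events for existing devices
  udevadm settle                 # wait for the triggered events to be processed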

-- 
Peter





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Peter Rajnoha
On 07/27/2015 03:57 PM, Peter Rajnoha wrote:
> That's how it was supposed to work. I can imagine the problematic
> part here may be the transfer of the udev database state from initramfs
> to root fs - there is a special way that udev uses to mark devices
> so that the udev db state is kept from initramfs - I need to recall
> that/check that because I don't remember that method right now...
> 

It's the OPTIONS+="db_persist" that needs to be used in the initramfs
for MD devices. This marks the udev db records related to the device with
a sticky bit, which is then recognized by the udev code, and the udev
db state is not cleaned up in that case:

https://github.com/systemd/systemd/blob/master/src/udev/udevadm-info.c#L220

(the udevadm info --cleanup-db code - the records marked with the sticky bit persist)

So once this udev db state is properly handed over from the initramfs to
the root fs, the rules in 69-dm-lvm-metad.rules should work, as they use
IMPORT{db}="LVM_MD_PV_ACTIVATED" to retrieve the state from previous runs,
and this should then fire pvscan properly on coldplug:

  IMPORT{db}="LVM_MD_PV_ACTIVATED"
  ACTION=="add", ENV{LVM_MD_PV_ACTIVATED}=="1", GOTO="lvm_scan"
-- 
Peter





Bug#791869: lvm2: updating src:lvm2 from 2.02.111-2.2 to 2.02.122-1 breaks booting, mounting LVs other than / fails

2015-07-27 Thread Peter Rajnoha
On 07/25/2015 09:34 PM, Bastian Blank wrote:
> Hi Peter
> 
> Currently I think that all these problems are related to missing or
> broken pvscan --cache calls.
> 
> I found one problematic case regarding coldplug; I believe Redhat no
> longer uses this code path.  In none of my tests does the "artificial" add
> event trigger pvscan as it should.  The udev rules test for
> LVM_MD_PV_ACTIVATED, which is never set in this case.

The MD here is very similar to DM in the way it is activated -
the MD device is created first (the ADD event) and then initialized
(the CHANGE event).

So we're expecting the CHANGE event, with the md/array_state sysfs
attribute appearing, to declare the MD as initialized (and hence mark it
with LVM_MD_PV_ACTIVATED=1).
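
For instance, once the array is initialized, that attribute is readable in
sysfs (the device name and the reported state are just illustrative):

  cat /sys/block/md0/md/array_state
  clean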

When this MD activation/initialization happens in the initramfs, the udev
database state for the MD device needs to be transferred over from the
initramfs to the root fs.

We're always doing IMPORT{db} for the LVM_MD_PV_ACTIVATED variable
so the rules can check whether the MD device is ready to use or not.

When switching to the root fs and the coldplug is done, the ADD event
is generated for the MD device - when we have the ADD event and at the
same time LVM_MD_PV_ACTIVATED=1, we know this is the artificial event
(the "coldplug" one) and we jump to the pvscan in that case.
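
In rule terms, the check described above boils down to the two lines quoted
elsewhere in this thread from 69-dm-lvm-metad.rules (comments added here
only for illustration):

  # pull the flag stored by a previous (initramfs) run back out of the udev db
  IMPORT{db}="LVM_MD_PV_ACTIVATED"
  # an "add" event on an already-activated MD PV is the artificial coldplug one
  ACTION=="add", ENV{LVM_MD_PV_ACTIVATED}=="1", GOTO="lvm_scan"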

That's how it was supposed to work. I can imagine the problematic
part here may be the transfer of the udev database state from initramfs
to root fs - there is a special way that udev uses to mark devices
so that the udev db state is kept from initramfs - I need to recall
that/check that because I don't remember that method right now...

-- 
Peter





Bug#792002: lvm2-monitor service causes long delay at boot (encrypted root/swap)

2015-07-10 Thread Peter Rajnoha
On 07/10/2015 01:48 AM, Josh Triplett wrote:
> Package: lvm2
> Version: 2.02.122-1
> Severity: grave
> File: /lib/systemd/system/lvm2-monitor.service
> 
> On a laptop with encrypted root and swap, I now get a minutes-long delay at
> boot time, due to lvm2-monitor.  Here's the complete set of messages at boot
> (transcribed from a photo of the screen):
> 
> Loading, please wait...
>   /run/lvm/lvmetad.socket: connect failed: No such file or directory
>   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
>   Volume group "data" not found
>   Cannot process volume group data
> Unable to find LVM volume data/root
>   /run/lvm/lvmetad.socket: connect failed: No such file or directory
>   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
>   Volume group "data" not found
>   Cannot process volume group data
> Unable to find LVM volume data/swap
> Please unlock disk sda2_crypt:
>   /run/lvm/lvmetad.socket: connect failed: No such file or directory
>   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
>   Reading all physical volumes.  This may take a while...
>   Found volume group "data" using metadata type lvm2
>   /run/lvm/lvmetad.socket: connect failed: No such file or directory
>   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
>   2 logical volume(s) in volume group "data" now active
> cryptsetup: sda2_crypt set up successfully
> fsck from util-linux 2.26.2
> /dev/mapper/data-root: clean, [...]
> [  ] A start job is running for Monitoring of LVM2 mirrors, snapshots 
> etc. using dmeventd or progress polling (59s / no limit)
> 
> 
> That last line matches the description in lvm2-monitor.service.
> 
> (The preceding lvm2 errors may or may not be related.  The recurring
> two lines of lvmetad errors are new, as is the long delay on
> lvm2-monitor.service; the errors before unlocking the disk about not
> finding data/root and data/swap occurred with previous versions of
> lvm2.)

When the initrd is generated, the existing lvm.conf needs to be modified
so the configuration is suitable for the initrd environment (where, I
suppose, the lvmetad daemon is not running). So the script generating
the initrd needs to modify lvm.conf so that use_lvmetad=0 is used.
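
A minimal sketch of that setting for the lvm.conf copy embedded in the
initrd (only the relevant option is shown; it lives in the global section):

  global {
      # no lvmetad inside the initrd, so fall back to direct scanning
      use_lvmetad = 0
  }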

That lvmetad is not running in the initrd is, I suppose, exactly the
case here - running lvmetad (the LVM metadata caching daemon) in the
initrd is not very useful anyway, as lvmetad would need to be started
again after switching to the root fs.

So that's probably the first problem here.

If those "falling back to internal scanning" messages appear
even after switching to root fs, please check if lvm2-lvmetad.socket
is enabled (so it can instantiate lvm2-lvmetad.service on
first socket access):

systemctl status lvm2-lvmetad.socket

If it's disabled, then the distro needs to make sure it's always enabled,
so that whenever use_lvmetad=1 is used, lvm2-lvmetad.service can be
instantiated automatically.
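
Enabling and starting the socket unit should be enough in that case
(assuming a systemd-based system, as above):

  systemctl enable lvm2-lvmetad.socket
  systemctl start lvm2-lvmetad.socket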

Let's check these things first before debugging the lvm2-monitor.service
delay...

-- 
Peter





Bug#599596: /sbin/lvcreate: Does not clean up semaphore arrays after use

2010-10-12 Thread Peter Rajnoha
On 10/11/2010 12:24 PM +0100, Sam Morris wrote:
> It appears that each lvcreate/lvremove operation creates four cookies,
> but 'dmsetup udevcomplete' is run only once, thereby leaking three
> cookies per operation.

Can you run that lvcreate/lvremove again with "-" debug messages,
together with the udevd debug log and the output of "dmsetup udevcookies"
at the end of the run (making sure there are no cookies in the system
before the run, of course)? This way we should be able to see and pair
the cookies that were processed with the ones that remained. We should
also see at which stage of the lvm command execution the leftover cookie
is actually created. Thanks.
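
A rough outline of what to capture (the exact lvm debug switch is the one
elided above; dmsetup udevcookies lists the outstanding cookies and ipcs -s
shows the semaphore arrays behind them):

  dmsetup udevcookies     # should list nothing before the run
  lvcreate ...            # the problematic commands, with debug enabled,
  lvremove ...            # while also collecting the udevd debug log
  dmsetup udevcookies     # cookies still listed here are the leaked ones
  ipcs -s                 # the semaphore arrays backing those cookies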



