I don't get this bug.
I have at least 1 snapshot going on my /home partition all the time.
The VG that /home is in contains most of my partitions (26), with
2 more partitions on a separate VG (with its own PVs).
Now, I've noticed that when I am booting, it *does* take a bit of time to
bring up and mount all of the LVs, but you can see the root mount is NOT
in a VG/LV -- it's on a regular device (numbers on the left are w/kernel time
printing turned on -- so they are in seconds after boot):
[4.207621] XFS (sdc1): Mounting V4 Filesystem
[4.278746] XFS (sdc1): Starting recovery (logdev: internal)
[4.370757] XFS (sdc1): Ending recovery (logdev: internal)
[4.379839] VFS: Mounted root (xfs filesystem) on device 8:33.
..
[4.449462] devtmpfs: mounted
... last msg before my long pause, where pretty much everything
gets activated:
[4.591580] input: Dell Dell USB Keyboard as
/devices/pci:00/:00:1a.7/usb1/1-3/1-3.2/1-3.2:1.0/0003:413C:2003.0002/input/input4
[4.604588] hid-generic 0003:413C:2003.0002: input,hidraw1: USB HID v1.10
Keyboard [Dell Dell USB Keyboard] on usb-:00:1a.7-3.2/input0
[ 19.331731] showconsole (170) used greatest stack depth: 13080 bytes left
[ 19.412411] XFS (sdc6): Mounting V4 Filesystem
[ 19.505374] XFS (sdc6): Ending clean mount
more mostly unrelated messages... then you start seeing dm's mixed in
with the mounting messages -- just before kernel logging stops:
[ 22.205351] XFS (sdc2): Mounting V4 Filesystem
[ 22.205557] XFS (sdc3): Mounting V4 Filesystem
[ 22.216414] XFS (dm-5): Mounting V4 Filesystem
[ 22.217893] XFS (dm-6): Mounting V4 Filesystem
[ 22.237345] XFS (dm-1): Mounting V4 Filesystem
[ 22.245201] XFS (dm-8): Mounting V4 Filesystem
[ 22.267971] XFS (dm-13): Mounting V4 Filesystem
[ 22.293152] XFS (dm-15): Mounting V4 Filesystem
[ 22.299737] XFS (sdc8): Mounting V4 Filesystem
[ 22.340692] XFS (sdc2): Ending clean mount
[ 22.373169] XFS (sdc3): Ending clean mount
[ 22.401381] XFS (dm-5): Ending clean mount
[ 22.463974] XFS (dm-13): Ending clean mount
[ 22.474813] XFS (dm-1): Ending clean mount
[ 22.494807] XFS (dm-8): Ending clean mount
[ 22.505380] XFS (sdc8): Ending clean mount
[ 22.544059] XFS (dm-15): Ending clean mount
[ 22.557865] XFS (dm-6): Ending clean mount
[ 22.836244] Adding 8393924k swap on /dev/sdc5. Priority:-1 extents:1
across:8393924k FS
Kernel logging (ksyslog) stopped.
Kernel log daemon terminating.
-
A couple of things are different about my setup from the 'norm':
1) Since my distro (openSUSE) jumped to systemd (and I haven't), I had to
write some rc scripts to help bring up the system.
2) One reason for this was that my /usr partition is separate from root, and
my distro decided to move many libs/bins to /usr and leave symlinks on the
root device pointing to the programs in /usr. One of those was 'mount' (and
its associated libs). That meant that once the rootfs was booted I had no way
to mount /usr, where most of the binaries are. (I asked why they didn't do it
the safe way and move most of the binaries to /bin and /lib64 and put symlinks
in /usr, but they evaded answering that question for ~2 years.) So one script
I run after updating my system is a dependency checker that checks mount
orders and tries to make sure that early-mounted disks don't have
dependencies on later-mounted disks.
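To illustrate the idea, here is a minimal sketch of that kind of mount-order
check (not my actual script -- the check_order name and the awk one-liner are
just for this example). It reads mount points, one per line, in the order
they get mounted, and warns whenever a mount point's parent path comes later:

```shell
#!/bin/sh
# Sketch only: flag mount points whose parent directory is mounted after them.
check_order() {
    awk '
    { mp[NR] = $0 }
    END {
        for (i = 1; i <= NR; i++)
            for (j = i + 1; j <= NR; j++)
                # mp[j] "/" prefixing mp[i] means mp[i] lives under mp[j],
                # yet mp[j] is mounted later: a dependency violation.
                if (index(mp[i], mp[j] "/") == 1)
                    printf "WARN: %s mounted before its parent %s\n", mp[i], mp[j]
    }'
}

# demo: /usr/share listed before /usr should be flagged
printf '/usr/share\n/usr\n/home\n' | check_order
```

A real checker would also chase binary/library dependencies (the /usr symlink
problem above), but the ordering test alone catches the worst surprises.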
3) Adding to my problem is that I don't use an initrd to boot. I boot
directly from my hard disk. My distro folks thought they had solved the
problem by hiding the mount of /usr in the initrd, so when they start systemd
to control the boot, it is happy. But since I boot from HD, I was told my
~15-year-old configuration was no longer supported. Bleh!
One thing that might account for speed diffs is that I don't wait for
udev to start my VGs ... and here is where I think I see my ~15-second
pause:
if test -d /etc/lvm -a -x /sbin/vgscan -a -x /sbin/vgchange ; then
    # Waiting for udev to settle
    if [ $LVM_DEVICE_TIMEOUT -gt 0 ] ; then
        echo Waiting for udev to settle...
        /sbin/udevadm settle --timeout=$LVM_DEVICE_TIMEOUT
    fi
    echo Scanning for LVM volume groups...
    /sbin/vgscan --mknodes
    echo Activating LVM volume groups...
    /sbin/vgchange -a y $LVM_VGS_ACTIVATED_ON_BOOT
    mount -c -a -F
    ...
So at the point where I have the pause, I'm doing vgscan and vgchange, then
a first shot at mounting everything (it was the easiest thing to fix/change).
Without that mount-all attempt in my 4th boot script to execute -- boot.lvm --
I often had long timeouts in the boot process. But as you can see, I
tell mount to go fork (-F) and try to mount all the FS's at the same time.
I'm pretty sure that's where the pause is, given that right after the pause,
XFS starts issuing messages about dm's being mounted.
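For what it's worth, this is roughly how I eyeball where the pause is: scan a
saved kernel log for the largest jump between the bracketed timestamps. A
quick sketch (biggest_gap is just a name for this example, and it assumes the
[seconds-since-boot] prefix shown above):

```shell
#!/bin/sh
# Sketch: find the largest gap between consecutive kernel-log timestamps.
biggest_gap() {
    awk -F'[][]' '
    NF >= 2 {
        t = $2 + 0                    # strip brackets/padding, get seconds
        if (prev != "" && t - prev > max) { max = t - prev; at = t }
        prev = t
    }
    END { printf "largest gap: %.2fs, ending at [%.2f]\n", max, at }'
}

# demo with the timestamps from the log above
printf '[4.604588] usb kbd\n[ 19.331731] showconsole\n[ 19.412411] XFS\n' |
    biggest_gap
```

Run against the full log, the winner is the ~15s stretch between the USB
keyboard message and showconsole, which lines up with the vgscan/vgchange
phase.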
Somewhere around script #8 is my distro's localfs mounts -- but for me,
that was way too late, since many of the boot utils used not only
/usr, but /usr/share (another partition after