[Touch-packages] [Bug 1423796] Re: Unable to mount lvmcache root device at boot time

2016-11-27 Thread Astara
@George:
George Moutsopoulos (gmoutso) wrote on 2015-04-29: 
> OK, I managed! I had to add more modules
> 
> sudo echo "dm_cache" >> /etc/initramfs-tools/modules
> sudo echo "dm_cache_mq" >> /etc/initramfs-tools/modules
> sudo echo "dm_persistent_data" >> /etc/initramfs-tools/modules
> sudo echo "dm_bufio" >> /etc/initramfs-tools/module
---
Not exactly related to the original bug, but I wanted to address a boo-boo
in the above.

sudo echo "string" won't use root's permissions to write to the file when
the shell-redirection operators '>' or '>>' are involved. sudo will execute
'echo' with root privs, and that will happily echo "dm_cache" (et al.);
however, the output is redirected by the *SHELL* (running as whoever is
running the shell) -- usually the user -- so the write to a root-owned
file fails.

To get around this problem (and I'm not saying there aren't easier ways;
this seemed a bit roundabout for what I wanted, which was to have root
write the output to "wherever"), I used "dd", as in:

echo "dm_cache"|sudo dd of=destination status=none

 or to append:

echo "dm_cache"|sudo dd status=none oflag=append conv=notrunc of=/etc
/initramfs-tools/modules

 or to do all at once:

(echo dm_cache
echo dm_cache_mq
echo dm_persistent_data
echo dm_bufio)|sudo dd status=none oflag=append conv=notrunc of=/etc/initramfs-tools/modules
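
(For completeness, two other common ways to get root to do the write,
assuming GNU coreutils and a POSIX shell; either way, the initrd still has
to be rebuilt afterwards:)

 # run the whole command line, redirection included, as root:
 sudo sh -c 'echo dm_cache >> /etc/initramfs-tools/modules'

 # or let tee (running under sudo) open the file; -a appends:
 echo dm_cache | sudo tee -a /etc/initramfs-tools/modules >/dev/null

 # then regenerate the initramfs so the new module list takes effect:
 sudo update-initramfs -u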

For what it's worth, what I originally wanted was to write a single "char"
to a file in /proc/sys/vm (drop_caches).
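
That works out to something like this (writing "3" drops the page cache
plus dentries and inodes):

 echo 3 | sudo dd of=/proc/sys/vm/drop_caches status=none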

Perhaps needless to say, I put it in a script file to save on typing.
:-)


Title:
  Unable to mount lvmcache root device at boot time

Status in initramfs-tools package in Ubuntu:
  Confirmed
Status in linux package in Ubuntu:
  Confirmed
Status in lvm2 package in Ubuntu:
  Confirmed

Bug description:
  I'm experimenting with Vivid Vervet on a virtual machine and tried a
  new LVM feature, lvmcache. I made a cache for the root file system,
  rebuilt the initrd and rebooted the VM.

  At boot time, the system failed to activate the root LV. After some
  investigation, I found out it's because the initrd is missing some
  essential stuff needed for activating a cached LV.

  The initrd was missing the dm-cache module. I regenerated the initrd
  with explicitly listing dm-cache in /etc/initramfs-tools/modules, but
  the system still can't boot up, because now it is missing the
  /usr/sbin/cache_check utility.
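
  (In the meantime, the usual manual workaround is an initramfs-tools
  hook script that pulls the binary in; a sketch, assuming the stock
  copy_exec helper and that cache_check is installed in /usr/sbin:

    #!/bin/sh
    # /etc/initramfs-tools/hooks/cache_check  -- mark it executable
    PREREQ=""
    prereqs() { echo "$PREREQ"; }
    case "$1" in prereqs) prereqs; exit 0 ;; esac
    . /usr/share/initramfs-tools/hook-functions
    # copy_exec copies the binary plus the libraries it links against
    copy_exec /usr/sbin/cache_check /usr/sbin

  followed by the usual update-initramfs -u.)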

  As SSDs are becoming more and more common, I think it will be common
  to use them as cache for root file systems, thus it is mandatory to
  make sure that an initrd can mount an lvmcached root device when
  necessary, preferably without /etc/initramfs-tools/modules and other
  manual initrd hacking.

  System details:

  Linux lvmvm 3.18.0-13-generic #14-Ubuntu SMP Fri Feb 6 09:55:14 UTC
  2015 x86_64 x86_64 x86_64 GNU/Linux

  Distributor ID:   Ubuntu
  Description:  Ubuntu Vivid Vervet (development branch)
  Release:  15.04
  Codename: vivid

  LVM version: 2.02.111(2) (2014-09-01)
  Library version: 1.02.90 (2014-09-01)
  Driver version:  4.28.0


[Touch-packages] [Bug 1396213] Re: LVM VG is not activated during system boot

2015-03-20 Thread Astara
I don't get this bug.

I have at least 1 snapshot going on my /home partition all the time.

The VG that /home is in contains most of my partitions (26), with
2 more partitions on a separate VG (with its own physical devices).

Now, I've noticed that when I'm booting it *does* take a bit of time to
bring up and mount all of the LVs, but as you can see, the root mount is
NOT in a VG/LV -- it's on a regular device (numbers on the left are with
kernel time printing turned on, so they are seconds after boot):

[4.207621] XFS (sdc1): Mounting V4 Filesystem
[4.278746] XFS (sdc1): Starting recovery (logdev: internal)
[4.370757] XFS (sdc1): Ending recovery (logdev: internal)
[4.379839] VFS: Mounted root (xfs filesystem) on device 8:33.
..
[4.449462] devtmpfs: mounted
... last msg before my long pause, where pretty much everything
gets activated:
[4.591580] input: Dell Dell USB Keyboard as 
/devices/pci0000:00/0000:00:1a.7/usb1/1-3/1-3.2/1-3.2:1.0/0003:413C:2003.0002/input/input4
[4.604588] hid-generic 0003:413C:2003.0002: input,hidraw1: USB HID v1.10 
Keyboard [Dell Dell USB Keyboard] on usb-0000:00:1a.7-3.2/input0
[   19.331731] showconsole (170) used greatest stack depth: 13080 bytes left
[   19.412411] XFS (sdc6): Mounting V4 Filesystem
[   19.505374] XFS (sdc6): Ending clean mount
 more mostly unrelated messages... then you start seeing dm's mixed in
with the mounting messages -- just before kernel logging stops:

[   22.205351] XFS (sdc2): Mounting V4 Filesystem
[   22.205557] XFS (sdc3): Mounting V4 Filesystem
[   22.216414] XFS (dm-5): Mounting V4 Filesystem
[   22.217893] XFS (dm-6): Mounting V4 Filesystem
[   22.237345] XFS (dm-1): Mounting V4 Filesystem
[   22.245201] XFS (dm-8): Mounting V4 Filesystem
[   22.267971] XFS (dm-13): Mounting V4 Filesystem
[   22.293152] XFS (dm-15): Mounting V4 Filesystem
[   22.299737] XFS (sdc8): Mounting V4 Filesystem
[   22.340692] XFS (sdc2): Ending clean mount
[   22.373169] XFS (sdc3): Ending clean mount
[   22.401381] XFS (dm-5): Ending clean mount
[   22.463974] XFS (dm-13): Ending clean mount
[   22.474813] XFS (dm-1): Ending clean mount
[   22.494807] XFS (dm-8): Ending clean mount
[   22.505380] XFS (sdc8): Ending clean mount
[   22.544059] XFS (dm-15): Ending clean mount
[   22.557865] XFS (dm-6): Ending clean mount
[   22.836244] Adding 8393924k swap on /dev/sdc5.  Priority:-1 extents:1 across:8393924k FS
Kernel logging (ksyslog) stopped.
Kernel log daemon terminating.
-
A couple of things are different about my setup from the 'norm' --
1) since my distro (openSUSE) jumped to systemd (and I haven't), I had to
write some rc scripts to help bring up the system.
2) one reason for this was that my /usr partition is separate from root,
and my distro decided to move many libs/bins to /usr and leave symlinks on
the root device pointing to the programs in /usr. One of those was 'mount'
(and its associated libs).

That meant that once the rootfs was booted, I had no way to mount /usr,
where most of the binaries are. (I asked why they didn't do it the safe
way -- move most of the binaries to /bin and /lib64 and put symlinks in
/usr -- but they evaded answering that question for ~2 years.) So one
script I run after updating my system is a dependency checker that checks
mount order and tries to make sure that early-mounted disks don't have
dependencies on later-mounted disks.
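
The core of that check is simple; a sketch of the idea (hypothetical,
plain POSIX sh -- not the actual script):

 #!/bin/sh
 # Warn when an fstab entry's mount point sits inside the subtree of
 # a mount point that appears LATER in /etc/fstab (i.e., it would be
 # mounted before its parent filesystem is in place).
 mps=$(awk '$1 !~ /^#/ && NF >= 2 { print $2 }' /etc/fstab)
 i=0
 for a in $mps; do
   i=$((i + 1)); j=0
   for b in $mps; do
     j=$((j + 1))
     # $a is mounted before $b, yet $a lives inside $b's subtree
     if [ "$j" -gt "$i" ] && [ "${a#"$b"/}" != "$a" ]; then
       echo "WARN: $a (entry $i) is under $b (entry $j)" >&2
     fi
   done
 done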

3) adding to my problem is the fact that I don't use an initrd to boot --
I boot straight from my hard disk. My distro folks thought they had solved
the problem by hiding the mount of /usr in the initrd, so when they start
systemd to control the boot, it is happy. But if you boot from HD, I was
told, my ~15-year-old configuration was no longer supported. Bleh!

One thing that might account for speed diffs is that I don't wait for
udev to start my VGs ... and here is where I think I see my ~15-second
pause:

 if test -d /etc/lvm -a -x /sbin/vgscan -a -x /sbin/vgchange ; then
   # Waiting for udev to settle
   if [ $LVM_DEVICE_TIMEOUT -gt 0 ] ; then
     echo Waiting for udev to settle...
     /sbin/udevadm settle --timeout=$LVM_DEVICE_TIMEOUT
   fi
   echo Scanning for LVM volume groups...
   /sbin/vgscan --mknodes
   echo Activating LVM volume groups...
   /sbin/vgchange -a y $LVM_VGS_ACTIVATED_ON_BOOT
   mount -c -a -F
...
So at the point where I have a pause, I'm doing vgscan and vgchange, then
a first shot at mounting everything (it was the easiest thing to
fix/change).

Without that mount-all attempt in boot.lvm (the 4th boot script to
execute), I often had long timeouts in the boot process. But as you can
see, I tell mount to go fork (-F) and try to mount all FSes at the same
time. I'm pretty sure that's where the pause is, given that right after
the pause XFS starts issuing messages about dm's being mounted.
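
For reference (assuming the util-linux mount(8) flags), that last line
decodes as:

 mount -c -a -F
 #     |  |  '-- fork one mount per filesystem, so -a runs in parallel
 #     |  '----- mount everything listed in /etc/fstab
 #     '-------- don't canonicalize paths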

Somewhere around script #8 come my distro's localfs mounts -- but for me,
that was way too late, since many of the boot utils used not only
/usr, but /usr/share (another partition after