Re: [DNG] Upgrade problem [ ascii -> beowulf ] failed to boot, left at initramfs shell -- with fix and query

2020-07-06 Thread Hendrik Boom
On Tue, Jul 07, 2020 at 02:00:38AM +1000, Andrew McGlashan via Dng wrote:
> Hi,
> 
> I had another "simple" server upgrade from Devuan Ascii to Devuan Beowulf;
> these are the details and my workaround for the problem.
> 
> 
> There was nothing particularly special about this server; it doesn't use
> encrypted file systems. It started out life as a Debian Wheezy installation,
> migrated to Devuan Jessie, later to Devuan Ascii and now to Beowulf.
> 
> 
> The server has /boot on its own RAID1 partition; another RAID1 volume
> covering the rest of the disk is an LVM2 volume group with a number of
> logical volumes for root, swap, /usr/, /var/, /home/ and more.

Sounds just like my configuration.

> 
> 
> After the dist-upgrade, it failed to boot and remained at the initramfs
> shell environment after having complained about not being able to find the
> /usr file system via its UUID.
> 
> There was another error as well, which was fixed by raising the RUNSIZE
> variable in /etc/initramfs-tools/initramfs.conf from 10% to 25%: the
> boot-up scripts were unable to find "rm" before dropping to the initramfs
> shell.
> 
> 
> Once at the initramfs prompt after fixing the first problem, I was able to do 
> the following:
> 
> (initramfs) lvm
> 
> lvm> vgchange -ay
> 
> lvm> exit
> 
> (initramfs) exit
> 
> 
> And then the server would continue to boot properly.
> 
> 
> _The second fix, which I consider to be "clunky", was to adjust the
> /usr/share/initramfs-tools/scripts/local-top/lvm2 file, adding a line near
> the bottom as highlighted:_
> 
> activate "$ROOT"
> *activate "/dev/mapper/vg0-usr"*
> activate "$resume"
> 
> 
> Then I rebuilt the initramfs in the usual way.
> 
> update-initramfs -u -k all
> 
> 
> The original lvm2 script specifically activated only the root file system
> (/dev/mapper/vg0-root), even though /usr (/dev/mapper/vg0-usr) was a
> separate file system in the exact same volume group, which stopped the boot
> from succeeding as expected.
> 
> Other volumes come online okay in due course.
> 
> 
> All was good with subsequent reboots.
> 
> 
> Now, kludge or clunky, this was required because the /usr file system was
> [and continues to be] separate from the root file system, and the initramfs
> only cared to enable the root file system, leaving all other logical
> volumes "NOT AVAILABLE", including /usr, which was definitely required!
> 
> 
> Have I fixed this appropriately, or should I fix it some other way?
> 

Doesn't systemd require a merged /usr partition?  It sounds as if a 
systemd-ism has crept into our boot process.

Fortunately I haven't upgraded my server to beowulf yet.

-- hendrik
___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


[DNG] Upgrade problem [ ascii -> beowulf ] failed to boot, left at initramfs shell -- with fix and query

2020-07-06 Thread Andrew McGlashan via Dng
Hi,

I had another "simple" server upgrade from Devuan Ascii to Devuan Beowulf;
these are the details and my workaround for the problem.


There was nothing particularly special about this server; it doesn't use
encrypted file systems. It started out life as a Debian Wheezy installation,
migrated to Devuan Jessie, later to Devuan Ascii and now to Beowulf.


The server has /boot on its own RAID1 partition; another RAID1 volume
covering the rest of the disk is an LVM2 volume group with a number of
logical volumes for root, swap, /usr/, /var/, /home/ and more.
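
Roughly, that layout looks like this (the md device names and the swap/var/home
volume names below are illustrative; vg0-root and vg0-usr are the real ones
referenced later):

  /dev/mdX               -> /boot  (RAID1)
  /dev/mdY               -> RAID1, LVM2 PV for volume group vg0
    /dev/mapper/vg0-root -> /
    /dev/mapper/vg0-usr  -> /usr
    /dev/mapper/vg0-swap, vg0-var, vg0-home, ...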


After the dist-upgrade, it failed to boot and remained at the initramfs shell
environment after having complained about not being able to find the /usr
file system via its UUID.

There was another error as well, which was fixed by raising the RUNSIZE
variable in /etc/initramfs-tools/initramfs.conf from 10% to 25%: the boot-up
scripts were unable to find "rm" before dropping to the initramfs shell.
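
In other words, that part of the fix is a one-line change in
/etc/initramfs-tools/initramfs.conf (RUNSIZE sets the size of the /run tmpfs
used inside the initramfs; everything else in the file stays as shipped):

RUNSIZE=25%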


Once at the initramfs prompt after fixing the first problem, I was able to do 
the following:

(initramfs) lvm

lvm> vgchange -ay

lvm> exit

(initramfs) exit


And then the server would continue to boot properly.


_The second fix, which I consider to be "clunky", was to adjust the
/usr/share/initramfs-tools/scripts/local-top/lvm2 file, adding a line near
the bottom as highlighted:_

activate "$ROOT"
*activate "/dev/mapper/vg0-usr"*
activate "$resume"


Then I rebuilt the initramfs in the usual way.

update-initramfs -u -k all


The original lvm2 script specifically activated only the root file system
(/dev/mapper/vg0-root), even though /usr (/dev/mapper/vg0-usr) was a separate
file system in the exact same volume group, which stopped the boot from
succeeding as expected.

Other volumes come online okay in due course.


All was good with subsequent reboots.


Now, kludge or clunky, this was required because the /usr file system was
[and continues to be] separate from the root file system, and the initramfs
only cared to enable the root file system, leaving all other logical volumes
"NOT AVAILABLE", including /usr, which was definitely required!
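
For what it's worth, the same activation could probably also be done from a
local hook script rather than by editing the packaged file, so it would
survive future package upgrades. A rough, untested sketch along those lines
(the file name is made up; it just mirrors the manual vgchange from the
initramfs prompt):

#!/bin/sh
# Hypothetical /etc/initramfs-tools/scripts/local-top/activate_vg0 (sketch)
# Declares the packaged lvm2 script as a prerequisite so this runs after it,
# then activates every logical volume in vg0, like "lvm> vgchange -ay" above.
PREREQ="lvm2"
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac

lvm vgchange -ay vg0 || true

(made executable, then the same update-initramfs -u -k all as before)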


Have I fixed this appropriately, or should I fix it some other way?


Kind Regards
AndrewM




___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng