General update on my situation: Just before knocking off for the day, I tried telling the installer to finish up without installing a boot loader, then used the netinst USB stick as a rescue disk to boot my installed system. From there, I was able to use the regular grub-install program (instead of the install environment's grub-installer script).
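(For the record, the sequence was roughly the following, reconstructed from memory; the md device number and mount point are illustrative rather than exact:)

    # boot the netinst in rescue mode, then mount the installed system
    mount /dev/mdX /mnt                  # whichever md array holds the root fs
    mount /dev/nvme0n1p1 /mnt/boot/efi   # one of the (non-RAIDed) EFI partitions
    for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
    chroot /mnt
    # inside the chroot, run the real grub-install and regenerate grub.cfg
    grub-install --target=x86_64-efi --efi-directory=/boot/efi
    update-grub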
grub-install ran with no errors and I now have grub in the (separate, non-RAIDed) EFI partitions of both nvme drives... but the system still isn't directly bootable. Trying to boot directly from nvme just drops me into a grub command line.

Thinking this might be an initrd or similar issue, I tried reinstalling the kernel image and grub-efi-amd64 packages so that their postinst scripts would rebuild the initrd and reinstall grub to the drives, but that had no effect. I still get only the grub shell when booting from nvme.

Any tips on using the grub shell to make further progress, such as getting the system to boot in non-rescue mode (i.e., not chrooted from the installer)? The help available within the grub shell itself isn't terribly useful, because it scrolls off the screen with no (obvious) pager or scrollback buffer. Alternately, suggestions for things I can try in the chroot environment would also be welcome, since that's considerably less restrictive than the installer environment I was trying to work within yesterday.

I guess the most obvious explanation for the current grub issue is that grub isn't smart enough to boot an mdadm filesystem directly and I need to repartition with a non-RAID /boot, but I don't consider that a desirable solution, since it would leave me with no /boot if the device holding that partition dies.

(No new text below this point, just my original post for context.)

On Tue, Mar 02, 2021 at 05:57:37AM -0600, Dave Sherohman wrote:
> I've got a new server and am currently fighting with the Debian 10
> installer (build 20190702) in my attempts to get it up and running.
> After much wailing and gnashing of teeth, I managed to get it to stop
> complaining about being unable to mount /boot/efi and complete the
> "Partition disks" step successfully, but now I'm completely stuck on the
> "Install GRUB" step.
> 
> The GUI installer shows the error:
> 
>     Unable to install GRUB in dummy
>     Executing 'grub-install dummy' failed.
> 
> Checking the syslog output on virtual console 4 shows a bunch of
> os-prober activity (as expected), then finally:
> 
>     Installing for x86_64-efi platform.
>     grub-install: error: failed to get canonical path of `/dev/nvme0n1p1`.
>     error: Running 'grub-install --force "dummy"' failed.
> 
> I assume that the "canonical path" it's looking for is a /dev/sda-type
> device name, but I have no idea how to assign one of those to an nvme
> drive.  (And I thought that kind of name was supposed to have been
> banished in favor of "predictable" names by now anyhow.)
> 
> Several of the old-style names are already in use; the installer is
> showing sda for the USB stick that the installation was booted from and
> sdb-sdi for eight large disks, but the operating system is to boot and
> run from the drives /dev/nvme0n1 and /dev/nvme1n1.
> 
> What do I need to do to get this working?
> 
> Also, would that solution also work for md devices as well as for nvme
> devices?  I had previously tried putting UEFI onto a RAID1 mirror
> between the two nvme drives, but got a similar error from grub about not
> being able to find the canonical path of /dev/md1.
> 
> -- 
> Dave Sherohman

-- 
Dave Sherohman