I can also confirm that preseed/url= still does not work correctly.
The line
url_location="${x#url=}"
should be changed to
url_location="${x#*url=}"
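The difference is easy to demonstrate in a shell one-liner. A minimal sketch, using a made-up kernel command-line word (the variable name `x` matches the script being patched; the URL is illustrative): `#` removes the shortest matching prefix, so the literal pattern `url=` fails to match a word that starts with `preseed/`, while `*url=` strips everything up to and including the first `url=`.

```shell
# Illustrative kernel command-line word; real values come from /proc/cmdline.
x="preseed/url=http://example.com/preseed.cfg"

# Anchored pattern: "url=" is not a prefix of $x, so nothing is stripped.
echo "${x#url=}"     # preseed/url=http://example.com/preseed.cfg

# Wildcard pattern: strips the shortest prefix ending in "url=".
echo "${x#*url=}"    # http://example.com/preseed.cfg
```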
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
Same problem here. The fix looks right to me.
--
https://bugs.launchpad.net/bugs/1948362
Title:
UnboundLocalError: local variable 't' referenced before assignment
** Tags added: impish
--
https://bugs.launchpad.net/bugs/1942624
Title:
NVME "can't change power state from D3Cold to D0 (config space
inaccessible)"
** Attachment added: "dmesg-good.txt"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1942624/+attachment/5545039/+files/dmesg-good.txt
--
https://bugs.launchpad.net/bugs/1942624
** Attachment added: "dmesg-bad.txt"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1942624/+attachment/5545038/+files/dmesg-bad.txt
--
https://bugs.launchpad.net/bugs/1942624
** Attachment added: "lshw.txt"
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1942624/+attachment/5545037/+files/lshw.txt
--
https://bugs.launchpad.net/bugs/1942624
Title:
I also encounter this bug in the impish 5.13 kernel. The machine boots,
but without one of the NVMe devices. This is on a very recent Razer
Blade 15 laptop (RZ09-0409x) with two NVMe drives, one with Windows and
one with Ubuntu.
Upstream kernel 5.15 did not have this issue. I was able to bisect
@Jakuro: This just tricks udpkg in the install environment into thinking
that it is already at 2.31-0ubuntu9.3 so it doesn't try to update, and
therefore doesn't fail. It's 9.* to catch 9 in the focal installer
and 9.2 in the focal-updates installer.
A simpler sed is probably sufficient, and
This introduced a bug in the debian-installer for focal (LP: #1926223)
While trying to update libc6, debian-installer will get the following
error
-/bin/sh: error loading shared libraries: __vdso_gettimeofday: invalid
mode for dlopen(): invalid argument
This behavior can be seen without using
This is caused by the libc update done in LP: #1914044
My workaround is to put this line in my early_command
sed -i -e '/Package: libc6-udeb/{N;N;s/Version: 2.31-0ubuntu9.*/Version: 2.31-0ubuntu9.3/}' /var/lib/dpkg/status
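The effect of that sed can be sketched against a made-up status fragment (the fragment below is illustrative; the real file is /var/lib/dpkg/status). The `N;N` commands pull the two lines following the `Package:` line into the pattern space, so the `s///` can rewrite the `Version:` field that appears two lines down.

```shell
# Build an illustrative status fragment in a temp file.
cat > /tmp/status-sample <<'EOF'
Package: libc6-udeb
Status: install ok installed
Version: 2.31-0ubuntu9.2
EOF

# Same sed as the workaround, pointed at the sample file:
# on the Package line, N;N appends the next two lines, then s///
# pins the recorded version to 2.31-0ubuntu9.3.
sed -i -e '/Package: libc6-udeb/{N;N;s/Version: 2.31-0ubuntu9.*/Version: 2.31-0ubuntu9.3/}' /tmp/status-sample

grep '^Version:' /tmp/status-sample   # Version: 2.31-0ubuntu9.3
```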
--
With 37-2ubuntu2.1
$ sudo grub-install
Installing for x86_64-efi platform.
grub-install: warning: Internal error.
grub-install: error: failed to register the EFI boot entry: Operation not
permitted.
And with 37-2ubuntu2.2
$ sudo grub-install
Installing for x86_64-efi platform.
Installation
This same bug now applies to the focal release as of the 37-2ubuntu2.1
package. However it does not segfault, the parser simply fails.
$ grub-install /dev/nvme0n1 --target=x86_64-efi --efi-directory=/boot/efi
Installing for x86_64-efi platform.
grub-install: warning: Internal error.
grub-install:
Attached lshw for the system I'm seeing this on. The root and boot
partitions are on an NVMe drive on an ASRock TRX40 Creator motherboard.
** Attachment added: "lshw.txt"
https://bugs.launchpad.net/ubuntu/+source/efivar/+bug/1904226/+attachment/5434191/+files/lshw.txt
--
Public bug reported:
When updating libefiboot1 from 37-2ubuntu2 to 37-2ubuntu2.1 on focal,
the following error now occurs.
$ grub-install /dev/nvme0n1 --target=x86_64-efi --efi-directory=/boot/efi
Installing for x86_64-efi platform.
grub-install: warning: Internal error.
grub-install: error:
Handled by #1867677
--
https://bugs.launchpad.net/bugs/1862888
Title:
Boot hang on ASUS WS X299 SAGE
libnvidia-common-### is an Architecture: all package. I'm not sure this
was the best choice for where to put it. Obviously it doesn't matter too
much since only an amd64 package is actually built right now.
--
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
--
https://bugs.launchpad.net/bugs/1862888
Title:
Boot hang on ASUS WS X299 SAGE
Public bug reported:
[Impact]
Upstream kernels since 5.3 fail to boot on the ASUS WS X299 SAGE
motherboard with firmware version 1201. This is the result of an
infinite loop while trying to parse a malformed ACPI table. It is
suspected to affect other ASUS motherboards, but I have no first
hand
Public bug reported:
The 440 driver series introduced libnvidia-allocator.so in both 32 bit
and 64 bit versions, which is not present in any of the built packages.
As far as I can tell, this isn't actually used by any of the other
libraries presently. But, that will probably change in the
I've been using this as a workaround.
import distutils.sysconfig
print(distutils.sysconfig.get_python_inc())
--
https://bugs.launchpad.net/bugs/1739628
Title:
sysconfig paths are
Tested friendly-recovery 0.2.39ubuntu0.18.10.1 on Cosmic.
Same procedure as Bionic. Was able to reach recovery menu even after
running set-default.
apt-cache policy friendly-recovery
friendly-recovery:
Installed: 0.2.39ubuntu0.18.10.1
Candidate: 0.2.39ubuntu0.18.10.1
Version table:
***
Tested friendly-recovery 0.2.39ubuntu0.19.04.1 on Disco.
Same procedure as Bionic. Was able to reach recovery menu even after
running set-default.
apt-cache policy friendly-recovery
friendly-recovery:
Installed: 0.2.39ubuntu0.19.04.1
Candidate: 0.2.39ubuntu0.19.04.1
Version table:
***
Tested friendly-recovery 0.2.31ubuntu2.1 on Xenial.
Same procedure as Bionic. Reached recovery menu even after running
set-default.
apt-cache policy friendly-recovery
friendly-recovery:
Installed: 0.2.31ubuntu2.1
Candidate: 0.2.31ubuntu2.1
Version table:
*** 0.2.31ubuntu2.1 500
Tested friendly-recovery 0.2.38ubuntu1.1 on Bionic.
Ran systemctl set-default multi-user.target
Rebooted and selected recovery mode from GRUB
Got to the recovery menu, as expected.
apt-cache policy friendly-recovery
friendly-recovery:
Installed: 0.2.38ubuntu1.1
Candidate: 0.2.38ubuntu1.1
Slightly confused about the procedure here. This bug was introduced in
Debian in 0.2.39 as a fix for LP #1766872. This is the current version
in cosmic and disco, and the bug was backported into xenial and bionic.
I guess this should be tagged as regression-update? But it should also
be fixed in
I encountered this problem trying to install 18.04.2 in QEMU.
--
https://bugs.launchpad.net/bugs/1752091
Title:
gvfsd-metadata[1703]: g_udev_device_has_property: assertion
Here's a patch with the above fix
** Patch added: "friendly-recovery-earlydir.patch"
https://bugs.launchpad.net/ubuntu/+source/friendly-recovery/+bug/1821252/+attachment/5254782/+files/friendly-recovery-earlydir.patch
--
Any chance this could land in the xenial HWE kernel as well?
--
https://bugs.launchpad.net/bugs/1815831
Title:
[ALSA] [PATCH] System76 darp5 and oryp5 fixups
Public bug reported:
Fresh Ubuntu 18.04.2 server install
Try to boot to recovery mode from GRUB. Works correctly.
Use systemctl to set a different default, say systemctl set-default
multi-user.target
Try to boot to recovery mode from GRUB. End up at getty and not the
recovery menu.
Delete
Public bug reported:
Sep 25 16:58:17 server nvidia-persistenced[773]: Started (773)
Sep 25 16:58:17 server nvidia-persistenced[771]: nvidia-persistenced failed to
initialize. Check syslog for more details.
Sep 25 16:58:17 server nvidia-persistenced[773]: Failed to open
libnvidia-cfg.so.1:
This also applies to the new Threadrippers (2990WX, etc.)
Recent commits
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/diff/drivers/hwmon/k10temp.c?id=484a84f25ca7817c3662001316ba7d1e06b74ae2
Worked for me.
System booted correctly after install. /etc/default/grub properly
configured.
dpkg --list | grep grub
ii  grub-common     2.02-2ubuntu8.1  amd64  GRand Unified Bootloader (common files)
ii  grub-efi-amd64
Public bug reported:
Regression caused by
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1778848
Steps to reproduce
1) Install ubuntu-18.04-desktop-amd64.iso in a VM using QEMU and OVMF
2) Reboot the VM
3) See GRUB shell instead of GDM
The system can be rescued by running
configfile
I've also experienced this bug. My workaround was to install resolvconf
as the oem user.
--
https://bugs.launchpad.net/bugs/1777900
Title:
oem-config breaks the systemd resolved link for
I'm seeing this doing an EFI OEM install on 16.04.4 server. I've
attached the relevant part of the install syslog.
The problem is exactly as described in the original post. It is
installing oem-config-debconf, which depends on ubiquity, which
recommends grub-pc | grub | grub-efi. As grub-efi is
@Alberto
Commenting out the PrimaryGPU line worked for me on a 4-GPU system I have.
--
https://bugs.launchpad.net/bugs/1756226
Title:
nvidia-driver-390 fails to start GUI
Public bug reported:
Newer versions of nvidia-modprobe create a device at /dev/nvidia-uvm-tools.
This was added in nvidia-modprobe 364.12:
https://github.com/NVIDIA/nvidia-modprobe/commit/e916feba7dbc362dbab9a6ec2081f9ae1049eb58
Missing this device causes issues with nvidia-docker, as seen at