This bug is awaiting verification that the linux-kvm/4.15.0-1102.104
kernel in -proposed solves the problem. Please test the kernel and
update this bug with the results. If the problem is solved, change the
tag 'verification-needed-bionic' to 'verification-done-bionic'. If the
problem still exists, change the tag 'verification-needed-bionic' to
'verification-failed-bionic'.

If verification is not done within 5 working days from today, this fix
will be dropped from the source code, and this bug will be closed.

See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on
how to enable and use -proposed. Thank you!
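
A minimal sketch of the verification steps, assuming the standard archive
mirror and the linux-image-4.15.0-1102-kvm binary package name derived
from the version above (the wiki page linked above is authoritative):

  # enable bionic-proposed (see EnableProposed for pinning advice)
  echo "deb http://archive.ubuntu.com/ubuntu bionic-proposed restricted main multiverse universe" | \
      sudo tee /etc/apt/sources.list.d/ubuntu-proposed.list
  sudo apt update
  sudo apt install linux-image-4.15.0-1102-kvm  # assumed package name
  sudo reboot                                   # then retest and set the tag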


** Tags added: verification-needed-bionic

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1946149

Title:
  Bionic/linux-aws Boot failure downgrading from Bionic/linux-aws-5.4 on
  r5.metal

Status in linux package in Ubuntu:
  Invalid
Status in linux source package in Bionic:
  Fix Released

Bug description:
  [ Impact ]
  The bionic 4.15 kernels are failing to boot on r5.metal instances on
  AWS. The default kernel is bionic/linux-aws-5.4 (5.4.0-1056-aws); when
  changing to bionic/linux-aws (4.15.0-1113-aws) or bionic/linux
  (4.15.0-160.168), the machine fails to boot the 4.15 kernel.

  This problem only appears on metal instances, which use NVMe instead
  of xvda devices.

  [ Fix ]
  It was discovered that, after reverting the following two commits from
  upstream stable, the 4.15 kernels boot again on the affected AWS metal
  instance:

  PCI/MSI: Enforce that MSI-X table entry is masked for update
  PCI/MSI: Enforce MSI[X] entry updates to be visible
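
  A hedged sketch of the revert in a 4.15 kernel tree; the commit hashes
  are not listed here and must be looked up locally:

    # find the backported commits by subject, then revert them, newest first
    git log --oneline --grep='PCI/MSI: Enforce'
    git revert <hash of "Enforce MSI[X] entry updates to be visible">
    git revert <hash of "Enforce that MSI-X table entry is masked for update">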

  [ Test Case ]
  Deploy an r5.metal instance on AWS with a bionic image, which should
  boot initially with bionic/linux-aws-5.4. Install bionic/linux or
  bionic/linux-aws (4.15 based) and reboot the system.
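
  A sketch of these steps, assuming a stock bionic AMI where the
  linux-aws metapackage is still 4.15 based:

    uname -r                    # expect 5.4.0-1056-aws on first boot
    sudo apt update
    sudo apt install linux-aws  # 4.15-based metapackage on bionic
    sudo reboot                 # pick the 4.15 kernel in GRUB if needed
    uname -r                    # expect 4.15.0-1113-aws on success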

  [ Where problems could occur ]
  These two commits are part of a larger patchset fixing PCI/MSI issues
  which was backported to some upstream stable releases. By reverting
  only part of the set we might end up with MSI issues that were not
  present before the whole set was applied. Regression potential can be
  minimized by testing the kernels with these two patches reverted on
  all available platforms.
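
  A quick, hedged sanity check on each platform after booting a kernel
  with the reverts:

    dmesg | grep -iE 'msi|nvme'      # look for MSI/MSI-X or NVMe errors
    grep -i nvme /proc/interrupts    # confirm NVMe interrupts are firing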

  [ Original Description ]
  When creating an r5.metal instance on AWS, the default kernel is
  bionic/linux-aws-5.4 (5.4.0-1056-aws); when changing to
  bionic/linux-aws (4.15.0-1113-aws), the machine fails to boot the 4.15
  kernel.

  If I remove these patches, the instance correctly boots the 4.15
  kernel:

  https://lists.ubuntu.com/archives/kernel-
  team/2021-September/123963.html

  With that being said, after successfully updating to the 4.15 kernel
  without those patches applied, I can then upgrade to a 4.15 kernel
  with the above patches included, and the instance will boot properly.

  This problem only appears on metal instances, which use NVMe instead
  of xvda devices.

  AWS instances also use the 'discard' mount option with ext4, so I
  thought there could be a race condition between ext4 discard and the
  journal flush. I removed 'discard' from the mount options and rebooted
  the 5.4 kernel prior to the 4.15 kernel installation, but the instance
  still wouldn't boot after installing the 4.15 kernel.
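
  A sketch of that experiment, assuming 'discard' appears as a
  comma-separated option on the root filesystem line in /etc/fstab:

    sudo sed -i 's/,discard//' /etc/fstab  # drop the discard mount option
    sudo reboot                            # boot 5.4 before installing 4.15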

  I have been unable to capture a stack trace using 'aws get-console-
  output'. After enabling kdump I was unable to replicate the failure,
  so there must be some sort of race with ext4 and/or nvme.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1946149/+subscriptions

