** Description changed:

  SRU Justification
  
  [Impact]
  
  Secure boot instances of linux-azure require an EFI framebuffer in some
  cases in order for the VM to boot.
+ 
+ The issue was noticed in the Ubuntu 18.04 linux-azure kernel, but it
+ also exists in the latest mainline kernel. The issue happens when the
+ following conditions are met:
+ 
+ hyperv_pci is built into the kernel and hyperv_fb is not, which means
+ hyperv_pci loads before hyperv_fb.
+ CONFIG_FB_EFI is not defined, i.e., the efifb driver is not used.
+ 
+ Here is how the bug happens:
+ 
+ 1. The Linux VM starts, and vmbus_reserve_fb() reserves the VRAM
+ [base=0xf8000000, length=8MB].
+ 2. hyperv_pci loads, gets MMIO [base=0xf8800000, length=8KB] as the
+ bridge config window, and may get some other 64-bit MMIO ranges and
+ some 32-bit MMIO ranges (if needed).
+ 3. hyperv_fb loads, gets MMIO [base=0xf8000000, length=8MB or a
+ different length], and sets screen_info.lfb_base = 0.
+ 4. The VM panics.
+ 5. The kdump kernel starts to run, and vmbus_reserve_fb() does not
+ reserve [base=0xf8000000, length=8MB] because lfb_base == 0.
+ 6. hyperv_pci loads, gets [base=0xf8000000, length=8KB], and the host
+ PCI VSP driver rejects this address as the bridge config window.
+ 
+ The crux of the problem is that the Linux vmbus driver itself is unable
+ to detect the VRAM base/length (it appears a video BIOS call would be
+ needed to get this information, and such a call is inappropriate or
+ impossible in hv_vmbus), so it has to rely on screen_info.lfb_base
+ (which is set by grub or the kdump/kexec tool and can be reset to zero
+ by hyperv_fb/drm).
+ 
+ Solution: Enable CONFIG_FB_EFI=y
  
  [Test Case]
  
  Microsoft tested. This config is also enabled on the master branch.
  
  [Where things could go wrong]
  
  VMs on certain instance types could fail to boot.
  
  [Other Info]
  
  SF: #00327005

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-azure-5.13 in Ubuntu.
https://bugs.launchpad.net/bugs/1959216

Title:
  linux-azure: CONFIG_FB_EFI=y

Status in linux-azure package in Ubuntu:
  Fix Committed
Status in linux-azure-4.15 package in Ubuntu:
  Invalid
Status in linux-azure-5.11 package in Ubuntu:
  Invalid
Status in linux-azure-5.13 package in Ubuntu:
  Invalid
Status in linux-azure source package in Bionic:
  In Progress
Status in linux-azure-4.15 source package in Bionic:
  In Progress
Status in linux-azure-5.11 source package in Bionic:
  Invalid
Status in linux-azure-5.13 source package in Bionic:
  Invalid
Status in linux-azure source package in Focal:
  In Progress
Status in linux-azure-4.15 source package in Focal:
  Invalid
Status in linux-azure-5.11 source package in Focal:
  In Progress
Status in linux-azure-5.13 source package in Focal:
  In Progress
Status in linux-azure source package in Impish:
  In Progress
Status in linux-azure-4.15 source package in Impish:
  Invalid
Status in linux-azure-5.11 source package in Impish:
  Invalid
Status in linux-azure-5.13 source package in Impish:
  Invalid
Status in linux-azure source package in Jammy:
  Fix Committed
Status in linux-azure-4.15 source package in Jammy:
  Invalid
Status in linux-azure-5.11 source package in Jammy:
  Invalid
Status in linux-azure-5.13 source package in Jammy:
  Invalid

Bug description:
  SRU Justification

  [Impact]

  Secure boot instances of linux-azure require an EFI framebuffer in
  some cases in order for the VM to boot.

  The issue was noticed in the Ubuntu 18.04 linux-azure kernel, but it
  also exists in the latest mainline kernel. The issue happens when the
  following conditions are met:

  hyperv_pci is built into the kernel and hyperv_fb is not, which means
  hyperv_pci loads before hyperv_fb.
  CONFIG_FB_EFI is not defined, i.e., the efifb driver is not used.

  Here is how the bug happens:

  1. The Linux VM starts, and vmbus_reserve_fb() reserves the VRAM
  [base=0xf8000000, length=8MB].
  2. hyperv_pci loads, gets MMIO [base=0xf8800000, length=8KB] as the
  bridge config window, and may get some other 64-bit MMIO ranges and
  some 32-bit MMIO ranges (if needed).
  3. hyperv_fb loads, gets MMIO [base=0xf8000000, length=8MB or a
  different length], and sets screen_info.lfb_base = 0.
  4. The VM panics.
  5. The kdump kernel starts to run, and vmbus_reserve_fb() does not
  reserve [base=0xf8000000, length=8MB] because lfb_base == 0.
  6. hyperv_pci loads, gets [base=0xf8000000, length=8KB], and the host
  PCI VSP driver rejects this address as the bridge config window.

  The crux of the problem is that the Linux vmbus driver itself is unable
  to detect the VRAM base/length (it appears a video BIOS call would be
  needed to get this information, and such a call is inappropriate or
  impossible in hv_vmbus), so it has to rely on screen_info.lfb_base
  (which is set by grub or the kdump/kexec tool and can be reset to zero
  by hyperv_fb/drm).

  Solution: Enable CONFIG_FB_EFI=y
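
  The fix itself is a one-line kernel configuration change. A minimal
  config fragment is shown below; per its Kconfig entry, CONFIG_FB_EFI
  also requires CONFIG_FB and an EFI-enabled build (exact dependencies
  can vary by kernel version):

```
# Build the generic EFI framebuffer driver in, so it can claim the
# firmware framebuffer described by screen_info at boot.
CONFIG_FB=y
CONFIG_FB_EFI=y
```

  On a booted kernel, the setting can be checked in the packaged config,
  e.g. with grep CONFIG_FB_EFI /boot/config-$(uname -r).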

  [Test Case]

  Microsoft tested. This config is also enabled on the master branch.

  [Where things could go wrong]

  VMs on certain instance types could fail to boot.

  [Other Info]

  SF: #00327005


