> From: Michael Kelley <[email protected]>
> Sent: Thursday, April 23, 2026 10:40 AM
Sorry for the late response! I got sidetracked by something else.
> > If vmbus_reserve_fb() in the kdump kernel fails to properly reserve the
>
> This problem has wider scope than just kdump. Any kexec'ed kernel would see
> the same problem, though kdump is probably the most common case. But the
> discussion here, and the mention of kdump in the code comments, should be
> adjusted accordingly.
Agreed. I'll post v2, which will use "kdump/kexec".
> > framebuffer MMIO range due to a Gen2 VM's screen.lfb_base being zero [1],
> > there is an MMIO conflict between the drivers hyperv_drm and pci-hyperv.
>
> You describe an MMIO "conflict" without giving the details. Is that
> intentional to keep the commit message from being too long? It might be
Yes.
> helpful to future readers to say a little more about how PCI devices must not
> use MMIO space that the hypervisor has assigned to the frame buffer.
Will do.
> As you noted in the detailed discussion in the other email thread [2],
> there's a Gen1 VM case that this patch doesn't fix. For completeness,
> perhaps that case should be called out in this commit message.
Will do.
> > + /* Hyper-V CoCo guests do not have a framebuffer device. */
> > + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
> > + return;
>
> This test is testing feature "A" (mem encryption) in order to determine
> the presence of feature "B" (no framebuffer), because current
> configurations happen to always have "A" and "B" at the same time. But
> the linkage between the features is tenuous, and if configurations should
> change in the future, testing this way could be bogus. It works now, but I'm
> leery of depending on the linkage between "A" and "B".
>
> You could set up a "can_have_framebuffer" flag in ms_hyperv_init_platform()
> if running in a CVM, and test that flag here. But I'd suggest just dropping
> this optimization. CVMs are always Gen2 (and that's not going to change),
> so they have plenty of low mmio space.
This is not true on a lab host, e.g. I have a TDX VM on a lab host created
by these two commands (without the second command, Hyper-V won't allow
the TDX VM to start):
New-VM -Generation 2 -GuestStateIsolationType Tdx -Name $vmName
Disable-VMConsoleSupport -VMName $vmName
The low_mmio_base is still 4GB - 128MB (i.e. 0xf8000000). In this case, it's
not a good idea to try to reserve the 128MB:
1) The available low MMIO size is smaller than 128MB due to the vTPM
MMIO range.
2) Even if we can reserve the 109.25MB low MMIO range
[0xf8000000-0xfed3ffff], we may not want to do that, just in case
some assigned PCI device has 32-bit BARs that need low MMIO space.
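(For reference, the arithmetic behind the 109.25MB figure, assuming the
vTPM MMIO range starts at the usual TPM CRB address 0xfed40000:
0xfed40000 - 0xf8000000 = 0x06d40000 bytes = 109.25MB, which is less
than the 128MB we'd try to reserve.)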
So, IMO we need to keep the check:
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
+ return;
BTW, I think this may be a slightly better check here:
+ if (hv_is_isolation_supported())
+ return;
A CVM on Hyper-V won't start without running the command
Disable-VMConsoleSupport -VMName $vmName
IMO this is very unlikely to change in the future, because the Hyper-V
synthetic framebuffer VMBus device is not a trusted device for a CVM,
so there is no reason for Hyper-V to offer such a device to CVMs; even
if the host offers it, currently the guest hv_vmbus driver ignores it.
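I.e. the early return would look something like this (just a sketch; the
exact check and comment wording are to be finalized in v2):

static void __maybe_unused vmbus_reserve_fb(void)
{
	...
	/*
	 * Hyper-V isolated (CoCo) guests are not offered the synthetic
	 * framebuffer VMBus device, so don't reserve low MMIO for it.
	 */
	if (hv_is_isolation_supported())
		return;
	...
}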
When we assign a physical PCI GPU device to a CVM, I'm not sure whether
the GPU exposes a framebuffer or not. Even if it does, that's a completely
different scenario, and not reserving low MMIO for the "framebuffer" doesn't
affect it: I think hyperv_drm (or the deprecated hyperv_fb) is the only
driver that sets the fb_overlap_ok parameter of vmbus_allocate_mmio().
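For context, the hyperv_drm allocation looks roughly like this (quoting the
call from memory, so the exact arguments at the call site may differ; the
last parameter is fb_overlap_ok):

	ret = vmbus_allocate_mmio(&hv->mem, hdev, 0, -1, hv->fb_size,
				  0x100000, true /* fb_overlap_ok */);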
> And at the moment, CVMs don't
> support PCI devices,
This is not true: recently I created a "Standard DC16eds v6" TDX CVM
on Azure, and I did see two NVMe local temporary disks in "nvme list"
(here TDISP is not used). In 2023, we added the commit
2c6ba4216844 ("PCI: hv: Enable PCI pass-thru devices in Confidential VMs")
and I believe some users are running CVMs with GPUs.
> so can't encounter a conflict (though conceivably
Correct, since there is no legacy or synthetic framebuffer device for CVMs.
> some new flavor of CVM in the future could support PCI devices).
>
> > +
> > if (efi_enabled(EFI_BOOT)) {
> > /* Gen2 VM: get FB base from EFI framebuffer */
> > if (IS_ENABLED(CONFIG_SYSFB)) {
> > start = sysfb_primary_display.screen.lfb_base;
> > size = max_t(__u32, sysfb_primary_display.screen.lfb_size, 0x800000);
> > +
> > + low_mmio_base = hyperv_mmio->start;
> > + if (!low_mmio_base || low_mmio_base >= SZ_4G ||
> > + (start && start < low_mmio_base)) {
> > + pr_warn("Unexpected low mmio base
> 0x%pa\n", &low_mmio_base);
> > + } else {
> > + /*
> > + * If the kdump kernel's lfb_base is 0,
>
> As mentioned earlier, this case isn't just kdump kernels.
Yes, the first kernel also runs here with a non-zero 'start'.
>
> > + * fall back to the low mmio base.
> > + */
> > + if (!start)
> > + start = low_mmio_base;
> > + /*
> > + * Reserve half of the space below 4GB for high
> > + * resolutions, but cap the reservation to 128MB.
> > + */
> > + size = min((SZ_4G - start) / 2, SZ_128M);
> > + }
> > }
> > } else {
> > /* Gen1 VM: get FB base from PCI */
> > @@ -2433,6 +2457,8 @@ static void __maybe_unused vmbus_reserve_fb(void)
> > */
> > for (; !fb_mmio && (size >= 0x100000); size >>= 1)
> > fb_mmio = __request_region(hyperv_mmio, start, size, fb_mmio_name, 0);
>
> Just above this "for" loop, "start" is tested for 0. This patch eliminates
> the main reason start might be 0. But I guess it's still possible that the
> legacy PCI device BAR might return 0 for a Gen1 VM?
IMO the legacy PCI BAR's base in a Gen1 VM can't be 0.
> Or you might get 0 if the pr_warn() about low
> mmio base is triggered. But I'm thinking maybe a pr_warn() should be done if
> start is zero.
Ok, will add a pr_warn() here.
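I.e. something like this at the existing test (a sketch; the exact message
is TBD in v2):

	if (!start) {
		pr_warn("Unexpected zero fb base; skip reserving fb MMIO\n");
		return;
	}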
> > +
> > + pr_info("hv_mmio=%pR,%pR fb=%pR\n", hyperv_mmio,
> hyperv_mmio->sibling, fb_mmio);
>
> Outputting the above info is nice!
>
> Michael
Thanks for all the good input! Will post v2 for review.
Thanks,
Dexuan